CIFellows 2020 For the Record



3. Reflections and Refinements

This section provides reflections and refinements: what we think worked well and areas for improvement. These opinions were shared by many, but not necessarily all, of the co-Principal Investigators.

On the whole, we are very pleased with the process. In our opinion, there were no major flaws that affected the selection of excellent Fellows, even with the tight schedule. There were eight weeks from the first emails to a funded grant and then eight more weeks until emails were sent to inform applicants and mentors of the result.  

We wish to express great appreciation to the NSF for funding this program and operating with the urgency it demanded. Without NSF’s commitment, this would not have happened as rapidly, or at all.


We recommend that any future CIF program start earlier if possible, to allow less compressed time for proposal development, proposal review, application development, selection process development, advertising, applicant-mentor matching, application and letter-of-recommendation preparation, reviewer selection, reviewing, selection, communicating outcomes, and negotiating subawards in time for the selected CIFellows and their mentors to plan for a Fall start. We were able to run at this speed because of extreme dedication by CRA staff and a considerable amount of concentrated time invested by the PIs. We were also significantly aided by access to the materials and individuals involved in the prior CIF program. We are “paying it forward” with this document, to provide similar support for future offerings.

We also assert that the structure of PIs, Steering Committee, and Selection Committee worked well; the size of each group was appropriate (large enough for broad views and yet small enough to facilitate discussion). Moreover, we were fortunate that all group members were gifted and conscientious. Indeed, each group rapidly developed the rapport and collaboration norms necessary to work well together. Considering that this all took place over videoconferencing and that some of those involved had never met in person, this was remarkable. It certainly helped that everyone felt the importance of the initiative, in no small part because of how successful the earlier program was.

While the application basically worked well, there are some areas to consider improving. We should explicitly ask for a proposal title, mentor name, mentor/host institution, and a mentoring plan (not embedded in the mentor letter). There should also be a cleaner way to link two applications from the same person (if that feature is allowed in the future), and to let the candidate explicitly indicate a preference between the two. We may want to consider the potential flags at the time the application is created and make sure we collect the information necessary to populate the flags automatically. Letters of recommendation were an issue, as the system only allowed them to be entered via a text box (rather than uploaded). Since our community is used to sending and uploading PDFs, we allowed people to do that, with CRA staff managing the process of getting these PDFs into the system. This created a lot of back-end work and opportunities for error. We should collect not only Primary Affiliation, but also PhD-granting institution, and whether the applicant is a graduate student or a current postdoc (if future programs allow both).

We did not provide a platform to match applicants and mentors, but we received multiple requests to do so, and the community spontaneously created some of this support infrastructure. This decision could be revisited and supported with a simple form allowing mentors and mentees to register their interest and contact information. Support for matching likely benefits those with less well-developed professional networks and may lead to a more diverse pool of applicants.

We thought that the decision to let applicants apply with a mentor (or two) worked well, as it encouraged substantive dialog early.

Our decision to obtain two reviews per application (instead of more) was probably necessary, given that managing the resulting 1,000+ reviews in two weeks was already a challenge. Even with more time, it is not clear that additional reviews would have been valuable; the top candidates had uniformly high scores.

To encourage reviewers to be frank on a tight schedule, we informed them that their reviews would be used for selection only. A consequence of this is that applicants received no feedback. While probably necessary this year, it may be worth reevaluating this decision in future years, as many applicants requested feedback after they were not selected (see Appendix V for our response to feedback requests). We may also want to include something in the initial message to reviewers about the expected time commitment and workload, as many reviewers inquired about that, prompting many individual emails.

The review form also worked well. One potential issue is that reviewers were asked to provide a rating on each of five questions on a 1-4 scale:

  • 4: Fund; application is excellent in nearly all respects,
  • 3: Likely fund; application is very good,
  • 2: Fund if room; application has merit, and
  • 1: Do not fund; application has serious deficits.

A large number of the best applicants—more than could be selected—earned 4s on four or all five questions. Future programs may wish to provide more rating choices to better discriminate among applicants, provided that answers can be calibrated sufficiently. For a process as selective as CIF, there is value in separating the truly exceptional from the very good. Might seven be a better number of choices? Perhaps consulting with the experts at CERP makes the most sense.
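To make the ceiling effect concrete, the following is a minimal sketch (a hypothetical Python simulation with made-up latent quality and noise parameters, not actual CIF reviewer data) of how a 1-4 scale leaves more strong applications tied at the maximum total than a finer 1-7 scale would:

    import random

    random.seed(0)

    def count_perfect_totals(scale_max, applicants=500, questions=5, noise=0.08):
        """Count simulated applicants whose total across `questions` ratings
        reaches the maximum possible score on a 1..scale_max scale."""
        perfect = 0
        for _ in range(applicants):
            quality = random.random()  # latent applicant quality in [0, 1)
            total = 0
            for _ in range(questions):
                observed = min(max(quality + random.gauss(0, noise), 0.0), 0.999)
                total += int(observed * scale_max) + 1  # bin onto 1..scale_max
            perfect += (total == scale_max * questions)
        return perfect

    for scale_max in (4, 7):
        print(f"1-{scale_max} scale: {count_perfect_totals(scale_max)} of 500 simulated applicants earn the maximum total")

Under these assumptions, the coarser scale produces noticeably more perfect totals, which is the separation problem described above; the actual benefit of a finer scale would depend on how well reviewers can calibrate the additional levels.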

As discussed above, the final selection process was supported by various software tools, most of which worked well. The most challenging was Cadmium, the key system supporting the process. We would not use Cadmium again; instead, we would be inclined to stand up an instance of one of the conference review systems used in computer science.

Cadmium was recommended due to its previous use by CRA-WP and its existing relationship with CRA. However, CRA-WP’s use differs from CIF’s, principally because CRA-WP’s equivalent of CIF’s Selection Committee does all the reviewing itself rather than sending applications to external reviewers. Externally, Cadmium was satisfactory for applicants, mentors, letter writers, and reviewers. One significant issue is that Cadmium did not identify or track most conflicts of interest, so we had to rely on reviewers self-identifying COIs. Internally, Cadmium was less satisfactory for the Selection Committee and especially for support staff. Many actions in Cadmium require many clicks, e.g., to view application components. Referees had to be asked out of band, added to Cadmium, and then assigned to applications. Many COIs had to be dealt with after review assignments were made, on an already short timetable.

Another major issue was Cadmium’s communication component. While in theory it was convenient to be able to send mass emails to applicants and reviewers from the system, we found that many emails were blocked by certain mailboxes and blacklisted by a number of university email systems. This caused problems not only with reviewers and applicants, but also with referees. Emails asking for letters of recommendation were automatically sent out and collected through the system, but many letter writers reported never receiving the request, causing confusion, last-minute letter writing, and, in some cases, late applications being declined.

We lacked confidence that Cadmium would support all the questions and views we might want for the final selection process. With considerable staff effort, we were able to export key information into Google sheets and documents that served the process well. Of course, it is easier to find flaws in the software one is using than in the software one is considering using.

Subaward indirect costs for CIF 2020 were limited to 35% (https://www.nsf.gov/bfa/dias/policy/):

Computing Innovation Fellows Project 2020, Award No. 2030859, Prime Awardee: Computing Research Association (subawardees indirect cost recovery limited to 35%).

This is a larger number than the 25% used for the original CIF program, which generated no pushback. The current rate of 35% generated pushback—but ultimately acceptance—from several institutions, perhaps reflecting how COVID-19 is hurting university finances worse than the Great Recession did. Insisting on a lower overhead rate does allow CIF to fund more fellows, but these are difficult times for universities.
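As a rough illustration of that trade-off, here is a minimal back-of-the-envelope sketch; the total pool and per-fellow direct cost below are assumed round numbers for illustration only, not actual CIF 2020 budget figures:

    def fellows_supported(total_pool, direct_per_fellow, indirect_rate):
        """Fellowships a fixed pool can fund when each subaward recovers
        indirect costs at `indirect_rate` on top of direct costs."""
        return int(total_pool // (direct_per_fellow * (1 + indirect_rate)))

    POOL = 20_000_000   # hypothetical total subaward pool, in dollars
    DIRECT = 150_000    # hypothetical direct cost per fellow-year, in dollars

    for rate in (0.25, 0.35):
        print(f"{rate:.0%} indirect rate: about {fellows_supported(POOL, DIRECT, rate)} fellow-years")

With these illustrative numbers, the 25% cap supports roughly eight more fellow-years than the 35% cap from the same pool, which is the tension noted above.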