CIFellows 2020 For the Record

1. The Computing Innovation Fellows 2020 Story

As universities ceased in-person operations in March 2020 due to COVID-19, academic faculty searches were affected. Many universities moved to online interviews and some cancelled remaining interviews. Concerns about university budgets led to speculation that faculty hiring would be curtailed, possibly substantially. In this context of budget and hiring uncertainty, the Computing Innovation Fellows (CIF) 2020 planning process began on March 25, 2020, with the exchange of several emails among the Computing Research Association (CRA), its standing Computing Community Consortium (CCC) committee, and the National Science Foundation (NSF). We quickly decided to pursue a program modeled after the postdoctoral CIF programs of 2009, 2010, and 2011, but modified to reflect the differences between the “great recession” of that earlier period and the emerging COVID-19 pandemic, as well as changes reflecting new understanding of postdoc best practices.

We soon decided upon a joint effort between CRA leadership, represented by CRA Board Chair Ellen Zegura and CRA Executive Director Andrew (Andy) Bernat, and CRA’s CCC leadership, represented by CCC Chair Mark D. Hill, CCC Vice Chair Elizabeth (Liz) Bradley, and CCC Director Ann Schwartz Drobnis. Eventually, this group became co-Principal Investigators, with Ellen Zegura as the official PI, for a funding submission to NSF. Table 1, “Key People Involved,” provides more information on the PIs and the members of the other committees discussed below.

To better understand and document the emerging impact of COVID-19 on academic hiring in computing, we initiated a survey by CRA’s internal Center for Evaluating the Research Pipeline (CERP), which ran from March 27 to April 1, 2020. Although the survey was run in a time of great uncertainty, the pandemic was already predicted to have a substantial impact on academic hiring. We summarize the survey results in Appendix B.

In addition, the CCC had been tracking how the original CIF program positively impacted the careers of the original Fellows from 2009, 2010 and 2011. We updated that data and constructed slides presenting individual stories to remind everyone that these efforts are about people, not just data. To this end, Appendix C presents seven success stories from the original CIFellow cohorts.

We constructed an initial vision for CIF 2020, in the form of a Concept Paper, augmented it with the above survey data and success stories, and sent the materials to NSF on April 6, 2020. We include this Concept Paper as Appendix A. We used the document as the basis for discussion with NSF and other stakeholders, producing a second concept paper on April 10, 2020 (Appendix D). Following additional discussion and effort, on April 22, the PIs submitted a proposal on behalf of CRA to NSF CISE (Computing and Information Science and Engineering) for $14M to support 49 Fellows. NSF funded the award on May 14, 2020 (See Appendix E for full Proposal).

As the PIs recognized that broad perspectives often lead to better processes and outcomes, we decided to form a CIFellows 2020 Steering Committee shortly after submitting the initial proposal to NSF (i.e., early May). It included academic co-PIs Bradley, Hill, and Zegura, augmented by three other respected individuals not currently associated with CRA but with experience in the previous CIF program. Consulting with others, we chose Aruna Balasubramanian (Stony Brook University), an original CI Fellow; Stefan Savage (UC San Diego), an original CIF mentor; and Anita Jones (University of Virginia), an original CI Fellows advisor and author of a 2009 CRA postdoc mentoring best practices memo. The Steering Committee also included the CRA Executive Director and CCC Director as ex officio members. The Steering Committee met weekly through the CIF advertisement and selection period to develop and review processes.

With NSF’s concurrence, we announced the CIF 2020 program on May 14, prior to completion of award contracting. The announcement text, given in Appendix G, went out via the CRA Bulletin, the CCC Blog, and directed email to CRA members and other societies, who were asked to share it.

We then had discussions with the NSF CISE Education and Workforce Development (EWF) cluster about a possible supplement to the grant to specifically target fellows whose research focuses on Broadening Participation in Computing, Computer Science Education, and/or promoting diversity, provided that their applications were as strong as those of the broader cohort. On May 28, NSF funded a supplement for $2.8M to support up to ten additional fellows (See Appendix F for the Supplement).

We announced CIF 2020 with a preliminary announcement on May 14, 2020 (Appendix G) and followed with an official announcement on May 20, 2020 (Appendix H).  NSF CISE sent out a mailing to the CISE-ANNOUNCE list on May 22, 2020 (See Appendix I). Announcements went out on the CRA blog and CCC blog.  We also explicitly forwarded the information to sister societies: ACM, IEEE-Computing, USENIX, SIAM, AAAI, ADSA, ASA, CASC, ECEDHA, ASEE, iSchools, and CSEd Leaders.

We held a community webinar on May 26 with 550 participants, covering an overview of the program and answers to FAQs already received, and took questions in real time from the audience. A link to the recording and presentation can be found in Appendix J.

Appendix K provides the CIF 2020 application. Most notably, it included a 2-page research proposal, a 1-page fellowship plan, a letter of recommendation from the applicant’s current research supervisor, a letter of support from the proposed mentor, an additional letter of recommendation, a 2-page academic CV, and an application information checklist. Please see the web page’s “Final Application Guidelines” in Appendix L for more details.

We decided to allow applicants to apply twice, with two different mentors, because we planned to restrict to two the number of selected Fellows going to the same destination institution. We knew some destinations would be very popular, so a second mentor gave applicants a chance to diversify. Each application was reviewed separately, with at most one applicant-mentor pair to be awarded.

While the original CIF program provided an official applicant-mentor matching site, we chose not to do so because feedback indicated that only a small percentage of matches were established using that site. Nonetheless we received multiple requests for matching assistance, and the community created their own grassroots site. We decided to point people to the community-developed matching site, without endorsement, in the CIFellows 2020 online FAQ. 

We selected an aggressive due date of June 12 out of respect for the fact that both selected and unselected applicants would need to make plans for the next academic year. The deadline was extended by five days due to the Black Lives Matter protests following the death of George Floyd and their potential impact on applicants.

We decided to evaluate applicants using a two-level technical program committee model, similar to the paper selection process of many computing conferences. The top-level Selection Committee was responsible for finding reviewers for applications and representing applications in the selection meeting. A challenge here was sizing the Selection Committee to be large enough to competently cover all areas of computing yet small enough to facilitate frank discussion. To this end, we chose eight members to augment co-chairs Bradley, Hill, and Zegura, for a total of eleven members. We chose members from diverse computing subareas, with some areas (HCI and AI/ML) having two members due to an (expected) large number of applications in those areas. In some cases, Selection Committee members covered both their primary research area and another (neighboring) subarea. Selection Committee members were expected to be well respected in the field and known for following through on service responsibilities. We list names in Table 1. To mitigate conflicts of interest, Selection Committee members could not be the Ph.D. advisor or postdoc mentor of any CIF 2020 applicant. We did not hold reviewers to that same standard, as doing so would have made it very difficult to find enough qualified reviewers.

Each Selection Committee member was in charge of identifying and recruiting reviewers for their subarea(s) (see Appendix M for the reviewer invitation). We utilized 270 reviewers, each reviewing 2-5 applications, to produce two reviews per application. Reviewers were not allowed to review fewer than two applications, in order to ensure some calibration. Like Selection Committee members, potential reviewers were expected to be well respected in the field and known for following through on service responsibilities, but, unlike the more senior Selection Committee members, they could and did vary in seniority. Reviewers were allowed to be mentors or advisors of a CIF applicant, with a self-identified conflict for those applications. Self-identification was necessary because the reviewing system we used had extremely limited support for conflict identification.

The assignment of specific reviewers to specific applications was done either by hand by the Selection Committee member (if desired) or by a pseudo-random process supported by the software we used (Cadmium). This software prevented the assignment of reviewers to applications with institutional conflicts, but did not automatically identify other conflicts of interest (COIs). These COIs were either recognized by the Selection Committee member or noted by the reviewer, aborting the review and causing a reassignment to a different reviewer. In some, but not all, cases where an applicant applied twice with two different mentors (as was permitted), the Selection Committee member assigned a common reviewer to the application pair so that the reviewer could provide more information on which looked stronger. We settled on two reviews per application due to the tight two-week timetable. The community supplied more than 1000 reviews in this time period. Appendices N and O provide the Review Form and the Charge to Reviewers, respectively.
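
The following is a minimal sketch, under our own assumptions, of how such a pseudo-random assignment with institutional-conflict avoidance might be implemented. It is not Cadmium's actual algorithm; the record format, seed, and load limits are illustrative.

    import random

    REVIEWS_PER_APP = 2   # two reviews per application
    MAX_LOAD = 5          # each reviewer handled 2-5 applications

    def assign_reviewers(applications, reviewers, seed=0):
        """applications: list of (app_id, applicant_institution) tuples
           reviewers:    list of (reviewer_id, reviewer_institution) tuples
           Returns a dict mapping app_id -> list of reviewer_ids."""
        rng = random.Random(seed)
        load = {r_id: 0 for r_id, _ in reviewers}
        assignment = {}
        for app_id, app_inst in applications:
            # Eligible reviewers: no institutional conflict and spare capacity.
            pool = [r_id for r_id, r_inst in reviewers
                    if r_inst != app_inst and load[r_id] < MAX_LOAD]
            if len(pool) < REVIEWS_PER_APP:
                raise ValueError(f"not enough conflict-free reviewers for {app_id}")
            chosen = rng.sample(pool, REVIEWS_PER_APP)
            for r_id in chosen:
                load[r_id] += 1
            assignment[app_id] = chosen
        # Note: a separate balancing pass would be needed to guarantee that
        # every reviewer receives at least two applications for calibration.
        return assignment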

The PIs, Steering Committee, and Selection Committee extensively discussed processes for CIF selection. We decided that reviewers would review for application content only; they would submit their review and be done. It was then up to the Selection Committee to run a holistic process, described below, to select the Fellows that best strengthened the overall computing community. This enabled the consideration of demographic factors and of distribution across subareas of computing. We followed the previous CIF program’s lead by distributing awards among institutions using two restrictions. First, MAX2-FROM stated that at most two selected Fellows could come from the same institution. Second, MAX2-TO stated that at most two selected Fellows could go to the same institution. These two restrictions were announced to applicants.

Below we describe the process that the Selection Committee followed before and at the four-hour virtual selection committee meeting held on July 17, 2020. See the full CIFellows 2020 Selection process in Appendix P.

  • All candidates received two numerical scores: a TECHNICAL score, which was the average of their two technical review scores (up to 20 points), and a FLAG score, with one possible point for each of gender, ethnicity, citizenship, and disability (up to 4 points). When reviewing applicants, Selection Committee members considered both the TOTAL score (up to 24 points) and the TECHNICAL score component individually. A minimal sketch of this scoring appears after this list.
  • We documented and monitored progress with a single web-based spreadsheet and two charts, one for the demographics of those we had accepted and one for subarea coverage. Table 2 gives the column names of the spreadsheet along with each column’s purpose and values.
  • Two days before the selection committee meeting, each Selection Committee member was asked to propose some “pre-accepts” in the subject areas they were responsible for. These choices were subject to the MAX2 rules within their areas, but no MAX2 coordination was attempted across areas; not surprisingly, there were some MAX2 conflicts among the pre-accepts. The pre-accepts made up one-third of the eventual accepts, distributed across subareas of computing in proportion to the number of proposals in each subarea. The goal here was to exercise processes and use the main meeting time as effectively as possible to discuss the other two-thirds of the slots.
  • Because we had additional funding from EWF for up to 10 Fellows, we looked separately at any applicant who had an EWF flag (CS Ed or BPC as a research area, not necessarily the primary one, or coming from or going to a Minority-Serving Institution). EWF had 5 pre-accepts.
  • We started the selection committee meeting by discussing and resolving the pre-accept MAX2 conflicts. There were five of these, involving two institutions (MIT and CMU).
  • After that, applicants were discussed in descending total score order.
  • Selection Committee members with personal conflicts left the conversation and were invited back after the discussion was finished. We allowed institutional conflicts to remain in the room but stay silent, to reduce the disruption caused by lots of entering and leaving. Each Selection Committee member presented applicants from their subarea.
  • We periodically monitored demographic data (final version shown in Section 3) but there were no quotas.
  • We also periodically monitored subarea data (final version shown in Section 3), with a target that each subarea receive at least two-thirds of the selections it would have received had the selected proposals been divided exactly in proportion to proposal pressure.
  • We had planned to construct an explicit waitlist in case any selected Fellows declined, but instead opted to let Hill and Zegura select substitute Fellows as needed, anticipating that there would be few. The plan was to identify substitutes using the MAX2 slots freed by declining applicants (if possible), but this was harder than expected, as the MAX2 (from/to) bookkeeping was more involved than simply replacing someone going to Institution X.
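
Below is a minimal sketch of the scoring arithmetic and the MAX2 check described above; the function and field names are illustrative rather than taken from the actual selection spreadsheet.

    from collections import Counter

    def technical_score(review1, review2):
        # Average of the two technical review scores (each up to 20 points).
        return (review1 + review2) / 2.0

    def flag_score(gender, ethnicity, citizenship, disability):
        # One point per demographic flag, up to 4 points.
        return sum([gender, ethnicity, citizenship, disability])

    def total_score(review1, review2, **flags):
        # TOTAL score, up to 24 points.
        return technical_score(review1, review2) + flag_score(**flags)

    def max2_violations(slate, institution_of):
        # Return institutions appearing more than twice in a candidate slate;
        # `institution_of` selects either the "from" or the "to" institution.
        counts = Counter(institution_of(candidate) for candidate in slate)
        return [inst for inst, n in counts.items() if n > 2]

    # Example: reviews of 18 and 16 plus two flags give (18 + 16) / 2 + 2 = 19.
    print(total_score(18, 16, gender=True, ethnicity=True,
                      citizenship=False, disability=False))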

After the Selection Committee process was complete on July 17, we submitted a slate of 59 candidates to the Steering Committee for review. The slate was approved on July 20, and we submitted similar material to NSF later the same day. NSF approved on July 22, and we sent letters to selected fellows (Appendix Q) and mentors (Appendix R), as well as to non-selected applicants (Appendix S). Within a one-week deadline (with a possible extension in special cases), 54 fellows accepted, 2 asked for an extension with good reasons, and 3 declined. Appendices T and U provide the forms filled out by accepting fellows and mentors. We invited three more applicants to fill the declined slots, and all three accepted by August 1. One of the Fellows who had asked for an extension also had to decline, so a fourth invitation was made and accepted on August 7. Section 3 discusses the selected fellows in more detail.

The selection process was supported by various software. Broad announcements used the CRA Bulletin, CCC Blog, and Mail Merges. Applications and reviewing were done through Cadmium. The final selection virtual meeting took place on Zoom with the data in Google Sheets and Google Docs. Most of the correspondence was drafted using Google Docs. Final letters to applicants and mentors used Mail Merges. Acceptances were collected through Wufoo Forms.  Correspondence and collection on subawards was handled through email and Google Docs.  Section 4 includes a discussion of the strengths and weaknesses of Cadmium (the application / review software selected).

Given the complex optimization considerations involved in selecting CIFellows, Selection Committee member Emery Berger volunteered to write an automated solver to propose candidate “solutions.” The script, written in Python, uses Microsoft’s Z3 solver to generate sets of CIF assignments that meet all stated constraints while maximizing several metrics: the review-plus-diversity scores of each candidate, overall geographic diversity, and first-choice preferences. We had the solver generate 100 solutions and report, for each applicant, the fraction of solutions in which that candidate was selected. We did not use the solver to select CIFellows; however, several Selection Committee members used the solver’s recommendations to identify specific applicants for additional attention. The solver proposed different solutions than the human selection committee process, in part because some aspects of our holistic process were not encoded in the solver.
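
To illustrate the approach, here is a minimal sketch of how such a Z3-based selection solver could be structured. It is not the actual script: the applicant data, cohort size, and single score-based objective are illustrative, and the real solver also weighed geographic diversity and first-choice preferences.

    from collections import Counter
    from z3 import Optimize, Bool, If, Sum, And, Not, sat, is_true

    # Hypothetical applicant records: (id, total_score, from_institution, to_institution).
    applicants = [
        ("A1", 22, "Inst1", "Inst2"),
        ("A2", 19, "Inst1", "Inst3"),
        ("A3", 21, "Inst2", "Inst1"),
        ("A4", 20, "Inst3", "Inst2"),
    ]
    NUM_FELLOWS = 2          # illustrative; CIF 2020 selected 59
    MAX_PER_INSTITUTION = 2  # the MAX2-FROM / MAX2-TO rules

    def solve_once(blocked):
        opt = Optimize()
        pick = {a[0]: Bool(f"pick_{a[0]}") for a in applicants}
        # Select exactly NUM_FELLOWS applicants.
        opt.add(Sum([If(v, 1, 0) for v in pick.values()]) == NUM_FELLOWS)
        # MAX2-FROM (index 2) and MAX2-TO (index 3): at most two per institution.
        for idx in (2, 3):
            for inst in {a[idx] for a in applicants}:
                opt.add(Sum([If(pick[a[0]], 1, 0)
                             for a in applicants if a[idx] == inst])
                        <= MAX_PER_INSTITUTION)
        # Exclude previously seen solutions.
        for clause in blocked:
            opt.add(clause)
        # Objective: maximize the cohort's total (technical + flag) score.
        opt.maximize(Sum([If(pick[a[0]], a[1], 0) for a in applicants]))
        if opt.check() != sat:
            return None
        model = opt.model()
        return frozenset(aid for aid, v in pick.items()
                         if is_true(model.evaluate(v, model_completion=True)))

    # Generate up to 100 distinct solutions, then report how often each
    # applicant appears, as the CIF solver did.
    counts, blocked, num_solutions = Counter(), [], 0
    for _ in range(100):
        solution = solve_once(blocked)
        if solution is None:
            break
        num_solutions += 1
        counts.update(solution)
        blocked.append(Not(And([Bool(f"pick_{aid}") for aid in solution])))

    for aid, n in counts.items():
        print(aid, n / num_solutions)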

Funding for each CIFellow required a subaward between CRA and the host institution. Each subaward is unique, but the subawards were kept as similar as possible to facilitate batch processing by NSF, whose rules require NSF approval of all subawards. Because the subawardees are not known until the review and selection process is complete, this step can only happen late in the overall process, near the point at which funding needs to flow, so the goal was for NSF to review and approve the awards as quickly as possible.

A subaward template was drafted over two weeks in June 2020 and sent to NSF for preliminary review on June 19. Comments were received from NSF on July 1, and final template modifications were made over the subsequent two days. The subaward template included a sample two-year budget with a set salary of $75,000 per year, a sample institutional fringe rate, and an indirect cost (IDC) rate capped at 35% per NSF. Any mandatory computing fees could be added. If an institution had a mandated, non-discretionary postdoc salary increase for year two, the increase could also be added into the subaward budget. CRA included the relevant fellow’s research plan from their CIFellow application. Mentors were also required to submit a postdoctoral mentoring plan in NSF-approved format to include in the final version of the subaward.
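
As an illustration of the budget arithmetic, the sketch below computes a sample annual subaward amount. The fringe rate and the assumption that indirect costs apply to all direct costs are hypothetical; the $75,000 salary and the 35% IDC cap come from the template.

    SALARY = 75_000     # set postdoc salary per year, from the template
    IDC_CAP = 0.35      # indirect cost rate capped at 35% per NSF

    def annual_subaward_budget(fringe_rate, idc_rate, other_direct=0.0):
        """Sample annual budget: direct costs (salary + fringe + any mandatory
        fees) plus indirect costs at the institution's rate, capped at 35%.
        The IDC base is assumed to be total direct costs for illustration."""
        direct = SALARY * (1 + fringe_rate) + other_direct
        indirect = direct * min(idc_rate, IDC_CAP)
        return direct + indirect

    # Example with a hypothetical 25% fringe rate and a 55% negotiated IDC
    # rate (capped to 35%): prints 126562.5
    print(annual_subaward_budget(fringe_rate=0.25, idc_rate=0.55))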

Subaward documents were sent to each mentor and their designated grant administrator by CRA’s Reimbursement Specialist and Grant Specialist. Although CIFellows were not signing the documents, including them in the email exchange proved helpful. Host institutions were asked to fill in contact information (Administrative Contact, Project Director (PI), Financial Contact, and Authorized Individual) and to update the sample budget to reflect host institution rates. Subawards with September start dates were sent out starting July 27, and those with January start dates went out the following week. Documents were sent only if both the fellow and the mentor had officially accepted their CIFellow offer.

With 59 subawards, roughly split between September 1, 2020 and January 1, 2021 start dates, tracking the status of each subaward was critical. Key tracking elements included: subaward number, mentor name, fellow name, date the postdoc mentoring plan was submitted, date the institution signed (partially executed copy), date CRA signed (fully executed copy), total requested budget, any post-NSF-approval requested modifications, and batch number. Keeping notes on the latest status of each subaward helped manage the high email volume, and individual Dropbox folders for each fellow kept track of the various documents and subaward versions.

Negotiations with host institutions raised a variety of questions, but the two main categories were small, non-substantive modifications to the subaward language and clarifications on the capped IDC rate. For institutions that requested minor modifications to the subaward text, CRA asked to postpone any changes until after NSF reviewed and approved the agreements, because unexpected variations among the awards threatened to jeopardize their rapid batch approval. Once an agreement was fully approved, non-substantive changes were made directly between CRA and the host institution without further NSF approval.

The request to cap the subaward IDC rate at 35% prompted strong pushback from some institutions. Host institutions have federally negotiated indirect cost rates that are generally higher than 35%, and federal regulation 2 CFR §200.414 requires all federal awarding agencies to accept negotiated rates. An exception can be made “only when required by Federal statute or regulation, or when approved by a Federal awarding agency head or delegate based on documented justification…” To satisfy 2 CFR §200.414, NSF worked to provide satisfactory documentation. On its website, under the NSF Cost Sharing Policy and the subheading “Individual NSF-funded Projects with Mandatory Cost Sharing,” NSF listed the text “Computing Innovation Fellows Project 2020, Award No. 2030859, Prime Awardee: Computing Research Association (subawardees indirect cost recovery limited to 35%).” This official documentation provided the basis host institutions needed to accept the capped rate.

To streamline the batch review process by NSF, the awards had to be assembled in a particular order. NSF requested a single PDF with a spreadsheet summary at the front and the subawards listed after it. The spreadsheet included: Subaward Name, Project Title, Institution, CIFellow Name, Total Amount, and Subaward Period. Batch 1, consisting of 18 subawards with September 1 start dates, was compiled and sent to NSF’s CIFellow Program Manager for initial screening on September 1. We decided to focus on September agreements first, as they were the most time-sensitive. A second batch of 5 was sent for NSF approval on September 25. An additional four September agreements were still being finalized and would wait for a subsequent batch.

NSF’s CIFellow Program Manager found some formatting issues with Batch 1 on September 2, and an updated file was sent back to her on September 3. An additional correction was requested on September 7, and on September 8 the final Batch 1 was sent to NSF for batch processing review. On September 23, CRA received word that DGA had given a “green light” and requested formal submission through FastLane. CRA’s subaward PI (the Executive Director) submitted them into the online system (having to enter through SPO). On September 30, NSF granted approval of the Batch 1 subawards, and notice was sent to all parties on October 1.

Batch 2, consisting of 5 additional September subawards, was sent to NSF’s CIFellow Program Manager on September 25 for initial screening. CRA did not receive any correspondence on the pre-approval for Batch 2. To move the process along, Batch 2 was recreated with another award attached, bringing it to 6 agreements. Batch 3 was also created, with 22 awards with January start dates, and both batches were sent for approval on October 13 through the online system. Batch 4 contained the remaining 13 awards and was sent through FastLane in November.

After subaward agreement approval was granted, the process moved into award management. Fellows were required to submit a quarterly report using a form included in the subaward. The form categories included: Subaward Number, Date, CIFellow Name, Mentor Name, Pass-Through Entity, Institution/Organization, Progress on Research Plan, Progress on Mentoring Plan, Publications Submitted, and Any Additional Information to Report. Mentors needed to work with their Fellow to comment on the Mentoring Plan component. Completed quarterly reports were submitted to CRA’s Administrative Contact (Reimbursement Specialist) and were paired with an institution invoice. Each quarterly report must be submitted within thirty days of the end of the quarter, and invoices from the institutions cannot be processed until the quarterly report is submitted. Once reviewed and processed, a quarterly payment was sent to the institution.

Additional expense funding was made available to the Fellows. Each Fellow could submit a reimbursement form for no more than $1,500 per year of the two-year fellowship. This additional expense fund was meant to help Fellows with research equipment, moving expenses, work-related travel expenses, and conference registration fees. Because the reimbursement amount was capped at $1,500, some Fellows did not receive the total amount they requested, but all who have submitted have stated that it has been incredibly helpful during these uncertain times.


Table 1. Key People Involved

Name | Position / Institution | Co-Principal Investigator | Steering Committee | Selection Committee
Aruna Balasubramanian | Stony Brook University | | member |
Emery Berger | University of Massachusetts Amherst | | | member
Andrew Bernat | CRA Executive Director | co-PI | ex officio |
Elizabeth Bradley | CCC Chair & University of Colorado, Boulder | co-PI | co-chair | co-chair
Ann Schwartz Drobnis | CRA’s CCC Director | co-PI | ex officio |
Mark D. Hill | CCC Chair (Emeritus) & Univ. of Wisconsin | co-PI | co-chair | co-chair
Jessica Hodgins | Carnegie Mellon University | | | member
Anita Jones | CCC Founder & University of Virginia (Emeritus) | | member |
Leslie Pack Kaelbling | Massachusetts Institute of Technology | | | member
Sampath Kannan | University of Pennsylvania | | | member
Benjamin Kuipers | University of Michigan | | | member
Richard E. Ladner | University of Washington (Emeritus) | | | member
Jelani Nelson | University of California, Berkeley | | | member
Stefan Savage | University of California, San Diego | | member |
Katie Siek | CCC Council Member & Indiana University | | | member
Ellen Zegura | CRA Board Chair & Georgia Tech | PI | co-chair | co-chair

Table 2. Headings for Selection Spreadsheet

Column Name | Column Purpose/Values
ID | Submission ID from Cadmium
Submitter First Name and Submitter Last Name | Two separate columns
Research Area 1 | Research area 1 that applicants identified on the application (reviewer pool group)
Primary Affiliation | Where the applicant received their PhD or is completing a postdoc
Gender Flag | Point for diversity if the applicant was female
Race/Ethnicity Flag | Point for diversity if the applicant is not white or Asian
Citizenship Flag | Point if the applicant is a citizen or permanent resident
Disability Flag | Point if the applicant marked a disability on their application
Flag Score | Total of diversity points/flags (4 possible points)
CS Education Flag | Flag if the application was CS Ed (qualifies for EWF funding)
BPC Flag | Flag if the application had Broadening Participation elements (qualifies for EWF funding)
MSI Flag | Flag if the applicant is coming from or going to an MSI (based on Carnegie listings)
Geography Flag | NSF EPSCoR states (used as a guide)
Mentor Organization | Applicant’s host institution
Double App | If the applicant submitted two applications, the other application’s ID
App Preference | Whether the applicant preferred one application over the other
Submission Reviews AVG (Total Technical Score) | Average of the sum of reviewer scores over five questions (5-20)
Technical Score + Flag Score | Total technical score plus diversity points
Decision | Marks pre-accepts, whether the applicant’s other application was accepted, whether the applicant was declined due to MAX2, pre-declines, and final accepts and declines
Count | 1 if chosen, 0 if not (summed for a running total of chosen applicants at the bottom)
Comments | General comments