Your correspondent has been meaning to write a post explaining a little of what goes on behind the scenes in the FT program, showing what participants see as they go through the process, and giving a couple of tips for writing successful FT proposals. Here, finally, is that post.
1) The process
Our proposal deadlines are at midnight HST on the last day of each month, at which point the FT software gathers the proposals from the server and creates a web page showing what has been received. In the morning, members of the FT support team check that each proposal meets the criteria to be a valid FT proposal: that its PI is from a participating partner, the target RAs are within the acceptable range, etc. We also manually check for conflicts with current queue programs, which wouldn’t have been found by the archive search carried out by the Phase I Tool before the proposal was submitted. We tick a box to include or exclude each proposal (as shown by the screenshot below), and then click a button to generate The Matrix…
The Matrix shows us which proposals have been assigned to each reviewer. As far as possible, the software matches proposals to reviewers based on the keywords in the proposals; the match scores are the numbers shown in the cells of the Matrix. The software also looks for obvious potential conflicts, making sure that reviewers aren’t assigned proposals on which they are PI or co-I (red cells) and flagging proposals with targets in common (yellow cells). By clicking on cells in the Matrix, the FT team can select, deselect, or veto any of the assignments. This allows us to ensure that, say, reviewers aren’t assigned proposals for very similar observations of the same non-sidereal object, which wouldn’t always be caught by the automatic target comparisons. In general, though, we interfere as little as possible in this step.
Once each reviewer has been assigned their proposals (up to 8 each), we simply tell the software to send out standard emails to each reviewer giving them instructions about how to proceed. This usually happens within about 12 hours of the deadline. When a reviewer clicks on the link in their email, they are first asked to set a password. The next step is to agree that they will behave ethically and follow the rules of the program:
Having agreed to these terms, the reviewer is then shown the list of proposals that have been selected for them – title, investigator list, and abstract – and asked to declare whether or not they are able to provide an unbiased review of each one. When a proposal is declined, a replacement is offered (provided sufficient proposals have been received).
Only at this stage do the reviewers proceed to the review form and gain access to the full proposals. The review form presents the assessment criteria and requests a numerical score and brief written review of each proposal. It also asks the reviewer to rate their own knowledge of each proposal's subject area on a scale of 0-2. This is currently just one of the tools we use to evaluate the program; it is not used to weight the reviewers’ scores.
The reviews must be completed by the 14th of the month, at which point access to the forms is removed and the team is notified that the review cycle has closed. The Matrix page now shows us a list of proposals ranked by their mean score, with various tabs in which we can see individual reviews and other information. We figure out (currently using our brains and a whiteboard) which programs are likely to fit in the available time, always trying to stick as closely as possible to the reviewers’ ranking. Any proposal must obtain a mean score of at least 2 (on a 0-4 scale) to be considered, no matter how much time is available.
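The brains-and-whiteboard step above amounts to something like the following: rank by mean score, drop anything below the 2.0 threshold, and fill the available time in rank order. This is a simplified sketch of one plausible heuristic, not the actual procedure – in practice the team weighs other factors when a highly-ranked program doesn't fit.

```python
def select_programs(proposals, available_hours):
    """proposals: list of (id, reviewer_scores, hours_requested).
    Rank by mean score (0-4 scale), require a mean of at least 2,
    and accept programs greedily in rank order while time remains."""
    ranked = sorted(
        ((pid, sum(scores) / len(scores), hrs)
         for pid, scores, hrs in proposals),
        key=lambda t: t[1], reverse=True)
    accepted, remaining = [], available_hours
    for pid, mean, hrs in ranked:
        if mean < 2.0:
            continue  # below threshold, no matter how much time is left
        if hrs <= remaining:
            accepted.append(pid)
            remaining -= hrs
    return accepted

progs = [("FT-1", [3, 4, 3], 5.0),
         ("FT-2", [2, 2, 1], 4.0),   # mean 1.67: excluded by threshold
         ("FT-3", [3, 2, 3], 6.0)]   # mean 2.67, but only 3h remain
print(select_programs(progs, 8.0))
# → ['FT-1']
```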
Over the next couple of days we check that the provisionally-accepted programs are technically feasible. If any are not, we re-examine the list and replace the problem proposals as appropriate. Once we have the final list, we mark each proposal as accepted or rejected, upload our technical assessments, and tell the software to notify the directorate of the selection. While we await their formal approval, we manually create skeleton programs in the Observing Tool. By the 21st of the month everything is in place to generate the emails that notify the PIs and reviewers of the outcome of the proposal cycle. Everyone receives their mean score, all the individual reviews of their proposals, and the technical assessment if one was done.
The PI immediately has access to their program and can begin to set up their observations. Starting in 2015B we will be merging FT programs into the queue rather than using separate FT nights. FT observations will then be available for scheduling as soon as they have been prepared, which is hopefully an incentive to set them up ASAP…
2) How to write a successful FT proposal
This really comes down to just a couple of things:
- First, make sure your science is accessible to a broad audience. Although the keyword-matching algorithm tries to match reviewers to proposals in similar or overlapping areas, the pool of potential reviewers is small enough that your idea is going to be judged by people outside your field. It’s really helpful if the quasar person can understand why your brown dwarf observation is worth doing.
- Second, make the case for why your proposal is a good candidate for the FT program. We at Gemini believe that FT time is for any kind of good science, regardless of whether the target is fading or about to disappear behind the sun. This is clearly stated on the FT web pages and review form, and we now include this in our initial instructions to reviewers as well. However, we have observed a tendency for reviewers to give some weight to the “urgency” of a program and its need for rapid response. If you simply have a good idea that you want to pursue right now, then we’d encourage you to explain in the proposal that this is super exciting and you’ll get on the data right away.
3) One final thing
We received 7 proposals at the July deadline and the cycle is running as usual. We now eagerly await your August proposals.