Selection Criteria and Scoring System

Selection Criteria

Proposals reviewed by external panelists are subject to a single-stage review; proposals reviewed by the virtual topical panels are subject to a two-stage review process: 1) preliminary grading and triage; and 2) the review meeting. In all cases, panelists use the same scoring system.

Proposals will be assessed on an absolute scale against three primary criteria described in the Call for Proposals with a separate grade given for each:

  • The scientific merit of the program and its contribution to advancement of knowledge – how does the proposed investigation impact our knowledge within the specific sub-field?
  • The program’s impact for astronomy in general – are there implications for other science areas and/or insights into larger-scale questions?
  • A demonstration that the unique capabilities of HST are required to achieve the science goals – suitability for HST; how much of an advantage does HST data offer over other facilities? This applies to both GO and AR proposals; Theory proposals should have broad applicability to HST observational programs.

AR and GO calibration proposals are required to provide an analysis plan; reviewers should also consider the strength of the analysis plan in assessing the first two criteria.

Descriptions of additional criteria by type of proposal are given in the Proposal Selection Procedures section of the Call for Proposals.

While reviewing the proposals, if you notice any issues with the proposal template formatting, page-limit violations, or resource requests, please contact SPG to discuss before downgrading that proposal.

Scoring System

Preliminary Grades and External Grading

The full set of criteria to apply in assessing different types of proposals are described in the Proposal Selection Procedures section of the Call for Proposals. Those criteria should be taken into account when grading each proposal.

The preliminary scoring and the grades from external panelists should be on an absolute scale within the framework set by the following criteria.

| Grade | Impact within the sub-field | Out-of-field impact | Suitability |
| --- | --- | --- | --- |
| 1 | Potential for transformative results. | Transformative implications for one or more other sub-fields. | Science goals can only be achieved by observational or theoretical analysis of HST data. |
| 2 | Potential for major advancement. | Major implications for one or more other sub-fields. | Analysis of HST data offers major advantages over data from other facilities. |
| 3 | Potential for moderate advancement. | Some implications for one or more other sub-fields. | Analysis of HST data offers some advantages over data from other facilities. |
| 4 | Potential for minor advancement. | Minor impacts on other sub-fields. | Analysis of HST data offers minor advantages over data from other facilities. |
| 5 | Limited potential for advancing the field. | Little or no impact on other sub-fields. | Analysis of HST data offers little or no advantage over other facilities, or the advantages of analyzing HST data are unclear. |

Each topical panel covers a very broad science category, and each science category contains a number of narrower sub-fields. Ideally, a proposal will have impact both within its own narrow sub-field and in other sub-fields within the science category or in other science categories. We suggest that reviewers consider the following guidelines in assessing these criteria:

  • Impact within the sub-field: Will the proposed program improve our understanding of the objects, classes of object, or specialist topics under study in the proposal? By how much? How relevant is the proposed work to the immediate sub-field of the proposal?
  • Out-of-field impact: Will the proposed program improve our understanding of other objects, classes of object, or specialist topics in science areas beyond the immediate sub-field of the proposal? How broad and how significant is this new understanding?

Reviewers may submit grades in decimal form, but please limit to one decimal place.

The following examples aim to give guidance in applying these rubrics to grading proposals; reviewers should use their best judgement.

Case 1: UV observations of gas in young stars

| Criterion | Assessment | Grade |
| --- | --- | --- |
| In field | Highly significant improvement in our understanding of gas flow in young stars. | 1-2 |
| Out of field | Potential for significant changes in our understanding of gas flows in a wide range of other environments. | 1-2 |
| Suitability | UV observations are essential to achieve the science goals and can only be acquired through HST observations. | 1 |


Case 2: Analysis of archival near-IR imaging of a nearby galaxy for stellar population investigations

| Criterion | Assessment | Grade |
| --- | --- | --- |
| In field | Major advance in understanding stellar populations in that galaxy. | 2 |
| Out of field | Some implications for stellar populations and stellar evolution in other galaxies. | 3 |
| Suitability | The increased spatial resolution offered by HST provides some advantages over other facilities in addressing the science goals. The analysis offers significant improvements and/or additional value with respect to the original use of the data. | 2-3 |


Case 3: Optical/near-IR spectroscopy of an emission-line galaxy

| Criterion | Assessment | Grade |
| --- | --- | --- |
| In field | Moderate increase in understanding of the prevalence of star formation in that galaxy. | 3 |
| Out of field | Minor implications for the properties of other galactic systems, but no wider impact. | 4 |
| Suitability | Only optical data are required for the science case; limited gains in performance at near-IR wavelengths as compared with larger ground-based facilities. | 4-5 |


Case 4: Developing theoretical tools to characterize gas and dust in Galactic star-forming regions

| Criterion | Assessment | Grade |
| --- | --- | --- |
| In field | Potential significant increase in understanding of chemical composition in dusty environments. | 1-2 |
| Out of field | Results have significant implications for interpreting dust composition in other galaxies. | 2 |
| Suitability | The theoretical analysis will enable and support additional HST observational programs. | 2 |


The grades from each virtual panelist are normalized by SPIRIT to a mean value of 3.0 and a standard deviation of 1.0. A reviewer's overall grade for a proposal is the average of their three normalized criterion grades.
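The normalization step can be sketched as a standard rescaling. SPIRIT's internal algorithm is not documented here, so the z-score formula below (and the function name) is an assumption about the approach, not the actual implementation:

```python
import statistics

def normalize_grades(grades, target_mean=3.0, target_sd=1.0):
    """Rescale one reviewer's raw grades to mean 3.0 and SD 1.0.

    Hypothetical sketch: a z-score rescaling is assumed; SPIRIT's
    actual formula may differ.
    """
    mean = statistics.mean(grades)
    sd = statistics.pstdev(grades)
    if sd == 0:
        # A reviewer who gave identical grades carries no ranking
        # information; map everything to the target mean.
        return [target_mean] * len(grades)
    return [target_mean + target_sd * (g - mean) / sd for g in grades]
```

Note that the rescaling preserves each reviewer's rank ordering; it only removes differences in how harshly or generously different reviewers use the scale.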

The preliminary grade for each proposal is determined by averaging the overall grades from the reviewers. The preliminary grades are used to create a rank order list for each panel, and the lowest-ranked proposals (typically ~40%) are triaged, i.e., removed from further discussion.
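A minimal sketch of the averaging, ranking, and triage steps described above; the function name and the fixed 40% fraction are illustrative assumptions (the actual triage fraction is only "typically ~40%"):

```python
def rank_and_triage(proposal_grades, triage_fraction=0.40):
    """Average each proposal's normalized reviewer grades (lower is
    better), build the rank order list, and flag the bottom ~40%
    for triage. Illustrative sketch only."""
    averages = {pid: sum(g) / len(g) for pid, g in proposal_grades.items()}
    ranked = sorted(averages, key=averages.get)  # best (lowest) grade first
    n_triaged = round(len(ranked) * triage_fraction)
    cut = len(ranked) - n_triaged
    return ranked[:cut], ranked[cut:]
```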

Virtual Panel Meetings

In the virtual panel discussion, the panelists should use the following expanded scale for their grading. This takes account of the elimination of the triaged proposals and provides more dynamic range to grade the remaining proposals.

| Grade | Impact within the sub-field | Out-of-field impact | Suitability |
| --- | --- | --- | --- |
| 1 | Potential for transformative results | Transformative implications for one or more other sub-fields | Science goals can only be achieved with HST |
| 3 | Potential for major advancement | Major implications for one or more other sub-fields | Major advantages in using HST over other facilities |
| 5 | Potential for moderate advancement | Some implications for one or more other sub-fields | Some advantages in using HST over other facilities |

Reviewers may submit grades in decimal form but please limit to one decimal place.

As with the preliminary grades, the grades are normalized. The overall grade for each reviewer is the straight average of the three individual criterion grades. The grade for each proposal is determined by averaging the overall grades from the reviewers. Once grading is complete for all proposals, the rank order list is created.

Final Ranking of Proposals

Each panel has a nominal allocation of N orbits, which will be communicated by SPG. The number of orbits is different for each panel and the allocations are determined by the relative proposal pressure and orbit pressure across the panels. Panel members should review the rank order list to determine whether the highly-ranked proposals above the nominal cutoff line ("the 1N line") provide an appropriate science balance for the panel. There may be a consensus that some science areas have been unduly favored. There may also be cases where the chair identifies highly ranked proposals that have a science overlap with proposals highly ranked by another panel. In those cases, the panel members can make a consensus decision to re-rank (but not re-grade) proposals to provide an appropriate reflection of the science topics reviewed by the panel. Whenever two proposals are being discussed together, panelists conflicted on either proposal may not be present for the discussion.

In re-ranking proposals, panels may directly compare proposals, irrespective of their relative ranking, that are judged to have very similar science, to the extent that the panel may recommend executing only one of them. Panelists conflicted with either proposal may not vote on the re-ranking. If the panel chooses to recommend only one proposal, the other is moved to the 2N line and its Proposal Feedback Comments are adjusted to reflect the discussion.

In all other cases, panels may only compare and vote on adjacent proposals. This minimizes conflicts. Thus, if a panelist advocates raising a proposal in position 14, it must be compared and voted against proposal #13; panelists with conflicts on #14 or #13 may not vote. If it is raised to position 13, it can then be compared against proposal #12, with panelists conflicted on either proposal again abstaining. And so forth until the ranking is established. Note that the exact ranking matters most close to the 1N line for each panel.
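The adjacent-comparison procedure resembles a single bubble-sort pass driven by panel votes. The sketch below is purely illustrative: `panel_approves_swap` is a hypothetical stand-in for the panel vote on two adjacent proposals (with conflicted panelists abstaining), not part of any real review tool:

```python
def promote(rank_order, pid, panel_approves_swap):
    """Raise a proposal one adjacent position at a time, as in the
    pairwise voting described above.

    `panel_approves_swap(lower, upper)` is a hypothetical callback
    returning True if the panel votes to rank `lower` above `upper`.
    """
    order = list(rank_order)  # do not mutate the caller's list
    i = order.index(pid)
    while i > 0 and panel_approves_swap(order[i], order[i - 1]):
        order[i - 1], order[i] = order[i], order[i - 1]
        i -= 1
    return order
```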

Each panel is allocated a separate pool of orbits for Medium proposals based on Medium orbit pressure. Only Medium proposals above the 1N line are eligible for this allotment. Any orbits from the Medium pool that are not allocated may not be used for Small proposals, and panels should not inflate the rank of a Medium proposal in order to use up this allocation. If a panel has more Medium proposals above the 1N line than can be filled by the Medium orbit allocation, remaining orbits may be allocated from the panel's allocation of orbits for Small proposals, but this is not required. Similarly, the panel can keep additional Medium proposals in their ranked list, but this is not required. Any Mediums kept in the ranked list should have their ranks adjusted by pairwise comparison with other proposals; however, in making those comparisons the panelists are free to use valid considerations beyond the individual merits of each proposal, such as ensuring a scientific balance of the approved program.

Panelists are asked to rank proposals all the way down to twice the orbit allocation (the 2N line), in case any changes need to be made to the top-ranked proposals (for example, if an approved proposal in another panel proposes the same observations, making the proposal in this panel a duplication). Panelists are also asked to set a do-not-approve line, if they deem it appropriate.

Complexity

The complexity estimate is NOT required for Cycle 29 proposals.


