
View Full Version : Deciding what Hubble should do next year - the blog



ngc3314
2006-Apr-13, 03:42 AM
For those who have ever wondered how they decide what Hubble looks at - here's a report from this year's proposal review meeting, recently concluded at a scenic hotel next to Baltimore-Washington Airport. Looking back at my curriculum vitae, I find that this is my fifth time to be involved, going all the way back to the bad old days of aberrated images in 1991. I like the new images better.

Although this is a very personal judgment, I found this year's crop of proposals (more particularly, the set we looked at dealing with galaxies near and far) to be at least as exciting as the last couple of rounds I worked on. Although the capabilities of the observatory are somewhat circumscribed by the loss of STIS and the move to a longer-lasting pointing technique, Hubble can now reap the harvest of objects discovered by the Chandra and Spitzer observatories, the GALEX ultraviolet sky survey, and increasingly sophisticated winnowing of the results of the Sloan Digital Sky Survey. And, of course, we can now follow up things known only from previous rounds of observations using Hubble itself.

In the two months or so leading up to the review, the proposals were distributed to reviewers and the Institute prepared for the onslaught. This year, each of us got a CDROM instead of huge stacks of paper (although some parts of the review still seem to work better if there is paper in front of you). The reviewers are asked to remain anonymous until after the review results are announced. This prevents subtle forms of jury tampering such as gifting them with envelopes stuffed full of preprints of someone's latest unpublished results, or hearing subtle reminders of a colleague's brilliance over coffee at a meeting.

Avoiding conflicts of interest in proposal review is a Big Deal for several reasons. First, it's just basic honesty, and good science, to secure the most unbiased review possible. On top of that, there are government regulations with legal teeth that apply, since US-based investigators receive grant funding to cover expenses of analyzing the data (which may pay grad students, postdocs, summer faculty salary, meeting travel...). And since it is not unknown for disappointed proposers to complain to members of Congress that they have been unfairly treated, many problems can be avoided by worrying beforehand. The Hubble operation has been under a public microscope for a long time and, in addition to making reviewers with personal or institutional conflicts of interest recuse themselves, tries hard to minimize the number of times that this happens. Much of this review structure serves well enough that it has been taken over for Chandra, and in relevant part by other facilities as well.

Panel members, chairs, and additional at-large members of the final Telescope Allocation Committee (TAC) are selected. As they read proposals, there are some cases in which technical issues are called to the attention of instrument scientists. Can this observation really be done in 45 minutes for something so faint in that filter? Is it a problem to get a deep ACS galaxy image looking so near a bright star? How fast can the camera cycle between pictures at adjacent places in different filters? Panel members don't have the authority to decide whether something is feasible or not; this decision gets handed off to people who work with operating the instruments constantly.

The European Space Agency has been an important partner throughout the whole Hubble project (and JWST as well). Under agreement with NASA, ESA scientists are supposed to get at least 15% of the observing time on Hubble. The system tracks how many investigators on each proposal work at institutions in ESA member states; so far, getting the 15% has never been a problem, and has never required juggling the peer-review results. This adds a prominent European presence to the review as well; three of nine members on our panel flew the Atlantic for this.

The first stage of this review includes eleven panels - one for solar-system proposals, and five each for galactic and extragalactic topics. Topics with many proposals are twinned, with the topic split between two panels so that conflicts of interest (for which panelists have to leave the room, or for more indirect cases, refrain from voting) are minimized.

This year there were 733 proposals, requesting almost 15,000 orbits of telescope use (the "orbit" being the normal unit of telescope allocation - Hubble can look at most areas of the sky for about 50 minutes during each 94-minute orbit). That means that only 1 in 5 can be scheduled; we're guaranteed to go home disappointed that some interesting projects simply won't fit. These numbers, as daunting as they are to proposers, are down a bit from the peak values each time new instruments are installed in the telescope - in some years, there have been as many as 1298 proposals. The main reason the number was a bit low the last two years was the failure of the STIS electronics, which left Hubble without a dedicated spectrograph and all the scientific flexibility that went with it. Any spectral observations it does now have to use one of the cameras with a diffraction grating or thin prism inserted, smearing every object into a spectrum and letting them overlap and interfere where they may. The available instruments now are the venerable Wide Field Planetary Camera 2 (WFPC2), installed in the first refurbishment at the end of 1993, the Near Infrared Camera and Multiobject Spectrometer (NICMOS), and Advanced Camera for Surveys (ACS). The Fine-Guidance Sensors can also be used for science measurements, but our panels didn't see any such programs.
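As a rough sketch of the arithmetic above (the 733 proposals, ~15,000 requested orbits, the 1-in-5 success rate, and the ~50 usable minutes per 94-minute orbit are all from this post; the derived numbers are just illustrative back-of-the-envelope values):

```python
# Back-of-the-envelope oversubscription arithmetic for this review cycle.
# Figures from the post: 733 proposals requesting ~15,000 orbits,
# with only about 1 in 5 requested orbits schedulable.
requested_orbits = 15_000
oversubscription = 5                 # "only 1 in 5 can be scheduled"
available_orbits = requested_orbits / oversubscription

# Hubble can look at most areas of the sky for about 50 minutes
# during each 94-minute orbit (Earth blocks the target the rest
# of the time), so the usable time behind those orbits is roughly:
usable_minutes = available_orbits * 50
print(f"~{available_orbits:.0f} orbits available, "
      f"~{usable_minutes / 60:.0f} hours actually on target")
```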

Each proposal is assigned a primary and a secondary reviewer, who are charged with presenting it to the rest of the panel (who should all have read it as well). They should summarize the science goals and specific approach, highlight special features or difficulties, and generally lead its discussion. The reviewers send in preliminary ratings about a week in advance - first, this makes sure they actually read the proposals ahead of time, and second, it allows the discussion to be more focused. After (as best I can tell) a certain amount of trepidation, the Institute found that the discussion could be made more productive by introducing a form of triage (something since adopted by other facilities as well). The idea is that, when the oversubscription is this high, there is no point in spending a lot of time detailing the shortcomings of a proposal which ranks in the bottom 25% of the preliminary grades, and which no panel member wishes to bring up for discussion (we did see a few proposals pulled out of the initial triage pile for that reason). This made sense once several years' worth of data were in hand to show that proposals starting this low essentially never made the cut to be scheduled. Reviewers could easily spend time discussing the flaws of a poorly-rated proposal in great detail while contributing nothing to the outcome of proposals actually scheduled.

Following discussion of the proposals, new numerical grades are worked out by averaging individual grades via secret ballot (the point of that being to avoid anyone being influenced by how they see someone else vote). Our panel reviewed about 70 proposals, which meant that each one brought up could be discussed for no more than 15 minutes (which is one more reason that it was important to read them ahead of time!). Very few proposals were not scientifically worth doing - there used to be a few clown proposals that had no reason to be submitted, showing basic misunderstanding of the instrument or a scientific point, but they have been worked out of the system. We usually have to make an effort to downgrade proposals for even fairly minor shortcomings, simply to be able to make some distinctions at all. We grouped proposals for similar science to be talked about in sequence, since there are often several proposals that are so similar that they should be compared in some detail.
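The two-stage process described above - triage on preliminary grades, then final ranking by averaging secret-ballot grades - can be sketched like this (the 25% triage cutoff and the averaging are from the post; the proposal IDs, grades, and the `rescued` set are entirely made-up illustrations, not real review data):

```python
# Sketch of the two-stage grading described above (hypothetical data).
# Stage 1: triage - proposals in the bottom 25% of preliminary grades
# are set aside unless a panel member asks to discuss one anyway.
prelim = {                      # proposal id -> mean preliminary grade
    "P01": 4.5, "P02": 3.9, "P03": 3.1, "P04": 2.2,
    "P05": 4.1, "P06": 1.8, "P07": 3.6, "P08": 2.9,
}
rescued = {"P04"}               # pulled out of the triage pile by a panelist

ranked = sorted(prelim, key=prelim.get, reverse=True)
cutoff = int(len(ranked) * 0.75)            # keep the top 75%
discussed = set(ranked[:cutoff]) | rescued

# Stage 2: after discussion, each panelist grades by secret ballot
# and the final grade is the average (only two shown here).
ballots = {"P01": [4.6, 4.4, 4.7], "P05": [4.0, 4.3, 3.8]}
final = {p: sum(g) / len(g) for p, g in ballots.items()}
print(sorted(final, key=final.get, reverse=True))
```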

The proposal reviews used to be at the Space Telescope Science Institute in Baltimore, bringing everything but operations to a screeching halt for a week. With broadband connections routinely supported, the review can now be more conveniently hosted at an airport hotel, with instrument scientists only a call or email away for details and documents (and we can also quickly look at archived data ourselves if needed). There are downsides to everyone being connected; last time I was here, the committee chairwoman had to call down panelists for watching CNN on their laptops (because that was the first day of ground fighting in the Iraq war). With everyone taking the hotel shuttle bus, we could play a round of "spot the astronomer" - my wife has always claimed that you can spot astronomers in an airport, even if they are well dressed. It worked pretty well this time, but then I already knew most of the folks I spotted.

[This is set up like a blog, but is showing up several weeks after the event for several reasons. First, I had to verify with the STScI folks that it was appropriately purged of individually identifiable information. Then it took me a while to fill out some of the details, and finally I had some discussion with the BA on how to post this. Each day of deliberations gets its own post.]

ngc3314
2006-Apr-13, 03:44 AM
The meeting begins:

Day 1

The review starts with general information on the state of the telescope and instruments, recent results, and prospects for one more servicing mission (which would not affect the year we're working on). STScI director Matt Mountain reminded us of the value of the observatory, especially in this era of budgetary earthquakes at NASA, and showed off some science highlights (including one on extrasolar planets which has yet to appear in print...)

Duccio Macchetto, in charge of science selection as well as representing ESA, set out the parameters of the review - how many orbits were requested versus available, the default distribution among panels, how time for large proposals is discussed among the panels and an overall allocation committee.

After a quick break, the eleven panels got themselves organized. Three of our nine panelists were from ESA countries. I wound up as assistant chair almost by default; my travel plans would let me stay around if needed to finish the paperwork while the chair went on to meetings of the overall TAC on day 3. I was happy that this in fact took no extra time.

Without being specific about proposals (and in fact I had this checked by one of the STScI management to keep me out of trouble), I saved a few quotations each day to capture some of the flavor of discussion:

"You use what you have and look at what you can see"

"Please don't make us fish for your meanings!"

"Like the science, but it's a bad way to do it" - this is probably the most common complaint. The science goal is timely and important, but the specifics of the observations proposed could involve more telling objects, be better connected to other data, or be set up in a more informative way. We're specifically not supposed to rewrite a proposal, as tempting as that might be.

We sometimes needed to juggle discussion of proposals so as not to sit waiting on technical review comments.

With eleven panels plus support astronomers from STScI, this review would make a substantial astronomical meeting in its own right. Dinner the first night was full of greeting people we only see at such meetings.

ngc3314
2006-Apr-13, 03:47 AM
The deliberations continue:

Day 2

We finished discussion of our assigned proposals, as well as some of the larger proposals which are really decided by the final allocation committee but for which the panel's opinion is solicited. We realized at this point that we had to pick reviewers for these proposals and adjourned briefly to reread them.

For today's proposals, it proved very helpful to have a Spitzer expert on the panel, as well as having Chandra and Spitzer staff on call. We also started getting very specific in criticizing, for example, inclusion of spiral galaxies in a study of elliptical galaxies. In context, it didn't seem too nerdish to hear an NGC number followed by a burst of laughter.

Even before finishing this round of reviews, we looked at the average grades to make sure we weren't accidentally concentrating on particular topics. (The answer, without even working hard at it, seemed to be "no").

Today's memorable lines:

"A reason more to like the proposal less"

"I cannot think of a better use of HST time"

"If you like it, we can arrange something"

"Theorists are not very good at explaining why what they do is relevant to anyone"

"Full of flowing phrases, how beautiful the data would look..."

Over and over, we always wonder whether we should penalize a good idea (perhaps a better idea than the proposer realizes) for being poorly explained. If we don't know all the proposers personally, it's not quite fair to have to guess whether they know better but didn't word something well, much less to balance that against the value of having data in the archive and available to other researchers even if its first analysis may not be as complete as it could be. On top of this, there is certainly a range in scientific style, and it may not always be easy to appreciate someone who insists on working to standards different from my own.

This experience brought to mind a line that once showed up in Usenet postings from UK programmer Mike Taylor: "Peer review is a lot like the USA's distinctive constitution. The current implementation sucks, but the idea is still good and noble and worth fighting for. And even the current implementation is way better than most of the alternatives." I have to agree... My own experience on both sides of the process demonstrates that there are certainly year-to-year variations in how the same proposal is rated. Maybe there's a correlation with what they serve for breakfast each review...


By the end of the day, we had our first ranked list of all proposals. Some were zeroed out here - for example, the lower-ranked of two very similar proposals wouldn't be done even if time were available, simply to avoid pointless duplication of observations. This step gave us a chance to ask whether we thought our voting on various categories of proposal was scientifically consistent. We compared three kinds of proposals. Most numerous are ordinary proposals for new HST observations. In addition, there are a significant number of archival proposals, for funding to analyze data already in the Hubble database. Anyone can do such analysis, so the decision here is what is valuable enough for NASA to pay for. Finally, there is similar financial support available for theoretical work which is important to the interpretation of Hubble results. Trying to rate these competitively is sometimes like comparing apples, power tools, and British band music.

ngc3314
2006-Apr-13, 03:51 AM
Wrapping it up:

Day 3

The job's not over until the paperwork's done. Today we made sure that feedback comments were entered via secure Web form, to be sent back to the proposers. Ideally, these should be useful in submitting an improved version next year (although with different people each year, there is no guarantee). The comments also should reflect the likely outcome of each proposal, which is an informed guess on our part at this point - the final decisions get made by the STScI director, almost completely following the recommendations of the panels and TAC for larger proposals. This was also our last time to make sure we were happy with the relative ordering of proposals on different topics and of different kinds. It helped that we had management present to give some details on the funding decisions for archival and theory proposals - funding for analysis of new data comes later, and is a decision I'm happy not to be involved with.

Finally, our panel chair wrote a summary of the panel deliberations, documenting the pattern of discussion and the way in which grading and ranking was done. This was also our turn to comment on how thoroughly the hotel's wireless network could be jammed when all 100 or so reviewers were trying to use the web forms at once. Our chairwoman passed this summary to me for a second opinion (where I added some suggestions for readability of the proposal forms on screen as opposed to paper), and headed off to the final round of deliberations of the TAC. This final Telescope Allocation Committee was largely charged with evaluating the merits of large proposals each of which requests hundreds of orbits and may cut across subfields, and making sure that the various panels' rankings could be compared with each other. This committee includes the chairs of the earlier panels, plus a few additional astronomers of wide experience who come to this round fresher than the rest of us who'd already been through 2-1/2 days of staring at proposal forms.

Over and over, for most of the proposed projects, the most common negative discussion points amounted to either "great science, somewhat suboptimal approach" or "this is a pet rock". The proposals that will likely get scheduled from our panel alone should give exciting results on the history and content of galaxies, which makes me feel pretty good despite the large oversubscription (and hence the many promising proposals that just won't fit).


But I'll still feel a bit nervous about this review for another couple of months. I wrote or co-wrote four proposals, and of course I have no idea of how they fared...

[Postscript: yep, whether I was principal investigator or a co-author with various colleagues, my name was enough to bring a proposal down in flames this year. Deep breath, make some notes for next round, and cheer for long life for gyros and batteries...]

antoniseb
2006-Apr-13, 01:50 PM
Thanks! This was great to read. It's nice to see a description of the process.

ToSeek
2006-Apr-13, 02:05 PM
Yes, thanks for sharing.

jlhredshift
2006-Apr-13, 03:35 PM
Excellent report!

Could you give us a sense, in generalities, of what kind of science was liked and approved versus what was disapproved?

Did personalities truly play a role in the decision making, precluding important science from being done (subjectivity is understood)?

ngc3314
2006-Apr-13, 05:35 PM
Excellent report!

Could you give us a sense, in generalities, of what kind of science was liked and approved versus what was disapproved?

Did personalities truly play a role in the decision making, precluding important science from being done (subjectivity is understood)?

With the understanding that these are from the slice of galaxy work that we saw:

Star clusters have taken on new importance as ways to trace a galaxy's stellar content and history, even when it's too far away to distinguish many individual stars. The colors and absolute magnitudes of star clusters strongly constrain when they formed and from what metallicity of gas. Many galaxies have multiple populations of globular clusters, all old but with different ranges of metal abundance, and these correlate with galaxy properties in ways suggesting that they result from more than one burst at formation plus subsequent mergers. Likewise, there was a lot of interest in the diffuse "faint fuzzy" star clusters revealed by earlier HST imaging, a cluster population that our galaxy does not have.

Dwarf galaxies are hot, with tie-ins to formation and evolution of the whole galaxy population. What triggers their star formation, how many are there, are the ones in clusters original or tidal shreds?

Galaxy transformation was also quite current - identifying galaxies whose spectra indicate a recent burst or frosting of star formation atop an older underlying population. Hubble images can show whether the galaxy is merging, might have acquired external gas, or managed to do this trick all by itself. Statistics from the Sloan survey indicate that many galaxies go through such a phase - once. And, the oddest recent result to me is that this one-time transition is associated with nuclear activity. Still trying to wrap my head around that one.

There is a selection effect in these topics - with STIS non-functional, HST has only limited spectroscopic capabilities (all involving inserting a grating or prism in front of a camera and letting the spectra overlap where they may), so there are a lot of topics in nuclear black-hole masses and quasar absorption lines from the intergalactic medium which were just missing this time around.

Object classes that could be identified only from the Sloan survey spectra, or Spitzer surveys, or the GALEX UV data, also did quite well - especially classes not previously thought to exist at various redshifts, or ones turning out to be in unexpected kinds of galaxies.

(I'm being a little bit coy with some, because if they didn't end up being approved, everything in the proposal remains confidential - approved proposals and abstracts enter the HST archive).

(Backlit dust in galaxies, or in the Crab Nebula, definitely seemed not to be hot this year!)

This time, I didn't see any of the kinds of conflict that suggested that individual personalities played a big role in selecting the science. I could tell stories about other committees elsewhere, though - you'd probably be shocked. Shocked.

Romanus
2006-Apr-14, 03:48 PM
No FGS proposals? A shame. :( I could think of a few worthy astrometric targets.

ngc3314
2006-Apr-14, 04:07 PM
No FGS proposals? A shame. :( I could think of a few worthy astrometric targets.

Probably not extragalactic, though - the stellar people (including the extrasolar-planet folks) still use them. I've noticed several long-running data sets to get reflex motions of stars with Doppler-detected planets, to really tie down the orbital inclination and mass. And folks pretty much think they know the all-sky quasar-based inertial frame accurately enough that single quasar measurements could be affected by jet emission. (Amazing observation - a knot in the inner jet of M87 has flared in the last few years to outshine the core both in visible light and in the X-ray.)

dgavin
2006-Apr-14, 08:50 PM
*perks*

Any proposals out there to study the few non galactic core blackholes (stellar BH's) that have been found in the last few years?

Nereid
2006-Apr-14, 09:19 PM
Wow! Thanks a million! :)

Some questions: to what extent was there consideration of the value of the data, archived, from a particular proposal vs a (narrow-minded?) assessment of the observation for the science in the proposal? (e.g. (pre-Key Project) time to look at Cepheids in M81 to refine the P-L relationship, rather than establish a rung on the distance ladder)* how about 'there's already more than enough data in {insert your favourite virtual observatory here} to answer the proposer's science. Denied'? How many 'multi-observatory' proposals were there? (these require time on, say, XMM-Newton, Hubble, ASTRO-F, and H.E.S.S. to produce a good science result). How could these be assessed? To what extent does 'serendipity' figure in any proposal? any review? I'm thinking of (for example) the 'deep M31 outskirts' observations - while they have been fantastic for all sorts of studies wrt M31 etc, I'm sure no one would be surprised to learn that - perhaps a year or three from now - a study of the data from that observation turned up something really quite exciting ... about objects not in M31 at all! To what extent would you say that VLT/Keck/Subaru/Gemini/etc + adaptive optics + appropriate instrumets could substitute for the HST proposals your team looked at? Leave out all those that included UV (that no ground-based observatory could ever do); ignore the fact that it might take a year or ten before these ground-based telescopes get the appropriate adaptive optics capability).*[Edit: of course, I made this (absurd) example up; by turning up the contrast (is that an understatement, or what?), to illustrate a general point - a proposal for time may not be particularly worthy in its own right, but the observations (if taken) would be great to have, for lots and lots of other projects.]

trinitree88
2006-Apr-14, 10:40 PM
[QUOTE=ngc3314]

"Theorists are not very good at explaining why what they do is relevant to anyone"
:D
Very good read ngc3314...insightful into the process for Hubble. :clap: It is always helpful to understand the particulars. As a theorist, I love the quote, too! Pete.:lol:

Harvestar
2006-Apr-15, 01:36 AM
Thanks so much for the great insight! As someone that had 2 proposals in that process (one as PI, one as a collaborator), I found it quite useful to read your thoughts. (and also, quite helpful, since I may one day be in that position too)

Our team was grateful to have our large-ish project accepted (finally! :) after 5 years of submission and resubmission). I know there was a lot of work done to rewrite much of it this year, and it helped that we have data taken with Spitzer of these objects. I was quite pleased that the TAC thought the collaboration was a great strength of the proposal. (not that they meant me, I'm the low person on the totem pole, but it's a great group of people in the field!)

I was still disappointed in my own proposal - we've been submitting it several times now. I was a little more disappointed this year since we redid the entire motivation section and I felt it was much better than years past. But perhaps next year! :) (*additional hoping for gyros and batteries to last*)

Nereid
2006-Apr-15, 03:38 AM
[snip]

I was still disappointed in my own proposal - we've been submitting it several times now. I was a little more disappointed this year since we redid the entire motivation section and I felt it was much better than years past. But perhaps next year! :) (*additional hoping for gyros and batteries to last*)

Without asking, in any way, what your proposal(s) is, to what extent could the observations be obtained with a ground-based 'scope (plus instruments), equipped with adaptive optics (and blessed with especially good seeing)?

ngc3314
2006-Apr-15, 04:17 AM
Some questions: to what extent was there consideration of the value of the data, archived, from a particular proposal vs a (narrow-minded?) assessment of the observation for the science in the proposal?

A lot. For the very large "Legacy" proposals, that's an explicit part of the criteria for ranking them. It's a perennial issue - I've talked about it in reviews for Hubble, Chandra, ROSAT, Astro-2, and probably more I've forgotten. One proposal for one of those missions basically amounted to "Do you realize that (two hot objects) have never been observed with this instrument, and that it would be criminal for the mission to end without these data in the archive?" The committee's collective response was <slap our own heads> "Why didn't we realize that? We quite agree." On the other hand, we always seem to worry about the occasional proposal for data which have several brilliant uses, none of which seems to have occurred to the proposers...


how about 'there's already more than enough data in {insert your favourite virtual observatory here} to answer the proposer's science. Denied'?

Pretty common. The favorite proposal I wrote this time got slapped pretty hard for failing to absolutely demonstrate in the proposal why existing WFPC2 data do not have sufficient S/N and freedom from contamination to answer the question. Not addressing existing data is a serious weakness right away - at the least you should show why such data are not sufficient, and perhaps how they let you frame your question more precisely. For ground-based observatories, there are some programs where you won't be taken seriously unless your proposal makes it very clear why you couldn't gather the requisite data in an hour of querying the SDSS database - and by the same token, clever use of it to refine your sample can make you look like a genius.


How many 'multi-observatory' proposals were there? (these require time on, say, XMM-Newton, Hubble, ASTRO-F, and H.E.S.S. to produce a good science result). How could these be assessed?

Maybe 10%, though that fraction may be larger in panels dealing with time-variable phenomena. We were asked to specifically grade the HST science, but also to consider whether the coordinated observations were in fact essential to the results (the ground rule for approving the secondary instrument as well). If so, the secondary request would pretty much stand or fall with the Hubble part of the proposal. Introducing this category of proposal was, in my view, a big step toward making some kinds of science more rational to get done - there are some kinds of study which require coordinated observations to get the result, and it's silly to have to go through double (triple...) jeopardy trying to impress multiple independent review panels under conditions of strong oversubscription, when (say) the HST data by themselves don't get you there. There are (if I remember) joint arrangements whereby HST panels can award small fractions of Chandra, Spitzer, Kitt Peak/Cerro Tololo, and NRAO time, and likewise for Chandra and Spitzer. In fact, there was a specific notice emailed around, reminding us that for any project which really needs coordinated deep-IR, optical/UV, and X-ray data, we are now in a unique window of uncertain length, so propose now - reviewers are standing by!


To what extent does 'serendipity' figure in any proposal? any review? I'm thinking of (for example) the 'deep M31 outskirts' observations - while they have been fantastic for all sorts of studies wrt M31 etc, I'm sure no one would be surprised to learn that - perhaps a year or three from now - a study of the data from that observation turned up something really quite exciting ... about objects not in M31 at all!

That's almost the same as question 1. It was in fact discussed as various instruments were being proposed in the first place (with assorted definitions of the "discovery space" opened up by combinations of quantum efficiency, pixel scale, and field of view). Another example - some of us gathered 60 orbits' worth of 2-color data for a microlensing study. Once we get it through the new and (we hope) improved drizzling routines, the stacked background field, looking through a nearby galaxy cluster, will be noticeably deeper than either HDF and not far behind the Ultra-Deep Field. And what object was this centered on? Do you really have to ask?


To what extent would you say that VLT/Keck/Subaru/Gemini/etc + adaptive optics + appropriate instruments could substitute for the HST proposals your team looked at? (Leave out all those that included UV (that no ground-based observatory could ever do); ignore the fact that it might take a year or ten before these ground-based telescopes get the appropriate adaptive optics capability.)

This is an issue for NICMOS. There have been some amazing results from AO (for me, the orbits of stars at the galactic center set the coolness standard), but there were problems at one point with a few people making very strong claims about what their AO systems were about to deliver. As a result, there is an official white paper (from the STScI WWW site) on what we were to assume as the capabilities of ground-based AO observations for deciding whether HST was required. Keck/Gemini/VLT (and maybe Subaru, I haven't kept up) win on FWHM of the PSF at JHK bands, but lose a bit on sky background and stability of the PSF, especially its wings, so that fine detail in complex fields (inner parts of QSO host galaxies, for example) is still best done with NICMOS (sometimes using its coronagraph). And the official rules on whether it matters that a sort-of competitive capability exists on a single telescope with no public access have changed at least once.

There is a more subtle issue that we couldn't always judge (and gave the benefit of the doubt except in really obvious cases). Sometimes there are very different observational paths to the same conclusion, some of which might require HST and some not. For example, high spectral resolution can sometimes trade for high angular resolution in learning about source structures. In a toy example, you could look for planets around nearby stars either via high-dispersion spectroscopy (reflex Doppler motion) or high angular resolution for the barycentric motion seen sideways - and note which has been dramatically more fruitful so far. Or you might imagine approaching the age of a globular cluster through the main-sequence turnoff on the HR diagram (more or less possible from the ground with adequate accuracy in many clusters) or through the bottom of its white-dwarf cooling track (essentially impossible from the ground). I leave aside in these examples the scientific desirability of confirming important conclusions via multiple paths.

(quotations snipped to remove unexpected interaction between list and quote tags)

Harvestar
2006-Apr-15, 05:46 AM
Without asking, in any way, what your proposal(s) is, to what extent could the observations be obtained with a ground-based 'scope (plus instruments), equipped with adaptive optics (and blessed with especially good seeing)?

Well, I can get information about the objects from the ground, but size information is particularly important for this project and only HST can give us that resolution. Since these data are in the optical/UV wavelengths, no AO system yet works in those bands. (and from my class on the subject, it seems pretty far off for that to happen - the atmosphere changes too rapidly at shorter wavelengths)

Nereid
2006-Apr-16, 02:25 AM
Once again, thanks very much ngc3314 and Harvestar! :clap:

How were 'JD-critical' proposals judged? By this I mean ones that the supporting science required observing something at a certain time (the only examples I can think of are solar system - occultations, ring plane crossings, ... - and galactic - orbital phase, occultations, pulsation phase, ...). I imagine that if you simply wanted to gather 10k secs of photons from {region or object}, say, it'd be different than if you wanted to observe a half-dozen X-ray binaries throughout their orbits. (this is slightly different than 'I gotta have time to observe distant SNe, whenever they are identified'. I also note that you talked about "panels dealing with time-variable phenomena" - your panel was 'blind' to these?)

From your previous TAC involvement ngc3314, what would you say were among the most surprising proposals (that subsequently got accepted, and completed)?

To what extent did y'all consider the track record of similar types of proposals, in terms of the fecundity of the science that subsequently followed? An example, perhaps ... whatever the original proposal for taking a good gander at (nearby) Seyferts was, it sure has produced a flood of downloads of the archived observations, and a similar flood of papers, of many different kinds. To what extent could you even identify 'Seyfert-like' proposals, let alone give them higher rankings? (and the flip-side ... 'we've approved a dozen of these types of proposal, and none of them have taken astronomy forward much at all').

Did something like 'the PI and team couldn't possibly finish analysing the data within the proprietary period! Denied.' ever arise?
(Amazing observation - a knot in the inner jet of M87 has flared in the last few years to outshine the core both in visible light and in the X-ray.)
Relativistic MHD in thick SMBH accretion disks, jet interactions, and the footprints of SUSY decays ... let the fun begin! :D

Did you get to vote a 'contingency' list of proposals (just in case X doesn't get scheduled, for some reason, we recommend Y)?

(and I won't be using [ list ] any more ... thanks for the pointer!)

ngc3314
2006-Apr-16, 03:56 AM
How were 'JD-critical' proposals judged? By this I mean ones that the supporting science required observing something at a certain time (the only examples I can think of are solar system - occultations, ring plane crossings, ... - and galactic - orbital phase, occultations, pulsation phase, ...). I imagine that if you simply wanted to gather 10k secs of photons from {region or object}, say, it'd be different than if you wanted to observe a half-dozen X-ray binaries throughout their orbits. (this is slightly different than 'I gotta have time to observe distant SNe, whenever they are identified'. I also note that you talked about "panels dealing with time-variable phenomena" - your panel was 'blind' to these?)

Since we dealt only with galaxy proposals, the closest we saw were a few making specialized use of Cepheids, in which the proposers had to verify at the outset that the requisite relative timing and sequence duration was possible. In the 2-gyro pointing mode, some of the attitude information comes from two star trackers which point at about 135 degrees to the telescope. The need to have both HST and the trackers pointing clear of Earth and Sun avoidance zones gives much more restricted times of availability for most places in the sky (and some combinations of pointing and roll angle are impossible). Generally, time-critical but predictable observations are a limited resource because the rest of the schedule has to be built around them; whether a particular time can be scheduled depends on how the changing orbit strobes against target visibility, something which can't be predicted with perfect accuracy long ahead of time because Sun-induced changes in the extreme upper atmosphere change the rate of orbital decay.
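As a toy illustration of why time-critical targets are such a limited resource, here is a rough Python sketch of the simplest piece of the problem: which days of the year a fixed target clears a Sun avoidance zone. Everything here is an assumption for illustration - the 50-degree half-angle, the deliberately crude circular-orbit solar ephemeris, and the neglect of Earth avoidance, the star trackers, roll constraints, and orbital decay are all simplifications, not actual HST scheduling rules.

```python
import math

SUN_AVOID_DEG = 50.0  # illustrative avoidance half-angle, not the real HST value

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation between two sky positions, all angles in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    # Clamp to guard against floating-point values just outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def sun_radec_deg(day_of_year):
    """Very crude solar ephemeris: circular orbit, obliquity 23.44 degrees."""
    lam = math.radians((day_of_year - 80) * 360.0 / 365.25)  # ecliptic longitude
    eps = math.radians(23.44)
    ra = math.degrees(math.atan2(math.cos(eps) * math.sin(lam), math.cos(lam))) % 360.0
    dec = math.degrees(math.asin(math.sin(eps) * math.sin(lam)))
    return ra, dec

def observable_days(target_ra, target_dec):
    """Days of the year on which the target clears the Sun avoidance zone."""
    return [d for d in range(1, 366)
            if ang_sep_deg(target_ra, target_dec, *sun_radec_deg(d)) > SUN_AVOID_DEG]
```

Even this cartoon shows the pattern the panel has to live with: a target near the ecliptic gets two seasonal windows a year, while one near a celestial pole never enters the avoidance zone at all.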

Short-notice observations (targets of opportunity - bright supernovae, newly discovered comets, planets 10 and up, GRB counterparts, flaring quasars) are highly disruptive to the overall schedule, and are therefore very limited as to how many can be approved in total.



From your previous TAC involvement ngc3314, what would you say were among the most surprising proposals (that subsequently got accepted, and completed)?

For something that specific, the surprising thing would be if I didn't get into very deep trouble for violation of nondisclosure documents...


To what extent did y'all consider the track record of similar types of proposals, in terms of the fecundity of the science that subsequently followed? An example, perhaps ... whatever the original proposal for taking a good gander at (nearby) Seyferts was, it sure has produced a flood of downloads of the archived observations, and a similar flood of papers, of many different kinds. To what extent could you even identify 'Seyfert-like' proposals, let alone give them higher rankings? (and the flip-side ... 'we've approved a dozen of these types of proposal, and none of them have taken astronomy forward much at all').

I don't remember this kind of discussion, which may mean that it was always rather more specific for each proposal. We all wish we could sniff out these kinds of proposals. (Though there was one proposal at an earlier review which had us all saying "No fair! Can they do that? Why didn't we think of this?", which has done very well in the archive retrieval statistics.)



Did something like 'the PI and team couldn't possibly finish analysing the data within the proprietary period! Denied.' ever arise?

Not much - these days, it is a rare, tightly focussed proposal which has all the data analysis wrapped up within its proprietary period. Me, I have a habit of trying to grok the data more fully, which may be why Tim Heckman continues to do very well at writing my next paper for me. It is not completely unheard-of for reviewers to suggest that an archival proposal really needs somewhat more in resources than requested and recommend increasing the funding (but this does raise the question of why the proposers didn't figure this out to begin with).

On the other hand, until this cycle, each proposal included a list of the PI's previous successful HST proposals and the status of the data. "Had 10 successful proposals early on and a total of one paper so far" is not a high recommendation for the odds of analyzing more data.



Did you get to vote a 'contingency' list of proposals (just in case X doesn't get scheduled, for some reason, we recommend Y)?

Sort of - we were asked to rank almost twice as many proposals as would fit, to provide a ready pool if some proved impossible to schedule after higher-ranked ones were filled in, or (cross fingers) in case of an instrument failure - which is how they managed not to convene panels again when STIS failed.
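The "ranked pool deeper than the budget" idea can be sketched as a greedy cut: walk the ranked list, accept whatever still fits the orbit budget, and keep everything else as the reserve. This Python fragment is a hypothetical simplification - the real process involves panel merging and scheduling feasibility, not a simple greedy fill - but it shows where the contingency list comes from.

```python
def select_with_reserve(ranked_proposals, orbit_budget):
    """Split a ranked list of (name, orbits) pairs into an accepted program
    and a reserve pool, under a simple greedy orbit-budget cut."""
    accepted, reserve = [], []
    remaining = orbit_budget
    for name, orbits in ranked_proposals:
        if orbits <= remaining:
            accepted.append(name)
            remaining -= orbits
        else:
            # Still ranked, just below the cut: available if something
            # above it proves unschedulable or an instrument fails.
            reserve.append(name)
    return accepted, reserve
```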

At one Chandra review, the chairs of three panels all dealing with AGN got together before the final TAC meeting to figure out how to handle large and fairly similar proposals - basically to make sure that the science got done and that the individual panel recommendations (each panel only saw a subset) didn't get rock-paper-scissor'ed into oblivion compared to other topics.

Nereid
2006-Apr-16, 01:41 PM
What happens next? What further hoops do the recommended proposals have to jump through before the Hubble instruments start collecting photons?

From your own experience, how have proposals changed over the years (other than to reflect the changing operational constraints)? There have been fewer (so the oversubscription rate has fallen), but has the task of ranking become harder, as the overall quality has improved (for example)? (there are some comments on this in the earlier posts).

Other than the following year's TAC meeting, is there any other kind of PIR (post-implementation review)? How is the process itself evaluated (and improved)?

What proportion of the proposals would you say are 'part of a program' proposals (the proposed observations form just one part of series of observations, possibly involving many kinds of telescopes/instruments, but not narrowly definable in advance, if only because later observations would depend heavily on what was found in the ones earlier in the series), as opposed to 'new ideas' (which may, or may not, subsequently become programs), and 'self-contained' (the proposed observations are the entire program)? I appreciate that these are somewhat arbitrary distinctions.

Were you, and other TAC members, able to determine the range in ranking votes? If so, were there many proposals for which the ranks given by TAC members varied widely?

ngc3314
2006-Apr-18, 10:10 PM
What happens next? What further hoops do the recommended proposals have to jump through before the Hubble instruments start collecting photons?

There is an electronic "Phase II" process, in which each exposure (or linked set in a mapping pattern) has to be specified as to filter, exposure, precise pointing and constraints in time. This information goes more or less directly to the scheduling software. At the same time, PIs at US institutions get a notification of how much accompanying grant funding they will be offered and put together a matching budget (perhaps negotiating a bit if they feel this number does not adequately reflect the complexity of the analysis they need to do).


From your own experience, how have proposals changed over the years (other than to reflect the changing operational constraints)? There have been fewer (so the oversubscription rate has fallen), but has the task of ranking become harder, as the overall quality has improved (for example)? (there are some comments on this in the earlier posts).

The major difference from the earliest cycles is that many proposals now are absolutely enormous compared to the pitiful few orbits reviewers argued about to begin with. Many of the easier things have been done and led to more elaborate followup, plus such projects as the Hubble Deep Fields have led to a real appreciation that there is science which can be done only with a widespread community effort to obtain massive uniform data sets. So now there are successful proposals for 200 orbits or so. The mean number of investigators per proposal may be going up as well, as the work becomes more involved and may need disparate specialties represented.

I have the impression (supported by a few statistics but not based on a complete review) that oversubscription bounces around - astronomers only have time to write so many proposals. So a lot of pent-up demand shows up when new instrumental capabilities (STIS, NICMOS, Chandra, Spitzer) are first available. Likewise the oversubscription on HST dropped a bit when STIS went offline, because most spectroscopic science had to go elsewhere or be deferred.


Other than the following year's TAC meeting, is there any other kind of PIR (post-implementation review)? How is the process itself evaluated (and improved)?

They worry a lot about the process - there are people whose job is program selection. They solicit (in fact require) written feedback on the process from each panel. And there are some proposers who require no encouragement at all to comment at length... although reviewers probably have a more complete perspective. I just learned that statistics of success by country, state, congressional district, and PI gender are all tracked and worried over. A few times I've been bored enough sitting in the hotel to track institutional affiliations, and found that the process does spread the wealth more widely than I might have at first expected.


What proportion of the proposals would you say are 'part of a program' proposals (the proposed observations form just one part of series of observations, possibly involving many kinds of telescopes/instruments, but not narrowly definable in advance, if only because later observations would depend heavily on what was found in the ones earlier in the series), as opposed to 'new ideas' (which may, or may not, subsequently become programs), and 'self-contained' (the proposed observations are the entire program)? I appreciate that these are somewhat arbitrary distinctions.

Most proposals are in spirit part of a larger effort (if only "A major theme in contemporary astronomy is unravelling the evolution of galaxies"). One has to be unusually clever or fortunate by now to have a Hubble project which stands alone (at least given the body of previous observations). Unfortunately, the adjective "incremental" becomes a pejorative one when the oversubscription is large.


Were you, and other TAC members, able to determine the range in ranking votes? If so, were there many proposals for which the ranks given by TAC members varied widely?

In the initial rankings, we did see the standard deviations, because a large range may indicate either that some of us didn't properly understand the proposal, or that there is a matter of basic disagreement which ought to at least be discussed.
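That bookkeeping is easy to picture in miniature: rank by mean preliminary grade, and flag anything whose grade spread is large enough to signal confusion or disagreement. The grades, proposal names, and threshold below are all made up for illustration - nothing here reflects an actual STScI grading scale.

```python
from statistics import mean, stdev

# Hypothetical preliminary grades (lower = better) from five panelists each.
grades = {
    "prop-A": [1.5, 2.0, 1.8, 1.6, 2.1],
    "prop-B": [1.0, 4.5, 1.2, 4.0, 1.1],   # wide spread: worth discussing
    "prop-C": [3.0, 3.2, 2.9, 3.1, 3.0],
}

SPREAD_THRESHOLD = 1.0  # illustrative cutoff, not a real rule

def discussion_list(grades, threshold=SPREAD_THRESHOLD):
    """Rank proposals by mean grade; flag those with a large grade spread."""
    ranked = sorted(grades, key=lambda p: mean(grades[p]))
    flagged = [p for p in grades if stdev(grades[p]) > threshold]
    return ranked, flagged
```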

(Reply delayed by travel)