
Is the value of the Hubble constant locked down?




dgruss23
2007-Jul-16, 02:31 PM
In this thread (http://www.bautforum.com/questions-answers/61643-astronomy-cosmology-science-tinkering-fiddling-cheating-4.html) the question was raised of whether the Hubble constant (H0) could be as high as 84 km s-1 Mpc-1, rather than the currently preferred value of ~72 km s-1 Mpc-1 as determined by the Hubble Key Project (http://adsabs.harvard.edu/abs/2001ApJ...553...47F).

The purpose of this thread is to look at some of the reasons why it is still possible that the value of H0 could in fact be as large as the mid-80s.

The HKP final report has been cited ~1100 times since being published in May 2001, so it is an extremely influential paper and an important reason why most researchers have accepted H0=~72. This acceptance has been bolstered by the WMAP (http://arxiv.org/abs/astro-ph/0603449) results.

However, the extragalactic distance scale has numerous pieces (or rungs on the ladder), and there are a number of ways that the HKP final result could be incorrect. First, it should be noted that the difference between H0=72 and H0=84 only requires a systematic 0.33 mag shift in the distance scale. For most distance indicators we're talking about a 1-2 sigma shift.
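To check the 0.33 mag figure yourself, here is a minimal Python sketch of the conversion (round numbers from this post; since H0 = v/d, raising H0 shrinks every inferred distance proportionally):

import math

H0_low, H0_high = 72.0, 84.0
delta_mu = 5.0 * math.log10(H0_high / H0_low)   # distance-modulus shift
print(f"Shift separating H0={H0_low:.0f} from H0={H0_high:.0f}: {delta_mu:.2f} mag")
# -> 0.33 mag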

The HKP determined H0=72 from 5 methods: the I-band Tully-Fisher relation (I-TFR --> spirals), the surface brightness fluctuation method (SBF --> ellipticals, mostly), the Fundamental Plane (FP --> ellipticals), Type Ia SN, and Type II SN. The value of H0 was determined for each of these methods independently and then combined for a final value of H0. One of the reasons for the acceptance of their final result is that 5 methods were used.

One of the rungs underlying these distance methods is the Cepheid variable distance scale - which must be used to fix the zero point of the relations used for the 5 secondary distance indicators listed above.

The Cepheid distance scale is then one place where a systematic shift in the zero points of all 5 distance indicators could take place. Sandage has long argued for a lower value of H0 and recently recalibrated the Cepheid distance scale and concluded H0=62 (http://arxiv.org/abs/astro-ph/0603647). However, more recently van Leeuwen et al (http://arxiv.org/abs/0705.1592) showed problems with the Sandage et al Cepheid P-L relation slope and also showed that the HKP Cepheid scale should be revised so that distances are closer, in which case the value of H0 would shift to 76.

Looking at the HKP final analysis reveals some other avenues for caution in accepting H0=72 as the final word:


- One of the methods they used (the FP) actually gave a Hubble constant of 82.
- Only 4 galaxies were used for the Type II SN H0 estimate, and only 3 calibrators with Cepheid distances were available for calibration of the zero point.
- Only 6 galaxies in 6 clusters were used for the SBF analysis - and the number of Cepheid calibrators was the same size: 6.
- While there were 36 Type Ia SN in the analysis, there were only 6 galaxies for calibrating the zero point.
- The I-TFR distances tend to overestimate distances relative to other methods - including methods presented in their own paper for some clusters. For example, the FP distance to Abell 3574 (Table 9) is 51.6 Mpc while the I-TFR distance in Table 7 is 62.2 Mpc. The Centaurus 30 cluster I-TFR distance is 43.2 Mpc (Table 7), whereas the Cepheid distance to NGC 4603 in the same cluster is 33.3 Mpc and the SBF method from the large study of Tonry et al (2001) gives a distance of ~33 Mpc (the same as the Cepheid distance). For Antlia the HKP I-TFR distance is 45.1 Mpc whereas the Tonry et al SBF distance is ~33 Mpc.

Tully&Pierce (2000) (http://adsabs.harvard.edu/abs/2000ApJ...533..744T) found H0=77 from the I-band TFR, but they note that it might be more appropriate to use the maser distance to NGC 4258 to fix the zero point of the Cepheid distance scale rather than the traditionally used Large Magellanic Cloud distance. If the maser distance is used, then they would find H0=86 rather than 77. Using the maser distance would ripple through the distance indicators used by the HKP as well, raising H0 above 80.

antoniseb
2007-Jul-16, 02:44 PM
I think that the Cepheid distances are about as accurate as our knowledge of the distance to the LMC. We have observed and studied a very large number of Cepheids there, and so the statistics about their brightness are pretty solid. Our distance measurement to the LMC is not so far off as to allow the kind of error bars you are asking about.

In about ten years when the Gaia data is in, we'll have very accurate direct trigonometric measurements of the distances of quite a few local Cepheids. This will also help nail things down.
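A rough sketch of why direct parallaxes help, using a hypothetical Cepheid rather than actual Gaia specifications: the distance follows from the parallax alone, with no intermediate rungs, and the fractional distance error tracks the fractional parallax error.

p_mas, sigma_p_mas = 2.0, 0.02   # hypothetical Cepheid: parallax and error (mas)
d_pc = 1000.0 / p_mas            # distance in parsecs
frac_err = sigma_p_mas / p_mas   # fractional distance error (small-error limit)
print(f"d = {d_pc:.0f} pc +/- {frac_err:.0%}")   # 500 pc +/- 1%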

Ken G
2007-Jul-16, 03:17 PM
In looking at the HKP paper linked to in the OP, the quoted uncertainty range is from H0 = 64 to 80. I didn't look at the paper in detail, so I'm wondering if that range is intended to be so-called "1 sigma" errors, or if they are "3 sigma". If the former, that allows H0 in the mid 80s without even contradicting the HKP. Also note that the error is almost entirely systematic, because their statistical errors would seem to average to a lower value, perhaps +/-3 or less, so the +/-8 they quote must be largely due to exactly the effects dgruss23 is talking about. As it is much easier to have a significant systematic error than a significant statistical error (statistics are much better understood than systematics - the latter change with every major new discovery), it seems that H0=80 is completely plausible, and even 85 would seem to be pretty hard to completely rule out. Even the HKP authors might well agree with that, I would expect.
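To put rough numbers on that reading (a sketch that takes the +/-8 total and the +/-3 statistical guess above at face value, assuming the two parts add in quadrature):

import math

total, statistical = 8.0, 3.0                       # km/s/Mpc, per the post above
systematic = math.sqrt(total**2 - statistical**2)   # quadrature decomposition
print(f"Implied systematic part: +/-{systematic:.1f} km/s/Mpc")   # ~7.4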

Tim Thompson
2007-Jul-16, 05:08 PM
See the webpage The Hubble Constant (http://cfa-www.harvard.edu/~huchra/hubble/), maintained by John Huchra at the Harvard-Smithsonian Center for Astrophysics (http://cfa-www.harvard.edu/). He has compiled all reported values of the Hubble constant (H0) from 1924 to 2004. You can download the text data file, or just look at his plots. Most notably, see the two plots near the bottom of the page, which show H0 from 1970-2001, and again from 1996-2005.

We can see from the data that a value of 84 km/sec/Mpc would lie significantly outside the indicated range of reported values (the lines drawn on the plot are clearly not 1-sigma uncertainties, and look more like 3-sigma uncertainties, but are probably neither). It certainly appears, based on these plotted data, that 84 km/sec/Mpc is an unreasonably high value, and significantly unlikely.

Also keep in mind that all published values of H0 are here, not just those determined by the Cepheid P-L relationship. I think it is significant that the several methods used all agree in more or less ruling out a value of H0 that high.

dgruss23
2007-Jul-16, 06:20 PM
I think that the Cepheid distances are about as accurate as our knowledge of the distance to the LMC. We have observed and studied a very large number of Cepheids there, and so the statistics about their brightness are pretty solid. Our distance measurement to the LMC is not so far off as to allow the kind of error bars you are asking about.

In about ten years when the Gaia data is in, we'll have very accurate direct trigonometric measurements of the distances of quite a few local Cepheids. This will also help nail things down.

The Cepheid distances provide zero point calibration for the other distance indicators. There are now ~30-40 galaxies with Cepheid distances ... but only a handful of those can be used to calibrate any given secondary distance indicator as I noted in the bullets in the OP. It doesn't take a very large systematic error in the calibration of the zero point of the Cepheid distances in combination with a systematic error intrinsic to the calibration of the secondary distance indicator and small numbers of Cepheid calibrators (3 for the Type II SN, 6 for the SBF and Type Ia SN) to push H0 to 85.

I derived H0=84 using the TFR and 2MASS Ks-band photometry for a sample of 318 spirals that met very strict selection criteria designed to eliminate galaxies that were likely to have large distance errors. The calibrator sample was 26 galaxies with distances based upon the same Cepheid distance scale the HKP used.

dgruss23
2007-Jul-16, 06:39 PM
We can see from the data that a value of 84 km/sec/Mpc would lie significantly outside the indicated range of reported values (the lines drawn on the plot are clearly not 1-sigma uncertainties, and look more like 3-sigma uncertainties, but are probably neither). It certainly appears, based on these plotted data, that 84 km/sec/Mpc is an unreasonably high value, and significantly unlikely.

Each research group will target a certain approach as a test of H0. The fact that multiple teams find H0 values ~70 doesn't rule out higher values. Tully&Pierce discussed this in their paper. They found H0=77, asked why their lonely value on the higher side should be trusted, and then presented their reasons for believing their value is better. And they noted that if the Cepheid distance scale were calibrated to the geometric maser distance to NGC 4258, they would get H0=86.

If you look at the approaches used, it becomes a question of whose assumptions are better. The Sandage et al team assumes a significant impact from Malmquist bias, cluster population incompleteness, and other biases throughout their analysis - and when they apply their correction methods they find H0 ~60.

Other research groups don't agree that the large bias corrections are needed and they find a "shorter" distance scale and thus larger H0 values.

Other methods such as time delays of gravitational lenses require assumptions about the distribution of dark matter in order to derive H0. For example, Kochanek and Schechter (http://adsabs.harvard.edu/abs/2004mmu..symp..117K) find H0=48 for isothermal mass distributions, but the same lenses give H0=71 if they eliminate the DM halo and assume constant M/L ratios.

So it is very difficult to look at a list of H0 values and derive any conclusion other than that the various methods utilized in the last 20 years give H0 values in the range of 40-90 km s-1 Mpc-1. Which methods are the best and lead to the most reliable values? That requires scrutiny of each study.


Also keep in mind that all published values of H0 are here, not just those determined by the Cepheid P-L relationship. I think it is significant that the several methods used all agree in more or less ruling out a value of H0 that high.

But - for example, if the geometric maser distance is adopted for NGC 4258, the distance scale shifts such that Tully&Pierce would get H0=86. In fact Tully&Pierce said there were arguments to be made for preferring the maser distance, but they kept their calibration based upon the LMC distance because that is what everyone else does and it would therefore be easier to compare their results with other studies.

Ken G
2007-Jul-17, 12:19 AM
Yes, I think there is a potential pitfall in just looking at previously published values. There's a kind of "blind leading the blind" element to systematic errors-- one person makes a plausible assumption, and in the absence of any contrary evidence, everyone else follows along. At some point along the way, the nature of that assumption kind of gets lost to history, replaced by a source of systematic error that is easy to overlook. Perhaps one single systematic error would cause us to go back and recreate all of those same plotted results, changing nothing but that one assumption, and we would find they were all higher as a result. That's the concept of "systematic" error taken to the logical conclusion of being systematically applied by everyone. Perhaps Tim is saying there is no error source that is really that systematic over such a wide array of methods, I can't really speak to that-- I merely point out that such would be an essential piece of that type of argument.

Nereid
2007-Jul-17, 02:38 AM
The Freedman et al. final HKP paper does a good job, IMHO, of laying out the case for the value they conclude with.

Maybe it's worth going over it, and looking at certain sections in some detail?

For example, we could clarify what the +/- numbers refer to (1 sigma? 3 sigma? something else??); the extent to which the secondary methods are independent; dive into the various chains of observations that lead to the stated systematics; consider the importance (or not) of the stated good agreement and consistency; examine the (statistical) approaches to combining different kinds of estimates; and discuss the two quite independent methods (lensing and the SZE).

I think it would also be of considerable interest to compare this paper with the Spergel et al. WMAP 3-year paper ("Implications for Cosmology (http://map.gsfc.nasa.gov/m_mm/pub_papers/threeyear.html)" - link is to the WMAP site, click on the appropriate link there to get the paper), and the extent to which a value of 84 is consistent with what's reported in this WMAP team paper.

Tim Thompson
2007-Jul-17, 04:46 AM
Perhaps Tim is saying there is no error source that is really that systematic over such a wide array of methods, I can't really speak to that-- I merely point out that such would be an essential piece of that type of argument.
That's the general idea. This thread is devoted to only one measurement, singled out for special treatment. But how does one go about the task of figuring the probability that any one particular measurement is "right"? It can only be done by comparing the one measurement to a population of like measurements. In this case, the population of measurements given by Huchra serves that purpose. Because they are all derived from different methods, they don't all share the same systematics. So the spread of measurements is a fair representation not only of random uncertainties, but systematic uncertainties.

It only needs to be shown that the given value (84) is significantly high compared to the population, to argue that it is unlikely. Note that I am only saying that it is improbable, not impossible. Just consider the inverse argument. If you are going to claim that this one measurement must be likely correct, then how do you explain all the others being unlikely? It seems unreasonable to me that they would be.
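In code, the population argument looks something like the sketch below. Note that the list of values here is invented for illustration, standing in for Huchra's compilation rather than reproducing it:

import statistics

published_H0 = [55, 57, 62, 64, 68, 70, 71, 72, 73, 75, 77, 81]   # hypothetical values
mean = statistics.mean(published_H0)
sd = statistics.stdev(published_H0)
print(f"mean = {mean:.1f}, sd = {sd:.1f}, z(84) = {(84 - mean) / sd:.1f}")

Of course, the sketch treats the entries as independent draws from one population; whether they really are independent is exactly the point in dispute.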

dgruss23
2007-Jul-17, 12:04 PM
That's the general idea. This thread is devoted to only one measurement, singled out for special treatment.

Not true - this thread is devoted to the question of whether the Hubble constant is actually a resolved matter or whether it is possible that H0 could be as high as 85. It makes sense to focus first on the HKP final report because it has been so influential.



But how does one go about the task of figuring the probability that any one particular measurement is "right"? It can only be done by comparing the one measurement to a population of like measurements.

But you have to look at the methods utilized by the different research groups, not just their final result. For example, Ekholm et al 1999 (http://adsabs.harvard.edu/abs/1999A%26A...347...99E) find H0=53 from the TFR whereas Tully&Pierce 2000 (http://adsabs.harvard.edu/abs/2000ApJ...533..744T) find H0=77 from the TFR. Their methods are different.

It is not good enough to compare the final H0 values; you must compare the distance estimates for clusters that are in common. You must look at the calibration procedures, sample sizes, and other factors. Ekholm et al use large bias corrections whereas Tully&Pierce find bias corrections are small.


In this case, the population of measurements given by Huchra serves that purpose. Because they are all derived from different methods, they don't all share the same systematics. So the spread of measurements is a fair representation not only of random uncertainties, but systematic uncertainties.

As noted above and in previous posts, even utilizing the same distance indicator, studies will show a wide range of H0 values. I just noted such a case with the TFR. As another example, using Type II SN, Hamuy 2003 (http://xxx.lanl.gov/abs/astro-ph/0301281) finds H0=81 but Leonard et al 2003 (http://adsabs.harvard.edu/abs/2003ApJ...594..247L) find H0=57.


It only needs to be shown that the given value (84) is significantly high compared to the population, to argue that it is unlikely. Note that I am only saying that it is improbable, not impossible. Just consider the inverse argument. If you are going to claim that this one measurement must be likely correct, then how do you explain all the others being unlikely? It seems unreasonable to me that they would be.

Examples of the types of things that must be looked at are mentioned throughout my posts on this thread - starting with the bullets in the OP. The HKP used four Type II SN with 3 Cepheid calibrators. They used 6 SBF distances to find H0. Their I-band TFR distance to A3574 is 62 Mpc whereas their Fundamental Plane distance to the same cluster is 51 Mpc. They find an SBF distance to Coma of 102 Mpc whereas their I-TFR and FP distances to Coma are 86 Mpc.

You can dissect each study this way if you want. You'll find many studies or distance estimates are based upon small numbers of calibrators. Many studies have small sample sizes such as the HKP Type II SN and SBF estimates, this paper (http://xxx.lanl.gov/abs/astro-ph/0212262), the type II SN papers I linked to above, and the estimates from lensing studies. Many studies are based upon assumptions that have not been resolved among the specialists (again the lensing studies). Many studies are basically a repeat of an earlier analysis with small changes by the same group of researchers. For example I linked to the TFR study of Ekholm et al who found H0=53 in 1999. The same group found H0=53-57 in 1997 (http://adsabs.harvard.edu/abs/1997A%26A...322..730T) using largely the same methods and data sets. So these are not two completely independent H0 estimates.

And finally it should not be forgotten that most of the H0 values are based upon the same Cepheid distance scale - so a systematic error in the zero point of the Cepheid distances is going to affect every one of those studies. As simple a change as adopting the maser distance to NGC 4258 would make Dr. Huchra's list look very different and H0=84 would no longer be on the high end but closer to the middle.

Cougar
2007-Jul-17, 03:50 PM
Is the value of the Hubble constant locked down?
Apparently, the simple answer is no. According to Harvard's Water Maser Cosmology Project.... (http://www.cfa.harvard.edu/wmcp/)

Today, the best estimate of the Hubble constant is uncertain by perhaps 10% when all things are considered.
Of course, in traditional astronomical terms, "uncertain by 10%" is astoundingly accurate.

Water masers do appear to have potential for modifying and giving us a more accurate distance scale. One interesting article I haven't seen mentioned is: A Revised Cepheid Distance to NGC 4258 and a Test of the Distance Scale (http://www.journals.uchicago.edu/ApJ/journal/issues/ApJ/v553n2/52946/52946.html?erFrom=298907136342109038Guest) by Jeffrey A. Newman, et al.

[Edit: Hmm. If that article link doesn't work for you, try this one. (http://www.journals.uchicago.edu/ApJ/journal/issues/ApJ/v553n2/52946/52946.html)]

dgruss23
2007-Jul-18, 12:44 PM
Apparently, the simple answer is no. According to Harvard's Water Maser Cosmology Project.... (http://www.cfa.harvard.edu/wmcp/)
Today, the best estimate of the Hubble constant is uncertain by perhaps 10% when all things are considered.
Of course, in traditional astronomical terms, "uncertain by 10%" is astoundingly accurate.

Thanks for the link cougar!

One thing to keep in mind when they quote the 10% accuracy is that it really applies to the uncertainty of the individual H0 estimates given the assumptions and methods applied in each study. The actual value of H0 could still be more than 10% different from any of the individual studies that have attempted to estimate H0. For example, Tully&Pierce noted the potential for a 12% change in H0 with the maser distance to NGC 4258.

Ekholm et al find H0=53 in one study. So 10% sets an upper limit of 58 from their methods - consistent with their reported uncertainty. Tully&Pierce find H0=77 and 10% sets a lower limit of 69 - again consistent with their reported uncertainty. So the 10% accuracy doesn't lead to an overlap in the results of the two studies.

My point is just that while the data and calibrators now in principle allow a determination of H0 to 10% accuracy, that H0 value is only accurate to within 10% of the true value of H0 if the assumptions and methods of the study are in fact valid.
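A minimal sketch of that non-overlap arithmetic, using the two values quoted above:

def interval(h0, frac=0.10):
    # Range allowed by a +/-10% internal uncertainty.
    return h0 * (1 - frac), h0 * (1 + frac)

lo_e, hi_e = interval(53.0)   # Ekholm et al.: 47.7 .. 58.3
lo_t, hi_t = interval(77.0)   # Tully & Pierce: 69.3 .. 84.7
print("overlap" if hi_e >= lo_t else f"gap of {lo_t - hi_e:.1f} km/s/Mpc")   # gap of 11.0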

Jerry
2007-Jul-18, 05:50 PM
The Freedman et al. final HKP paper does a good job, IMHO, of laying out the case for the value they conclude with.

Maybe it's worth going over it, and looking at certain sections in some detail?

For example, we could clarify what the +/- numbers refer to (1 sigma? 3 sigma? something else??); the extent to which the secondary methods are independent; dive into the various chains of observations that lead to the stated systematics; consider the importance (or not) of the stated good agreement and consistency; examine the (statistical) approaches to combining different kinds of estimates; and discuss the two quite independent methods (lensing and the SZE).

I think it would also be of considerable interest to compare this paper with the Spergel et al. WMAP 3-year paper ("Implications for Cosmology (http://map.gsfc.nasa.gov/m_mm/pub_papers/threeyear.html)" - link is to the WMAP site, click on the appropriate link there to get the paper), and the extent to which a value of 84 is consistent with what's reported in this WMAP team paper.
One significant source of error worth revisiting is the distance at which the 'Hubble flow' dominates the observed motion - in 2001, there was considerable debate on this.

It would also be of value to 'pencil in' the magnitudes of the distant Type Ia supernovae observed since 2001 on the Hubble Key Project graph. In 2001 the magnitudes fell roughly into line. Is that still true?

Nereid
2007-Jul-19, 02:01 AM
From another BAUT Q&A thread (http://www.bautforum.com/1031823-post151.html): this 2007 paper [...]: Cepheid parallaxes and the Hubble constant (http://adsabs.harvard.edu/doi/10.1111/j.1365-2966.2007.11972.x):
Revised Hipparcos parallaxes for classical Cepheids are analysed together with 10 Hubble Space Telescope (HST)-based parallaxes. In a reddening-free V, I relation we find that the coefficient of log P is the same within the uncertainties in our Galaxy as in the Large Magellanic Cloud (LMC), contrary to some previous suggestions. Cepheids in the inner region of NGC 4258 with near solar metallicities confirm this result. We obtain a zero-point for the reddening-free relation and apply it to the Cepheids in galaxies used by Sandage et al. to calibrate the absolute magnitudes of Type Ia supernovae (SNIa) and to derive the Hubble constant. We revise their result for H0 from 62 to 70 +/- 5 km s-1 Mpc-1. The Freedman et al. value is revised from 72 to 76 +/- 8 km s-1 Mpc-1. These results are insensitive to Cepheid metallicity corrections. The Cepheids in the inner region of NGC 4258 yield a modulus of 29.22 +/- 0.03 (int.) compared with a maser-based modulus of 29.29 +/- 0.15. Distance moduli for the LMC, uncorrected for any metallicity effects, are 18.52 +/- 0.03 from a reddening-free relation in V, I; 18.47 +/- 0.03 from a period-luminosity relation at K; 18.45 +/- 0.04 from a period-luminosity-colour relation in J, K. Adopting a metallicity correction in V, I from Macri et al. leads to a true LMC modulus of 18.39 +/- 0.05.

dgruss23
2007-Jul-19, 01:11 PM
From another BAUT Q&A thread (http://www.bautforum.com/1031823-post151.html): this 2007 paper [...]: Cepheid parallaxes and the Hubble constant (http://adsabs.harvard.edu/doi/10.1111/j.1365-2966.2007.11972.x):

You've been ToSeek'd since I linked to that paper in the OP of this thread and post #131 (http://www.bautforum.com/1029417-post131.html) of the thread you linked to.

Nereid
2007-Jul-19, 01:22 PM
You've been ToSeek'd since I linked to that paper in the OP of this thread and post #131 (http://www.bautforum.com/1029417-post131.html) of the thread you linked to.
Oops! :o

So do you think it's worth going over the Freedman et al. HKP paper in some detail? And/or the Spergel et al. one?

Not all of either, but the parts which lead to the final estimate and the estimated systematic uncertainties.

More generally, maybe it's worth looking at what's known about the stated systematic uncertainties in the main independent methods used to get to an estimate of H0?

dgruss23
2007-Jul-19, 04:52 PM
Oops! :o

So do you think it's worth going over the Freedman et al. HKP paper in some detail?

And probably the related papers. The HKP final report doesn't present all the details of the calibration of the secondary distance indicators. Other papers were devoted to that.


And/or the Spergel et al. one?

Perhaps at some point.


Not all of either, but the parts which lead to the final estimate and the estimated systematic uncertainties.

More generally, maybe it's worth looking at what's known about the stated systematic uncertainties in the main independent methods used to get to an estimate of H0?

And cross-comparisons between methods reveal some inconsistencies, as I noted in the OP.

Nereid
2007-Jul-21, 06:14 PM
Measure the Hubble Constant (aka Freedman et al. (2000)).

From the Introduction (section 1):
The previous 29 papers in this series have provided the distances to individual galaxies based on the discovery and measurement of Cepheids, discussed the calibration of the data, presented interim results on the Hubble constant, and provided the calibration of secondary methods, and their individual determinations of the Hubble constant. A recent paper by Mould et al. (2000a) combines the results for secondary methods (Gibson et al. 2000; Ferrarese et al. 2000a; Kelson et al. 2000; Sakai et al. 2000) with a weighting scheme based on numerical simulations of the uncertainties. In this paper, we present the final, combined results of the Key Project.
[...]
Establishing plausible limits for the Hubble constant requires a careful investigation of systematic errors. We explicitly note where current limits in accuracy have been reached. We intend this paper to provide an assessment of the status of the global value of H0.
For our purposes, the Mould et al. (2000a) paper is (very likely) worth reading; but let's start with §2 and §3, a summary of the method and determination of Cepheid distances. After that, we could consider the van Leeuwen et al (2007) paper as it relates to Cepheid distances, before moving on to §4 and §5 (where Freedman et al. apply a nearby flow field correction and "compare the value of H0 obtained locally with that determined at greater distances").

Then we could take a look at the secondary methods (§6 and §7, plus Mould et al. (2000a), van Leeuwen et al (2007) again, and some of the other papers mentioned in the OP). If we're still on track after that, or perhaps in conjunction, §8 should become highly pertinent ("The remaining sources of uncertainty in the extragalactic distance scale and determination of H0").

OK?

StupendousMan
2007-Jul-21, 07:17 PM
Measure the Hubble Constant (aka Freedman et al. (2000)).


Nereid is very probably referring to a paper published in 2001, "Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant", ApJ 553, 47.

http://adsabs.harvard.edu/abs/2001ApJ...553...47F

Go to this URL and click on the "arXiv e-print" link, then click again on the "PDF" link in the upper right to grab a copy of the text in PDF (unless you have access to the electronic ApJ).

I suspect that section 2 (mostly details of observations and reductions) will be of less interest to most readers here than section 3.

dgruss23
2007-Jul-22, 03:57 PM
Measure the Hubble Constant (aka Freedman et al. (2000)).

From the Introduction (section 1): [...] For our purposes, the Mould et al. (2000a) paper is (very likely) worth reading; but let's start with §2 and §3, a summary of the method and determination of Cepheid distances. After that, we could consider the van Leeuwen et al (2007) paper as it relates to Cepheid distances, before moving on to §4 and §5 (where Freedman et al. apply a nearby flow field correction and "compare the value of H0 obtained locally with that determined at greater distances").

Then we could take a look at the secondary methods (§6 and §7, plus Mould et al. (2000a), van Leeuwen et al (2007) again, and some of the other papers mentioned in the OP). If we're still on track after that, or perhaps in conjunction, §8 should become highly pertinent ("The remaining sources of uncertainty in the extragalactic distance scale and determination of H0").

OK?

Any discussion relevant to the topic of this thread is fine with me. My primary point is that this general belief that the value of the Hubble Constant has been largely finalized (at a value of ~70) by the HKP final results is incorrect. It is still possible that the value of H0 could be into the 80's - even if the HKP cepheid calibration was unchanged by newer results such as that of van Leeuwen. In fact the van Leeuwen study only increases the possibility that H0 is actually in the 90's.

So anything you want to comment on is fine. I pointed to a few items of interest in the OP.

dgruss23
2007-Jul-22, 04:01 PM
In fact the van Leeuwen study only increases the possibility that H0 is actually in the 90's.


I tried to fix this typo but the save function on changes has not been working for me the last few days. That is supposed to be 80's, not 90's.

parejkoj
2007-Jul-23, 03:46 AM
Greetings everyone!

I've been away from the BABB since before it became the BAUT. When I was last here, I was working for the Ion/Neutral Mass Spectrometer on the Cassini mission to Saturn. Now I'm an astrophysics grad student, studying low-luminosity AGN.

I started lurking again a couple weeks ago, and decided to jump in after noticing something on astro-ph that ya'll might find interesting: "A comprehensive study of Cepheid variables in the Andromeda galaxy. Period distribution, blending and distance determination" by F. Vilardell, C. Jordi and I. Ribas

http://arxiv.org/abs/0707.2965

They examine 281 fundamental-mode Cepheids in Andromeda and look at some of the systematics of the sample. Rather an interesting paper in its own right, but also relevant, I think, to the discussion at hand.

I'll be in and out... Hopefully my own ignorance won't drag things down too much here!

Ken G
2007-Jul-23, 02:31 PM
Welcome back. If you've read it, can you summarize its salient features for the lazier ones among us, present company included?

StupendousMan
2007-Jul-23, 05:05 PM
Welcome back. If you've read it, can you summarize its salient features for the lazier ones among us, present company included?

The most interesting portion of the paper to me was the authors' analysis of the degree to which crowding (light from fainter stars near the target Cepheids) contaminated the Cepheid measurements, and how they decided to address it (they used the period-amplitude relationship to pick samples with less crowding).

No really big news in it, I would say. The abstract does a good job of explaining their results.

Nereid
2007-Jul-23, 06:36 PM
For the purposes of this thread, here are the parts of this section of the HKP Final Results paper that have particular relevance (IMHO):
Since each individual secondary method is likely to be affected by its own (independent) systematic uncertainties, to reach a final overall uncertainty of ±10%, the numbers of calibrating galaxies for a given method were chosen initially so that the final (statistical) uncertainty on the zero point for that method would be only ~5%. (In practice, however, some methods end up having higher weight than other methods, owing to their smaller intrinsic scatter, as well as how far out into the Hubble flow they can be applied - see §7).
Which tells us what random uncertainty the team was aiming at.
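As a rough sketch of how the calibrator count maps to that ~5% target (the 0.3 mag intrinsic scatter below is an assumed, TFR-like value, not a number from the paper):

import math

sigma_int = 0.3   # mag: assumed intrinsic scatter of a secondary relation
for n in (3, 6, 10, 20):
    sigma_zp = sigma_int / math.sqrt(n)    # statistical zero-point error (mag)
    frac = 10 ** (sigma_zp / 5.0) - 1.0    # equivalent fractional distance error
    print(f"N={n:2d}: {sigma_zp:.3f} mag -> {frac:.1%} in distance")
# N= 3: ~8.3%, N= 6: ~5.8%, N=10: ~4.5%, N=20: ~3.1%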
The calibration of Type Ia supernovae was part of the original Key Project proposal, but time for this aspect of the program was awarded to a team led by Allan Sandage.
So what is the equivalent (Sandage team) "Final Results HKP" paper re calibration of Type Ia supernovae?
To summarize the total Cepheid calibration sample, as part of the Key Project, we have surveyed and analyzed data for 18 galaxies, in addition to reanalyzing HST archival data for 8 galaxies observed by other groups. When these distances are combined with those for 5 very nearby galaxies (M31, M33, IC 1613, NGC 300, and NGC 2403), it results in a total of 31 galaxies, subsets of which calibrate individual secondary methods, as shown in Table 2.
Which doesn't seem to be many galaxies, even if there are many Cepheids in each galaxy.

An interesting exercise might be to duplicate a subset of the HKP team's work, using new observations of Cepheids in galaxies other than those 31 ... if there are, in fact, any such observations.

The determination of accurate distances carries with it a requirement for an accurate, absolute photometric calibration. Ultimately, the uncertainty in the Hubble constant from this effort rests directly on the accuracy of the Cepheid magnitudes themselves, and hence, systematically on the CCD zero-point calibration.
And the team did find it very difficult to beat down the uncertainties in the absolute photometric calibration (see Section 2.5) ... so the value of the Hubble constant cannot be locked down tighter than these absolute photometric calibration uncertainties ...

dgruss23
2007-Jul-24, 12:45 PM
So what is the equivalent (Sandage team) "Final Results HKP" paper re calibration of Type Ia supernovae?

Here (http://adsabs.harvard.edu/abs/2006ApJ...653..843S). But it is difficult to directly compare their result with the HKP result because they also use a different Cepheid calibration. The van Leeuwen paper pointed out that the Sandage group uses a steeper slope for the Cepheid P-L relation, which van Leeuwen et al were not able to confirm. Rather, they found a slope almost unchanged from that used by the HKP.




Which doesn't seem to be many galaxies, even if there are many Cepheids in each galaxy.

Yes, that was one of my points in the OP. Several of the secondary indicators have 6 or fewer zero-point calibrators, which increases the chances of a systematic error in the zero point.


An interesting exercise might be to duplicate a subset of the HKP team's work, using new observations of Cepheids in galaxies other than those 31 ... if there are, in fact, any such observations.

There have been a couple of papers that have come out since the HKP final report that have looked at the Cepheids in other galaxies. The Cepheid sample hasn't been significantly enlarged beyond the HKP sample though.

dgruss23
2007-Jul-31, 01:52 AM
An, Terndrup, & Pinsonneault (http://xxx.lanl.gov/abs/0707.3144) have used open clusters to derive the Galactic Cepheid P-L relation. Applying their P-L relation to NGC 4258, they find a distance modulus of 29.28 +/- 0.10, which agrees with the maser distance. For the LMC they find a distance modulus of 18.34 +/- 0.06. The LMC distance modulus is 0.16 mag smaller than the distance modulus adopted by the HKP and results in a larger value of the Hubble constant.

With the new LMC distance the HKP value of H0 would be revised to 77.5 and the Tully&Pierce value of H0 would be revised to 83.
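A sketch of that propagation (the 18.50 starting modulus is the LMC value the HKP adopted, implied here by the quoted 0.16 mag difference):

mu_hkp, mu_new = 18.50, 18.34             # LMC distance moduli (mag)
scale = 10 ** ((mu_hkp - mu_new) / 5.0)   # all distances shrink by this factor
for name, h0 in (("HKP", 72.0), ("Tully & Pierce", 77.0)):
    print(f"{name}: H0 = {h0:.0f} -> {h0 * scale:.1f}")   # 77.5 and ~82.9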

Jerry
2007-Jul-31, 02:00 PM
Greetings everyone!
"A comprehensive study of Cepheid variables in the Andromeda galaxy. Period distribution, blending and distance determination" by F. Vilardell, C. Jordi and I. Ribas

http://arxiv.org/abs/0707.2965


The analysis of the P-L relationship for the FM Cepheids reveals a large scatter, which is not explained solely through the effects of interstellar absorption and metallicity. Although additional efforts are needed to reduce the obtained uncertainties, a new method to compute the effect of blending is presented...

...The effect of blending has been shown to be larger than 0.09 mag in the distance modulus to M31, thus having an effect as important as the metallicity correction. Therefore, blending should always be taken into account when obtaining extragalactic distance determinations with Cepheids.
These numbers are very important for establishing the zero point of the Hubble flow. I don't have a feel for how much effect a ~0.1 magnitude error in Cepheid distance scaling would have. The 'Cepheid' slope to nearby galaxies is critical. Anybody want to weigh in on this before I work out an estimate?

dgruss23
2007-Jul-31, 08:23 PM
These numbers are very important for establishing the zero point of the Hubble flow. I don't have a feel for how much effect a ~0.1 magnitude error in Cepheid distance scaling would have. The 'Cepheid' slope to nearby galaxies is critical. Anybody want to weigh in on this before I work out an estimate?

A 0.10 mag error in the Cepheid scale would change H0 by ~3.3 km s-1 Mpc-1.

In this particular case the value of H0 would be reduced rather than increased, because the effects of blending cause Cepheid distances to be less than the actual distance. However, as the authors note, other studies have demonstrated that the blending effect is only important for very local galaxies.
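The ~3.3 figure comes from linearizing the modulus-to-H0 mapping for small errors; a sketch using H0=72 as the reference value:

import math

H0, d_mu = 72.0, 0.10                    # reference H0 and zero-point error (mag)
dH0 = H0 * (math.log(10) / 5.0) * d_mu   # dH0 ~ H0 * ln(10)/5 * d(mu)
print(f"{d_mu} mag -> ~{dH0:.1f} km/s/Mpc")   # ~3.3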

StupendousMan
2007-Jul-31, 09:56 PM
A 0.10 mag error in the Cepheid scale would change H0 by ~3.3 km s-1 Mpc-1.

In this particular case the value of H0 would be reduced rather than increased, because the effects of blending cause Cepheid distances to be less than the actual distance. However, as the authors note, other studies have demonstrated that the blending effect is only important for very local galaxies.

Could you provide the exact quotation from the paper to support your last sentence, please? I just re-read sections of the paper and could not find such a statement. Moreover, it doesn't seem to make sense: the more distant a galaxy, the larger the area (and volume) covered by a seeing disk, so the LARGER the number of stars which are blended together with any particular Cepheid. The blending problem should become _worse_ with distance, it seems, not less important, as you imply.

dgruss23
2007-Jul-31, 10:24 PM
Could you provide the exact quotation from the paper to support your last sentence, please? I just re-read sections of the paper and could not find such a statement. Moreover, it doesn't seem to make sense: the more distant a galaxy, the larger the area (and volume) covered by a seeing disk, so the LARGER the number of stars which are blended together with any particular Cepheid. The blending problem should become _worse_ with distance, it seems, not less important, as you imply.

They discuss this in the introduction of the paper and cite Gibson et al (2000) (http://adsabs.harvard.edu/abs/2000ApJ...530L...5G). Specifically, Gibson et al did not find the predicted amount of blending influence in the residuals of type Ia SN and Tully-Fisher residuals. Gibson et al argue that the high stellar background of the LMC and M-31 fields is not representative of the more distant galaxies. Also see Ferrarese et al (2000) (http://adsabs.harvard.edu/abs/2000PASP..112..177F).

StupendousMan
2007-Aug-01, 01:43 PM
Ah, I see. Thank you.

What Vilardell et al. have done is to define "blending" as the mixing of light from a target Cepheid and any gravitationally bound companion stars, while using the term "crowding" to refer to the mixing of light from a target Cepheid and "unrelated" stars which appear in the same general area. Using these terms, "blending" is important only in nearby galaxies, because it is only in (VERY) nearby galaxies that we can hope to resolve most of the "unrelated" stars from a target Cepheid.

It's not clear to me that the distinction is very important. After all, the major effect of both "blending" and "crowding" is to diminish the amplitude of a Cepheid's apparent variation in light.
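A small sketch of that amplitude dilution, with arbitrary illustrative fluxes: adding constant contaminating light at every phase compresses the peak-to-trough magnitude amplitude.

import math

def amp_mag(peak, trough):
    # Peak-to-trough variation expressed in magnitudes.
    return 2.5 * math.log10(peak / trough)

peak, trough = 1.6, 0.6                 # Cepheid alone: ~1.07 mag amplitude
for c in (0.0, 0.2, 0.5):               # contaminant flux / mean Cepheid flux
    extra = c * (peak + trough) / 2.0   # constant light added to every phase
    print(f"c={c:.1f}: amplitude = {amp_mag(peak + extra, trough + extra):.2f} mag")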

dgruss23
2007-Aug-02, 01:00 AM
Ah, I see. Thank you.

What Vilardell et al. have done is to define "blending" as the mixing of light from a target Cepheid and any gravitationally bound companion stars, while using the term "crowding" to refer to the mixing of light from a target Cepheid and "unrelated" stars which appear in the same general area. Using these terms, "blending" is important only in nearby galaxies, because it is only in (VERY) nearby galaxies that we can hope to resolve most of the "unrelated" stars from a target Cepheid.

It's not clear to me that the distinction is very important. After all, the major effect of both "blending" and "crowding" is to diminish the amplitude of a Cepheid's apparent variation in light.

It is really not correct to say blending is not important either. What Gibson et al really demonstrated was that the effects of blending/crowding ultimately did not have a global influence on the distance scale.

I also have my doubts about their claim that the NGC 4603 Cepheid distance may be underestimated by > 1.0 mag because of extreme crowding effects. They make this statement based upon the H-band TFR distance the HKP found for the Centaurus cluster. However, the SBF distance to the Centaurus cluster (Tonry et al) is the same distance as the Newman et al Cepheid distance. I also find that the TFR distance is consistent with the SBF distance using 2MASS Ks band magnitudes. My own investigation suggests that the HKP I-band TFR distances have overestimated cluster distances.

folkhemmet
2007-Aug-02, 11:06 AM
It is interesting how the spread in values for the Hubble parameter has been shrinking over the years. Few groups are still getting results for H0 in the 50s and 80s, but many more groups are getting results in the 60s and 70s. Even Sandage et al, an adamant proponent of a lower value for H0, recently published a paper which put H0 at 62. So I think that, unless some unlikely trick of nature or coincidence is at work, the Hubble constant is almost surely between 60 and 80 - and probably right around 70. Perhaps one could do a statistical analysis of the narrowing of the spread of H0 over time and then extrapolate to determine roughly when H0 will be "nailed down." Lastly, as antoniseb touched on, the GAIA mission will significantly improve measurements of the cosmic distance ladder, thereby ending the debate over the exact value of H0. Here is a paper which describes the mission's likely impact on this very important part of astrophysics:
GAIA and the Extragalactic Distance Scale (http://arxiv.org/abs/astro-ph/0208178)

dgruss23
2007-Aug-02, 12:53 PM
It is interesting how the spread in values for the Hubble parameter has been shrinking over the years. Few groups are still getting results for H0 in the 50s and 80s, but many more groups are getting results in the 60s and 70s. Even Sandage et al, an adamant proponent of a lower value for H0, recently published a paper which put H0 at 62. So I think that, unless some unlikely trick of nature or coincidence is at work, the Hubble constant is almost surely between 60 and 80 - and probably right around 70.

This is my reason for starting this thread. It doesn't require any tricks of nature for the current value of H0 to be underestimated. Recall some points I made in the OP:


Looking at the HKP final analysis reveals some other avenues for caution in accepting H0=72 as the final word:


- One of the methods they used (the FP) actually gave a Hubble constant of 82.
- Only 4 galaxies were used for the Type II SN H0 estimate, and only 3 calibrators with Cepheid distances were available for calibration of the zero point.
- Only 6 galaxies in 6 clusters were used for the SBF analysis - and the number of Cepheid calibrators was the same size: 6.
- While there were 36 Type Ia SN in the analysis, there were only 6 galaxies for calibrating the zero point.
- The I-TFR distances tend to overestimate distances relative to other methods - including methods presented in their own paper for some clusters. For example, the FP distance to Abell 3574 (Table 9) is 51.6 Mpc while the I-TFR distance in Table 7 is 62.2 Mpc. The Centaurus 30 cluster I-TFR distance is 43.2 Mpc (Table 7), whereas the Cepheid distance to NGC 4603 in the same cluster is 33.3 Mpc and the SBF method from the large study of Tonry et al (2001) gives a distance of ~33 Mpc (the same as the Cepheid distance). For Antlia the HKP I-TFR distance is 45.1 Mpc whereas the Tonry et al SBF distance is ~33 Mpc.

This is not me making stuff up - it comes right from the HKP final report and the other papers mentioned. Everybody likes to think "The HKP used 5 methods to find H0=72. They've nailed H0."

But wait - only 4 Type II SN were used - with only 3 zero point calibrators. The Type II SN result is irrelevant. And only 6 galaxies with SBF distances were used - again the result is irrelevant.

That leaves 3 methods: Fundamental Plane, SN Ia, and the I-band TFR. The Fundamental Plane gives H0=82 - not 72, but you never hear people talk about that.

The SN Ia give H0=~71, but only 6 zero point calibrators were available. The I-TFR also gives H0=~71, but where comparisons are available, their cluster distances severely overestimate the distances relative to the distances derived from other methods as I noted in the last bullet above. If you overestimate distances, you underestimate H0.

And none of the above touches upon the recent results of van Leeuwen et al (linked to in a number of earlier posts) and other groups that find a downward revision of the Cepheid P-L zero point which would systematically reduce the distances of all the Cepheid calibrators and the resulting zero points of the secondary distances indicators.

And as for Sandage, he gets a low value of H0 by advocating huge bias corrections under the assumption of much larger intrinsic scatter in the distance indicators than is observed empirically. In fact, van Leeuwen et al showed that he adopted a Cepheid P-L slope that is too steep, which makes his value of H0 meaningless.


Perhaps one could do a statistical analysis of the narrowing of the spread of H0 over time and then extrapolate to determine roughly when H0 will be "nailed down."

The studies that find H0 adopt different assumptions and methods. Such an analysis would not really tell you anything. It would be much like what Lyndon Ashmore does when he adopts H0=64 because it is the average of numerous studies in the literature.


Lastly, as antoniseb touched on, the GAIA mission will significantly improve measurements of the cosmic distance ladder, thereby ending the debate over the exact value of H0. Here is a paper which describes the mission's likely impact on this very important part of astrophysics:
GAIA and the Extragalactic Distance Scale (http://arxiv.org/abs/astro-ph/0208178)

Here is the key point from their abstract:


The main source of systematic errors are therefore the shape and the zero point of the P-L relation of Cepheids and its possible dependence on metallicity. GAIA will essentially eliminate these error sources.

Gaia may lead to a refinement of the slope and zero point of the Cepheid P-L relation, thus reducing the statistical uncertainty of those factors, but that may only result in a negligible change in the distance scale. You then must apply that to the secondary distance indicators to get a value of H0 - and we're right back to looking at fixing the problems with the HKP results.

Jerry
2007-Aug-02, 07:34 PM
Even Sandage et al, an adamant proponent of a lower value for H0, recently published a paper which put H0 at 62.

Sandage was getting beaten over the head and shoulders about how far his surface brightness methodology diverged from relativistic predictions. He adapted because luminosity evolution became the 'most reasonable' explanation, not because his raw technique produced higher values for H0.

To any degree that the convergence around a consensus value is driven by the desire for an uncontroversial consensus, confidence in this trend is misplaced. A better question is why surface brightness studies require luminosity evolution that is not consistent with large scale metallicity trends?

RussT
2007-Aug-03, 01:20 AM
I just found this, which would appear to be right in the middle of this entire subject area.

http://www.seds.org/messier/m/m064.html

Here are a couple of questions if someone wants to take a stab at them.

1. Is this the first/only galaxy that has been found to have the inner stars rotating in the opposite direction of the outer disc stars...IF that is really the case!?

2. What is the farthest galaxy where we can accurately determine the galaxy rotation curves of one side of the galaxy being redshifted moving away from us, and MORE IMPORTANTLY, actually 'measure' the 'Blue-shifted' stars moving toward us???

folkhemmet
2007-Aug-04, 05:42 AM
Dgruss23 said: "The studies that find H0 adopt different assumptions and methods. Such an analysis would not really tell you anything. It would be much like what Lyndon Ashmore does when he adopts H0=64 because it is the average of numerous studies in the literature."

Maybe I am reading him/her wrong, but I get the impression that dgruss23 thinks the Hubble constant is 80 or above, or, its actual value will forever remain elusive due to....

It's just not fair.
There's no way of getting at what's out there.
There's no objective truth.
Don't fool yourself mortal, forget about proof.
All analyses have their flaws.
Careful thought will only run up against nature's impervious walls!

There is a conspiracy afoot to suppress all alternatives.
It is easier to criticize than to produce your own results.
We are now really no closer to understanding anything about the large scale Universe than what we were during the neolithic.
But if astrophysics is akin to literary critical theory, then what is the point of engaging in expensive astrophysical research?
The cosmos is shrouded in mystery.
Let's be reluctant cosmologists and radical skeptics.
Cosmology has no direct benefit to humanity even close to research in the life sciences.
And if there really is some fundamental reason why we will never be close to understanding anything about the large scale properties of the Universe, then why should we trust the assumptions/methods behind the ATM Universe-students?

Or, alternatively, we could accept that astrophysicists are making significant progress assembling a giant "Universe story jigsaw" and some unforeseen technological breakthrough which may benefit life will come from the practice of astrophysics.

Jerry,

Jerry and his fellow ATMers should be careful not to continue to engage in inevitable divergence from a consensus value because of their unceasing desire for controversy as the consensus!

There is no such thing as an uncontroversial analysis, including Jerry-like ATM analyses which inevitably exhibit the same flawed methodology:

Jerry et al boldly try to lift away the proverbial fog (all the while making assumptions of their own) and set us straight in an I-am-holier-than-thou manner - hardly the kind of humility one might expect from true radical skeptics. Jerry et al's view is that most of modern physics is still up for grabs, very little of what is said to be known is actually known, life is full of mystery, etc. Essentially, Jerry et al seem to hold a philosophical assumption which is, at least in my opinion, pretty nihilistic and gloomy - that is, they assume that the Universe is inherently unknowable and impervious to human understanding. Hence, Nereid's "Jerry radical skepticism" phrase is quite apt. Yet time and again, very strangely, Jerry et al DESIRE to have it both ways - they act like so much is unknown or unknowable about the Universe; however, uncannily, precisely enough stuff is known so they can use at least something as a foundation for their ATM idea. Thus, Jerry et al, like most ATMers, hold a philosophical assumption/methodology which they themselves persistently violate, time after time after time...

dgruss23
2007-Aug-05, 01:14 AM
Dgruss23 said: "The studies that find H0 adopt different assumptions and methods. Such an analysis would not really tell you anything. It would be much like what Lyndon Ashmore does when he adopts H0=64 because it is the average of numerous studies in the literature."

Maybe I am reading him/her wrong, but I get the impression that dgruss23 thinks the Hubble constant is 80 or above,

Well, I've stated that the reason I started the thread is to show how it is still possible for H0 to be above 80.


or, its actual value will forever remain elusive due to....

It's just not fair.
There's no way of getting at what's out there.
There's no objective truth.
Don't fool yourself mortal, forget about proof.
All analyses have their flaws.
Careful thought will only run up against nature's impervious walls!

There is a conspiracy afoot to suppress all alternatives.
It is easier to criticize than to produce your own results.
We are now really no closer to understanding anything about the large scale Universe than what we were during the neolithic.
But if astrophysics is akin to literary critical theory, then what is the point of engaging in expensive astrophysical research?
The cosmos is shrouded in mystery.
Let's be reluctant cosmologists and radical skeptics.
Cosmology has no direct benefit to humanity even close to research in the life sciences.
And if there really is some fundamental reason why we will never be close to understanding anything about the large scale properties of the Universe, then why should we trust the assumptions/methods behind the ATM Universe-students?

Or, alternatively, we could accept that astrophysicists are making significant progress assembling a giant "Universe story jigsaw" and some unforeseen technological breakthrough which may benefit life will come from the practice of astrophysics.

Is this the best you can do? I've made very specific points on this thread - as always backed up by journal citations and explanation. You could respond to those points. I really don't understand what your tantrum is all about. I'm trying to have an intellectual discussion here. If you're uninterested or incapable then please feel free to ignore this thread. Your little dramatic fit deserves no further comment.

Zahl
2007-Aug-06, 12:54 AM
The Fundamental Plane gives H0=82 - not 72, but you never hear people talk about that.

Why single out one method, especially one that has the largest systematic errors? The FP involves the measurement of three observables with an intrinsic scatter of 10-20%, an indirect Cepheid calibration (since there are no Cepheids in fundamental plane ellipticals), assumptions that M/L ratios always scale with galaxy structural parameters in the same way and that early type galaxies always have similar stellar populations for a given galaxy mass, etc.

So it is not surprising that the FP is a particularly error prone method and is seldom used. Add to this the fact that the HKP used only 3 targets to calibrate the FP (Leo I group, Virgo cluster, Fornax cluster) and - according to the most recent determinations in the literature - may have underestimated their distance.

dgruss23
2007-Aug-06, 02:20 AM
Why single out one method, especially one that has the largest systematic errors?

Except that I did not single out one method. If you take another look at my posts, you'll see I've commented on all 5 methods the HKP used. In fact, the extent of what I've said about the FP is that the HKP found H0=82 with it. Why single out my comments on one method only?



The FP involves the measurement of three observables with an intrinsic scatter of 10-20%, an indirect Cepheid calibration (since there are no Cepheids in fundamental plane ellipticals), assumptions that M/L ratios always scale with galaxy structural parameters in the same way and that early type galaxies always have similar stellar populations for a given galaxy mass, etc.

So it is not surprising that the FP is a particularly error prone method and is seldom used. Add to this the fact that the HKP used only 3 targets to calibrate the FP (Leo I group, Virgo cluster, Fornax cluster) and -

I'm well aware of all this. You'll note that I have not argued the Hubble constant could be in the 80's based upon the FP. I've made other points that support my contention.

But it seems, then, that if you find the 3 target clusters problematic, you would also find the dearth of calibrators for the Type II SN (3), Type Ia SN (6), and SBF method (6) problematic?


according to the most recent determinations in the literature - may have underestimated their distance.

What references would you be referring to in this instance? I did an ADS search, but I'm not sure if you're referring to the FP itself, or the 3 clusters used to calibrate the FP.

Zahl
2007-Aug-06, 10:38 AM
Except that I did not single out one method. If you take another look at my posts, you'll see I've commented on all 5 methods the HKP used. In fact, the extent of what I've said about the FP is that the HKP found H0=82 with it. Why single out my comments on one method only?

You dismissed other methods, even going as far as calling SBF "irrelevant" even though it actually has smaller systematic and random errors than the FP result that you accepted at face value - then paraded the 82 figure multiple times in the thread. You drew attention to the number (6) of SN Ia calibrators used by the HKP, but did not even mention that the FP used only 3 and that they were indirect. You completely ignored the very numerous and complicated sources of error in the FP method that I summarized in my previous post, then asked why you never hear people talk about that result. Well... Either you were not aware of all this or your personal bias is obvious.

dgruss23
2007-Aug-06, 01:50 PM
You dismissed other methods, even going as far as calling SBF "irrelevant" even though it actually has smaller systematic and random errors

I did not call the SBF irrelevant. I stated that the HKP result with the SBF was irrelevant because they only used 6 galaxies in their analysis. That's a big difference in meaning. And I don't ignore the smaller systematic errors in the SBF method, but the HKP only used 6 galaxies and therefore their determination of H0 from this method is irrelevant. The sample is too small.
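
To put the small-sample objection in numbers, here is a rough sketch - the ~2 Mpc rms line-of-sight depth is an assumed figure for illustration only:

import math

# Standard error of a mean cluster distance versus sample size, assuming
# each galaxy sits at a random depth with ~2 Mpc rms scatter (assumed).
sigma_depth = 2.0  # Mpc
for n in (1, 3, 6, 28):
    print(n, round(sigma_depth / math.sqrt(n), 2))  # Mpc error on the mean

The error on the mean only shrinks as 1/sqrt(N), so a single galaxy per cluster carries the full front-to-back uncertainty.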



than the FP result that you accepted at face value - then paraded the 82 figure multiple times in the thread.

You're drawing the conclusion that I "accept the FP result at face value" on very meager information. Here is the sum total of what I've said about the FP on this thread:


One of the methods they used (the FP) actually gave a Hubble constant of 82 (one bullet of 5 from post#1)

and

That leaves 3 methods: Fundamental Plane, SN Ia, and the I-band TFR. The Fundamental Plane gives H0=82 - not 72, but you never hear people talk about that. (post #34)

How do you conclude that I accept the FP H0 result at face value from that? I mentioned it twice and expanded on those mentions zero times.

My point from the start on this thread has been that H0 could be in the 80's. Everybody likes to think that the HKP used 5 methods to get H0 = 72 and that, because of the use of 5 methods, the 80's is not possible. Sure they used 5 methods, but only 4 of those methods led to H0=~70-72. The FP result did not support the rest. And I maintain that the HKP's SBF and Type II SN samples are too small. So those 5 methods have been trimmed down to 2 methods: the Tully-Fisher Relation and Type Ia SN.

As I said before, if you go back through this thread and look at my comments you'll see that the primary support I've offered for the possibility that H0 could be in the 80's involves the NGC 4258 maser distance, van Leeuwen et al study, and the Tully&Pierce (2000) I-band Tully Fisher result (Luminosity-Linewidth actually - Dr. Tully never uses the name "Tully-Fisher" in his papers). Please take notice that I have not argued that H0 could be in the 80's because the FP gave H0=82.


You drew attention to the number (6) of SN Ia calibrators used by the HKP, but did not even mention that the FP used only 3 and that they were indirect. You completely ignored the very numerous and complicated sources of error in the FP method that I summarized in my previous post,

As I stated above - and I realize I didn't clearly explain my meaning on this point when it was first mentioned - my reason for citing the HKP FP result is that only 4 of the 5 methods the HKP used gave H0=72. The calibration limitations with the FP are not relevant to that point. The fact is that one method did not give H0=72.

Now if I was arguing H0=82 based upon the FP result (which I have not and if you don't believe me carefully re-read the thread) - then this point you've made would be important.


then asked why you never hear people talk about that result. Well... Either you were not aware of all this or your personal bias is obvious.

As I told you I was aware of the limitations with the FP. As I said to folkhemmet, I'm trying to have an intellectual discussion here on the possibility that H0 could still be in the 80's despite the HKP results.

You've responded to nothing that I've stated except two sentences on the fundamental plane and then tell me I have a personal bias.

BTW, you never did answer these questions:


But it seems, then, that if you find the 3 target clusters problematic, you would also find the dearth of calibrators for the Type II SN (3), Type Ia SN (6), and SBF (6) methods problematic?

What references would you be referring to in this instance? I did an ADS search, but I'm not sure if you're referring to the FP itself, or the 3 clusters used to calibrate the FP.

I ask questions when I'm genuinely interested in the answer, not to make my posts longer.

Edited to add:

I just remembered I did have one other mention of the FP in the fifth bullet from the first post:


The I-TFR distances tend to overestimate distances relative to other methods - including methods presented in their own paper for some clusters. For example, the FP distance to Abell 3574 (Table 9) is 51.6 Mpc while the I-TFR distance in Table 7 is 62.2 Mpc. The Centaurus 30 cluster I-TFR distance is 43.2 Mpc (Table 7) whereas a Cepheid distance to NGC 4603 in the same cluster is 33.3 Mpc and the SBF method from the large study of Tonry et al (2001) gives a distance of ~33 Mpc (same as the Cepheid distance). For Antlia the HKP I-TFR distance is 45.1 Mpc whereas the Tonry et al SBF distance is ~33 Mpc.

neilzero
2007-Aug-06, 02:59 PM
We seem to stay with a value of the Hubble constant for a few years, then change it by a few percent, so we should not be surprised if the Hubble constant changes a few more times. Neil

Jerry
2007-Aug-06, 04:41 PM
Jerry and his fellow ATMers should be careful not to continue to engage in inevitable divergence from a consensus value because of their unceasing desire for controversy as the consensus!

The historical context of how the consensus value for the Hubble Constant has been established is very important.

The slipperiness of the zero-point in the calibration of the Hubble flow is not trivial: This is where the baton is passed between local and cosmic scaling, and a careful reading of the HKP reveals this is where there is the greatest discrepancy in the methods used to establish the consensus Hubble value!

Like it or not, since 2001, the widening family of light-curves observed has eroded the prior confidence placed in our understanding of the absolute magnitude of supernova-like events:

http://arxiv.org/PS_cache/astro-ph/pdf/0612/0612198v1.pdf


The comparison of SN theory and observation faces new and interesting challenges once multi-dimensional models are considered. Given the viewing angle dependence, the predictions of aspherical models bear an intrinsic multiplicity. Model validation can then no longer be limited to the traditional exercise of matching synthetic light curves and spectra to individual SN observations. Rather, we must also study the probability distributions and dispersion levels characterizing various model observables (e.g., peak magnitudes, decline rates, line velocities and polarization levels) along with the internal correlations relating different sets of such observables.


http://arxiv.org/PS_cache/astro-ph/pdf/0512/0512574v1.pdf


The effects on the photometric parameters and spectral features are also discussed. In particular, for the case of circumstellar dust, [light echoes] are found to introduce an apparent relation between the post-maximum decline rate and the absolute luminosity which is most likely going to affect the well known Pskowski-Phillips relation.


Jerry et al boldly try to lift away the proverbial fog (all the while making assumptions of their own) and set us straight in an "I am holier than thou" manner--hardly the kind of humility one might expect from true radical skeptics.

The fog is scientific uncertainty. What is not known is whether the types of errors identified by the researchers quoted above, and other yet-undetermined errors, shift the baseline enough that the real nature of the Hubble flow is obscured. Looking at the failure of the HKP to resolve a solid zero point for the Hubble flow, the results can just as easily be interpreted as evidence that one or more of the methodologies used to determine the Hubble constant must be flawed.


Jerry et al view is that most of modern physics is still up for grabs...
Absolutely. It is not a convergence of opinions and/or theories that governs the scientific method, it is a convergence of observations, and the universe keeps getting bigger.

For more than a century it was considered 'scientifically' unsafe to swim after eating. Someone finally sorted through all of the observational data and concluded there was no scientific basis for this long-held consensus theory propagated by thousands of scientists within the medical community. It can happen again.

R.A.F.
2007-Aug-06, 04:56 PM
It can happen again.

Sure it could. Problem is that you have not provided evidence that you're the one who is going to make "it" happen.

Why does this read like an ATM thread??

dgruss23
2007-Aug-06, 05:00 PM
Why does this read like an ATM thread??

It was certainly not my intention for this to be an ATM thread. I'm not promoting an ATM theory in this thread. But I suppose - given that most people have accepted H0=72, that it will sound ATM if someone suggests that H0 could still be in the 80's.

But I would not classify that argument as ATM. We're just talking about the value of the Hubble Constant. Even Tully&Pierce pointed out that they would get H0=86 from their Luminosity-Linewidth analysis if the maser distance to NGC 4258 was used to fix the Cepheid P-L zero point rather than the LMC distance.

StupendousMan
2007-Aug-06, 06:22 PM
It was certainly not my intention for this to be an ATM thread. I'm not promoting an ATM theory in this thread. But I suppose - given that most people have accepted H0=72, that it will sound ATM if someone suggests that H0 could still be in the 80's.


Actually, most astronomers I know -- including myself -- are well aware that the Hubble constant could range anywhere between, say, 60 and 85 km/s/Mpc. We understand that significant systematic errors are possible, even likely.

For convenience, however, when we publish quantities which depend on the value of the Hubble constant -- the luminosity of a sample of galaxies between z=0 and z=0.2, for example -- we often quote the results for one particular value of the Hubble constant; in many cases, H0 = 70. Why? So that it's easier to compare these results against others.

Sometimes astronomers publish quantities with a little factor of "h" included; that stands for "the value of the Hubble constant, divided by 100 km/s/Mpc." By inserting one's favorite value into the "h" factor, and carrying out whatever computations are indicated, one can then convert the published value to the equivalent for another choice of H0.
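
To make the bookkeeping concrete, a minimal sketch - the 30 h^-1 Mpc distance below is a made-up example, not a value from any paper:

# Convert an h-scaled published quantity to a preferred H0.
def rescale_distance(d_h_inv_mpc, h0_kms_mpc):
    h = h0_kms_mpc / 100.0   # "little h": H0 / (100 km/s/Mpc)
    return d_h_inv_mpc / h   # distance in Mpc for this choice of H0

print(rescale_distance(30.0, 72.0))  # ~41.7 Mpc
print(rescale_distance(30.0, 84.0))  # ~35.7 Mpc
# Luminosities published as L h^-2 rescale by 1/h^2 instead.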

In this case, I think that the astronomical community as a whole isn't as dogmatic as the press may portray it. We're not always so stupid ...

Zahl
2007-Aug-06, 07:29 PM
I'm not interested in playing word games with you, dgruss23. If it was not due to ignorance, you deliberately misled BAUT readers by dismissing the Hubble Key Project SBF result as "irrelevant" even though it has smaller systematic AND random errors than the FP result. It is completely laughable to claim that a result with +/- 5 random and +/- 6 systematic errors (the SBF result) is "irrelevant" and then present another whose random and systematic errors are both larger, without any discussion of those errors. If this was deliberate, it was incredibly rude and offensive to anyone who understands these things. As for the water maser distance, it has recently been determined by many authors that the maser and cepheid distances now agree and that the resulting H0 value is still in the 70s. If there was a contemporary case to be made for H0 in the 80s, somebody would make it but nobody has. It is possible, but unlikely.

For Leo I, Fornax and Virgo distances, search the ADS.

dgruss23
2007-Aug-06, 08:50 PM
Actually, most astronomers I know -- including myself -- are well aware that the Hubble constant could range anywhere between, say, 60 and 85 km/s/Mpc. We understand that significant systematic errors are possible, even likely.

For convenience, however, when we publish quantities which depend on the value of the Hubble constant -- the luminosity of a sample of galaxies between z=0 and z=0.2, for example -- we often quote the results for one particular value of the Hubble constant; in many cases, H0 = 70. Why? So that it's easier to compare these results against others.

Sometimes astronomers publish quantities with a little factor of "h" included; that stands for "the value of the Hubble constant, divided by 100 km/s/Mpc." By inserting one's favorite value into the "h" factor, and carrying out whatever computations are indicated, one can then convert the published value to the equivalent for another choice of H0.

In this case, I think that the astronomical community as a whole isn't as dogmatic as the press may portray it. We're not always so stupid ...

Thank you, but I'm well aware of all this. I don't know where I fit into this when you talk about "astronomers". As an independent researcher - using previously published data - I've published research in The Astrophysical Journal and Astrophysics&Space Science. So I understand much more than you might think - although I wouldn't necessarily consider myself an "astronomer".

dgruss23
2007-Aug-06, 09:25 PM
I'm not interested in playing word games with you, dgruss23.

I don't play word games. I choose my words carefully. I cannot help it if you choose to interpret meanings that I did not state and thus put me in the position of needing to explain the subtlety of the English language for you. You would do well to actually ask me what I mean if you suspect I have made an error rather than assume I have some deceptive intent.

Like it or not, there is a difference between saying that the SBF distance method is irrelevant and saying that the HKP SBF estimate of H0 is irrelevant because they had too small a sample size. That is not playing word games. That is distinguishing between different meanings. You interpreted what I said incorrectly and then want to get all offended and cranky about it.


If it was not due to ignorance, you deliberately misled BAUT readers by dismissing the Hubble Key Project SBF result as "irrelevant" even though it has smaller systematic AND random errors than the FP result. It is completely laughable to claim that a result with +/- 5 random and +/- 6 systematic errors (the SBF result) is "irrelevant" and then present another whose random and systematic errors are both larger, without any discussion of those errors. If this was deliberate, it was incredibly rude and offensive to anyone who understands these things.

Did you actually read what I wrote???? There are no word games here. I was very clear when I responded to you:


I did not call the SBF irrelevant. I stated that the HKP result with the SBF was irrelevant because they only used 6 galaxies in their analysis. That's a big difference in meaning. And I don't ignore the smaller systematic errors in the SBF method, but the HKP only used 6 galaxies and therefore their determination of H0 from this method is irrelevant. The sample is too small.

I stand by what I've stated. The HKP used 6 galaxies for their SBF analysis. That is not enough galaxies to determine H0. I once had a referee tell me that a sample of ~ 240 galaxies I was using was too small and you want me to take an H0 estimate from 6 SBF distances seriously?

The small systematic uncertainty of the SBF distances does not change this situation. If they had even used 3 galaxies per cluster instead of one, the situation would be different. The HKP's single SBF distance to the Coma cluster galaxy NGC 4881 gives a distance of 102.3 Mpc whereas they (the HKP again) get 85.8 Mpc from the I-band TFR (using 28 galaxies) - in line with what Tully&Pierce (2000) got with the TFR (86.3 Mpc - using 28 galaxies). The SBF distance to NGC 4881 could be bang on, and it still wouldn't make it an accurate estimate of the distance to the Coma cluster (and therefore H0). A single galaxy could be on the backside of the cluster and not representative of the mean cluster distance. And that is why I say their SBF determination of H0 is irrelevant.
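
To see how much that one galaxy matters for H0, a quick back-of-the-envelope - the ~7100 km/s flow-corrected Coma velocity is an assumed round number for illustration, not the HKP's figure:

v_coma = 7100.0  # km/s, assumed for illustration
for label, d_mpc in (("SBF (NGC 4881 alone)", 102.3),
                     ("I-band TFR (28 galaxies)", 85.8)):
    print(label, round(v_coma / d_mpc, 1))  # H0 = v/d in km/s/Mpc
# -> roughly 69 vs 83: the entire 70's-vs-80's dispute in one cluster.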

You want to accuse me of misleading the BAUT readers??? That is something I take offense to!!! People may disagree with me and, as with everybody, I'm occasionally wrong or unaware of other relevant results. But I do not appreciate being accused of intentionally misleading.

You seem to be claiming expertise here - surely you must know that a single distance estimate to a cluster is very risky because the galaxy in question may be on the front or backside of the cluster? Any H0 estimate from a single galaxy in a cluster should not be trusted - no matter how small the systematics of the method of finding distance. Should I claim you are misleading BAUT readers by failing to note/acknowledge this flaw in the HKP SBF sample size?


As for the water maser distance, it has recently been determined by many authors that the maser and cepheid distances now agree and that the resulting H0 value is still in the 70s. If there was a contemporary case to be made for H0 in the 80s, somebody would make it but nobody has. It is possible, but unlikely.

Now that is part of what we're discussing. The van Leeuwen study is one example of what you've mentioned here. They find a much closer agreement between the maser distance and the cepheid distance than the HKP found. But they revise the cepheid zero point, which pushes the HKP H0 estimate to 76 and would push the Tully&Pierce H0 estimate to 81.

There is also the result of An et al (http://xxx.lanl.gov/abs/0707.3144) that suggests a slightly larger downward revision in the distance scale than found by van Leeuwen et al.
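
The arithmetic connecting a zero-point revision to H0 is compact. A sketch - the -0.12 mag shift is my own back-calculation from 72 -> 76, not a figure quoted in either paper:

# A Cepheid zero-point shift of delta_mu magnitudes rescales every
# calibrated distance by 10**(delta_mu/5); H0 scales inversely.
def h0_after_zeropoint_shift(h0_old, delta_mu):
    return h0_old * 10.0 ** (-delta_mu / 5.0)

print(round(h0_after_zeropoint_shift(72.0, -0.12)))  # ~76 (HKP revised)
print(round(h0_after_zeropoint_shift(77.0, -0.12)))  # ~81 (Tully&Pierce revised)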


For Leo I, Fornax and Virgo distances, search the ADS.

Did you have specific studies in mind? You're the one that made the statement. Why should I try to guess which study(s) you meant? If you want to contribute something substantive to the discussion, how about linking to the papers?

dgruss23
2007-Aug-06, 09:44 PM
An ADS search from 2001 to present using the Keywords "Virgo cluster distance" gives over 200 hits. It looks like one of those is a SBF distance study that finds a mean distance of 16.5 Mpc.

The same search for Fornax gives 85 hits. For Leo I there is a handful of hits but nothing relevant to our discussion that I noted. Again it would be helpful if you would provide specific studies.

Jerry
2007-Aug-06, 10:20 PM
However, we note that the uncertainty in the distance to the LMC is one of the largest remaining uncertainties in the overall error budget for the determination of H0.

We note that if the distance modulus to the LMC is 18.3 mag, there will be a resulting 10% increase in the value of H0 to 79 km/sec/Mpc.

And from the latest paper submitted upon LMC distance:

THE DISTANCES TO OPEN CLUSTERS FROM MAIN-SEQUENCE FITTING. IV.
GALACTIC CEPHEIDS, THE LMC, AND THE LOCAL DISTANCE SCALE

http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.3144v1.pdf


We derive distances to NGC 4258, the LMC, and M33 of (m-M)0 = 29.28±0.10, 18.34±0.06, and 24.55±0.28, respectively, with an additional systematic error of 0.16 mag in the P-L relations. The distance to NGC 4258 is in good agreement with the geometric distance derived from water masers [delta(m-M)0 = 0.01±0.24]; our value for M33 is less consistent with the distance from an eclipsing binary [delta(m-M)0 = 0.37±0.34]; our LMC distance is moderately shorter than the adopted distance in the HST Key Project, which formally implies an increase in the Hubble constant of 7%±8%.

Keep in mind this is an absolute error in the Hubble value driven by baseline constraints. There are also possible systematics in the supernova observations I mentioned above, which would be additive, and beyond the 0.15 mag limits on supernova luminosities used in the HKP error estimates.


As for the water maser distance, it has recently been determined by many authors that the maser and cepheid distances now agree and that the resulting H0 value is still in the 70s. If there was a contemporary case to be made for H0 in the 80s, somebody would make it but nobody has. It is possible, but unlikely.
DT&P claims to be consistent with the maser results, and pushes the value of Ho to 79 without even touching the supernovae-derived curve. Whether it could be even higher or not is not limited by current observationally-derived limits, but by theoretical challenges imposed by higher numbers.

(I don't have a preferred value for what Ho is or should be, but I will admit I like theoretically challenging observational data!)

StupendousMan
2007-Aug-07, 02:34 AM
First dgruss wrote:



I'm not promoting an ATM theory in this thread. But I suppose - given that most people have accepted H0=72, that it will sound ATM if someone suggests that H0 could still be in the 80's.


I attempted to address his claim that "most people have accepted H0=72":



Actually, most astronomers I know -- including myself -- are well aware that the Hubble constant could range anywhere between, say, 60 and 85 km/s/Mpc. We understand that significant systematic errors are possible, even likely.


The reply:


Thank you, but I'm well aware of all this. I don't know where I fit into this when you talk about "astronomers".

??

I guess you're more interested in semantic games than science.

dgruss23
2007-Aug-07, 04:12 AM
I attempted to address his claim that "most people have accepted H0=72":

I guess you're more interested in semantic games than science.

Actually, I hate semantic games. I'd much prefer to talk about the science. And I think I've addressed more science than anyone on this thread. Your comment had nothing to do with the science I've discussed. Of all the points I've made dealing with the HKP final results and other papers, you chose to respond to none of them - only to a statement I made about what people think: that most "people" have accepted H0=72. And then you tell me I'm not interested in science??

Do you understand why I responded as I did to your post? Apparently not so let me be more specific.

Your explanation about the use of "h" and the adoption of H0=70 for ease of comparison is nothing new to me. I've read enough papers to be familiar with both those points and published my own work in ApJ. Your comments about "h" and the like are very basic applications that anyone who has published research on the distance scale should be familiar with. So I was attempting to save you the trouble of lecturing me about such basics by letting you know that I have enough background to be familiar with that type of information. However, as an independent researcher I wouldn't want to be so bold as to call myself an "astronomer". After all, astronomers have training I don't have and access to resources I don't have access to. I didn't want the fact that I've published a few papers to lead to an incorrect inference that I've got a PhD in astrophysics or am claiming to have expertise I don't have.

So what you are characterizing as a "semantic game" was actually my attempt to give you a little more background about myself so you have a better frame of reference when you respond to my posts. I'm not employed as an astronomer, but I'm not your typical poster here either. I thought it might save you some time to have that information ... that's all.

Seriously, I hate the semantic games. It frustrates me that I attempt to discuss evidence and zahl tells me I'm attempting to mislead BAUT members and then you tell me I'm playing semantic games. However, if some people wish to play semantic games, I'll do so in the interest of trying to move the discussion back to the science. Sometimes you have to get people past the semantics before you can discuss the science.


Actually, most astronomers I know -- including myself -- are well aware that the Hubble constant could range anywhere between, say, 60 and 85 km/s/Mpc. We understand that significant systematic errors are possible, even likely.

Ok, let me try again. I'm sorry that you feel I didn't respond appropriately to this part of your comments. And I'm being serious, not sarcastic in what follows:

That's great to know! So if we take your experience as a correct representation of what most astronomers would think, then the problem would seem to be that this understanding is lost in translation when the information is communicated to laymen? We have two populations here - researchers and laymen.

The comments so far on this thread suggest that most people who have responded think it is unlikely that H0 could be in the 80's, based upon the fact that most studies (or at least those reported in the popular literature, I guess) find H0 ~ 70.

In this thread I've pointed to a few reasons to be cautious about the HKP final results and why it is still a viable possibility that H0 could be in the 80's. You can see my earlier posts for those reasons. But then, based upon what you're saying, if evidence were presented that H0 is in the 80's, most astronomers would not simply brush it aside by assuming that the researchers in question must have done something wrong because most studies point to a lower H0, the WMAP results, concordance cosmology ... They would at least look carefully at the analysis?

Zahl
2007-Aug-07, 10:42 AM
If it was not due to ignorance, you deliberately misled BAUT readers by dismissing the Hubble Key Project SBF result as "irrelevant" even though it has smaller systematic AND random errors than the FP result. It is completely laughable to claim that a result with +/- 5 random and +/- 6 systematic errors (the SBF result) is "irrelevant" and then present another whose random and systematic errors are both larger, without any discussion of those errors. If this was deliberate, it was incredibly rude and offensive to anyone who understands these things.
Did you actually read what I wrote???? There are no word games here. I was very clear when I responded to you:


Originally Posted by dgruss23

I did not call the SBF irrelevant. I stated that the HKP result with the SBF was irrelevant because they only used 6 galaxies in their analysis. That's a big difference in meaning. And I don't ignore the smaller systematic errors in the SBF method, but the HKP only used 6 galaxies and therefore their determination of H0 from this method is irrelevant. The sample is too small.

I stand by what I've stated. The HKP used 6 galaxies for their SBF analysis. That is not enough galaxies to determine H0. I once had a referee tell me that a sample of ~ 240 galaxies I was using was too small and you want me to take an H0 estimate from 6 SBF distances seriously?

The small systematic uncertainty of the SBF distances does not change this situation. If they had even used 3 galaxies per cluster instead of one, the situation would be different. The HKP's single SBF distance to the Coma cluster galaxy NGC 4881 gives a distance of 102.3 Mpc whereas they (the HKP again) get 85.8 Mpc from the I-band TFR (using 28 galaxies) - in line with what Tully&Pierce (2000) got with the TFR (86.3 Mpc - using 28 galaxies). The SBF distance to NGC 4881 could be bang on, and it still wouldn't make it an accurate estimate of the distance to the Coma cluster (and therefore H0). A single galaxy could be on the backside of the cluster and not representative of the mean cluster distance. And that is why I say their SBF determination of H0 is irrelevant.

You want to accuse me of misleading the BAUT readers??? That is something I take offense to!!!

Yes. In fact, what you write above is ignorant nonsense. When finding h0 with the Surface Brightness Fluctuations method a distance is determined to a galaxy (not cluster) by measuring SBF in that galaxy and finding a Cepheid in that galaxy for calibration. Redshift is then found and h0 calculated. There is no need for the SBF galaxy to be representative of the mean cluster distance. Now, anybody reading this and wondering what to believe - ask yourselves if the authors and the referee of one of the most important and heavily cited papers in the business had "irrelevant" results published because of a trivial error ("only 6 galaxies for their SBF analysis") or if some poster on an internet forum doesn't know what the heck he is talking about.

Zahl
2007-Aug-07, 11:07 AM
DT&P claims to be consistent with the maser results, and pushes the value of Ho to 79

77, not 79. And it was +7%±8% so the increase is not even statistically significant.

dgruss23
2007-Aug-07, 01:49 PM
Yes. In fact, what you write above is ignorant nonsense.

My comments are for everyone following this discussion. First, if you do a search on Zahl's posting history you'll note that he is quite liberal with telling people they're ignorant or don't know what they're talking about - even KenG (http://www.bautforum.com/821474-post31.html), who has demonstrated himself to be very knowledgeable.

For the sake of clarity, here is what Zahl has claimed I've said that is ignorant nonsense:


I stand by what I've stated. The HKP used 6 galaxies for their SBF analysis. That is not enough galaxies to determine H0.

The point I've made is that for the SBF and Type II supernova H0 estimates, the HKP used a limited number of galaxies (6 and 4 respectively) - which is not enough to determine H0.

However, zahl's understanding of what the HKP did is different:


When finding h0 with the Surface Brightness Fluctuations method a distance is determined to a galaxy (not cluster) by measuring SBF in that galaxy and finding a Cepheid in that galaxy for calibration. Redshift is then found and h0 calculated. There is no need for the SBF galaxy to be representative of the mean cluster distance.

So zahl claims here that the HKP did not use one galaxy per cluster as I have stated. He is wrong. Here is what the HKP did:

1. They derived the SBF zero point by using local calibrators with cepheid distances.

2. They utilized SBF measurements to the brightest cluster galaxy (BCG) in 6 clusters assuming that the BCG is at the mean cluster distance.

3. They used a larger sample of galaxies within the cluster to determine the mean cluster redshift.

4. They calculated H0 by dividing the mean cluster redshift by the SBF distance to the BCG.
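
To make steps 3 and 4 concrete, a bare-bones numerical sketch - the member velocities below are placeholders, not the HKP's flow-corrected values:

# Step 3: mean cluster redshift from many members (illustrative values).
member_velocities = [6700.0, 7300.0, 7050.0, 7150.0]  # km/s, placeholders
v_mean = sum(member_velocities) / len(member_velocities)

# Step 4: H0 from the single BCG SBF distance (NGC 4881 in Coma).
d_bcg_mpc = 102.3
print(round(v_mean / d_bcg_mpc, 1))  # H0 rides entirely on one galaxy's depth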

As I said - 6 galaxies were used in 6 clusters. Now unlike zahl, I'll actually link to papers to support what I state instead of just lazily accusing people of being ignorant.

Here (http://adsabs.harvard.edu/abs/2000ApJ...529..745F) is the paper in which the HKP initially presented their SBF analysis. The calibration of the SBF method is discussed in section 6 and its application to H0 is discussed in section 8. For those that don't want to read the technical discussion in the paper you need look no farther than Table 4 in the paper. The table is titled "Sample of F814W-SBF Galaxies for Deriving H0". The first data column is titled "Cluster". The second column is titled "Galaxy ID". Note there are only 6 galaxies listed in Table 4 -- one galaxy for each cluster.

Now the HKP did the next best thing to having a sample of multiple galaxies per cluster: they selected the BCG - which might fairly be assumed to be at the cluster center. However, we can see that Zahl was flat out wrong - the HKP did use a single galaxy to represent the mean cluster distance for each of 6 clusters.

The HKP final results are presented in this paper (http://adsabs.harvard.edu/abs/2001ApJ...553...47F). If you look at sections 6.4 and Table 10 you'll see the same 6 galaxies with SBF distances listed as Table 4 of the previous paper. In section 6.4 the following is stated:


With HST, this method (SBF) is now being extended to larger distances (Lauer et al 1998); unfortunately, however, only six galaxies beyond the fornax cluster have published surface brightness fluctuation distances, with only four of them accurate enough to be of interest for cosmology.

I'm not sure how Zahl missed this. If he wants to tell me I'm "ignorant", one would think he would have read the relevant papers closely enough to be certain about that. One should not be so careless when leveling accusations of "ignorance". It makes one look quite foolish to be so easily shown wrong in one's proclamation.

I stand by what I've said in this thread. The HKP sample sizes for the SBF method and Type II SN are too small.


Now, anybody reading this and wondering what to believe - ask yourselves if the authors and the referee of one of the most important and heavily cited papers in the business had "irrelevant" results published because of a trivial error ("only 6 galaxies for their SBF analysis") or if some poster on an internet forum doesn't know what the heck he is talking about.

I think the lesson for BAUT readers is that you should not trust someone who so willingly accuses others of ignorance while refusing to link to a single paper. We can now see that in fact the referees did allow an SBF analysis with only 6 galaxies to be published. I'm not saying they were wrong to allow it to be published. My point is that the sample is too small to be truly relevant support for H0=70. And it is not as if the HKP workers are not aware that more work is needed. Ferrarese et al (2000 - linked to above) noted at the end of section 6 of their paper that more SBF work is needed:


To conclude, we must stress that the calibration of F814W SBF is likely to be significantly improved in the near future when more F814W SBF measurements will be performed for galaxies in the HST archive. In particular, it would be desirable to derive the slope of the color dependence using a large sample of galaxies belonging to different groups and clusters as was possible for I-SBF (Tonry et al 1997).

Speaking of the slope of the color dependence they said this in the preceding paragraph:


We conclude that the present amount of data is too limited to allow an empirical determination of the slope of the color dependence, and we prefer to impose the I-band slope of 4.5 +/- 0.30 on MF814W ... to account for the 0.05 mag difference between the theoretical calibrations of the two bands.

StupendousMan
2007-Aug-07, 02:05 PM
... then the problem would seem to be that this understanding is lost in translation when the information is communicated to laymen? We have two populations here - researchers and laymen.


Yes, I agree that this is the main issue. It is difficult to write a successful article for the popular press without simplifying, or over-simplifying.



The comments so far on this thread suggest that most people who have responded think it is unlikely that H0 could be in the 80's, based upon the fact that most studies (or at least those reported in the popular literature, I guess) find H0 ~ 70.

In this thread I've pointed to a few reasons to be cautious about the HKP final results and why it is still a viable possibility that H0 could be in the 80's. You can see my earlier posts for those reasons.


I agree with you. It is possible that H0 could be in the 80s. Unlikely, in my opinion, with the current weight of evidence against it, but possible.



But then, based upon what you're saying, if evidence were presented that H0 is in the 80's, most astronomers would not simply brush it aside by assuming that the researchers in question must have done something wrong because most studies point to a lower H0, the WMAP results, concordance cosmology ... They would at least look carefully at the analysis?

Yes.

However, let me point out that there are two ways one might "present evidence that H0 is in the 80s".

One way is to use a single method --- say, surface-brightness fluctuations --- to measure the distance to a small set of galaxies, calculate the value of H0 based on those distances and radial velocities, and claim "H0 is 82". This will sway very few scientists, because it will be a small bit of evidence for a high value of H0, whereas there exists a much larger body of evidence for a smaller H0.

Another way is to find an important systematic error in one of the earlier steps on the distance ladder. For example, if the distance modulus to the LMC could be shown to be much smaller than 18.50, due to (this is just an example) some kind of previously undetected anomalous extinction between it and the Milky Way, then _that_ would probably cause more astronomers to take the idea seriously.

Jerry
2007-Aug-07, 03:00 PM
77, not 79. And it was +7%±8% so the increase is not even statistically significant.
A shift in the baseline is always significant. In this case, it 'formally' rules out values of the Hubble constant less than 69, and 'formally' increases the range of likely Hubble values to 85.

This is also significant because there have been several attempts to determine the Hubble constant, such as lensing and SZ effects (not included in the HKP), that place the value in the mid to low sixties and are not dependent upon LMC scaling.

TomT
2007-Aug-07, 05:29 PM
Quote:
Originally Posted by Zahl
77, not 79. And it was +7%±8% so the increase is not even statistically significant.


A shift in the baseline is always significant. In this case, it 'formally' rules out values of the Hubble constant less than 69, and 'formally' increases the range of likely Hubble values to 85.

This is also significant because there have been several attempts to determine the Hubble constant, such as lensing and SZ effects (not included in the HKP), that place the value in the mid to low sixties and are not dependent upon LMC scaling.

For the sake of us in the "peanut gallery" trying to follow this, would you folks clarify your numbers please.

Zahl : Does +7%+/-8% mean -1% to +15% or something else? And what is the base number you apply this to, 70 or 72 or something else?

Jerry : How did you get 69 and 85?

Thanks for any clarification.

TomT

Jerry
2007-Aug-07, 08:03 PM
The HKP (Hubble Key Project) used the distance to the Cepheids in the Large Magellanic Cloud (LMC) to create a basic ruler. Change the distance to the cloud and the ruler length changes proportionately.

The distance modulus to the LMC used by the HKP was 18.5+/-0.1. The 'new' modulus is 18.34+/-0.06. It is a little difficult to extrapolate this value to a new Hubble value (changing the distance to the LMC changes both the slope and offset of the Hubble flow), but Zahl's 7% figure is probably more correct than my eyeballed 9% estimate.

10% is the total formal error estimated in the HKP from all sources. If the new distance to the LMC is more certain, the formal error should be less as well. The HKP states that the uncertainty in the distance to the LMC is 5%, and the total quadratic error from all sources results in the 10% figure. I don't know where Zahl's 8% figure comes from, but it is a reasonable number if the error estimated in the distance to the LMC has been reduced, which it has (from 0.1 to 0.06).


77 +/- 8%, which is ~71 to ~83; or, if the formal error is left at 10%, ~69 to ~85 km/s/Mpc.
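
Spelled out in a few lines (a sketch of the arithmetic only):

import math

h0 = 77.0  # ~72 raised by the ~7% modulus correction
for frac in (0.08, 0.10):  # 8% and 10% total error budgets
    print(round(h0 * (1 - frac)), "to", round(h0 * (1 + frac)))
# -> 71 to 83, and 69 to 85 km/s/Mpc

# Quadrature: a 5% LMC term plus ~8.7% from everything else gives ~10%.
print(round(math.sqrt(0.05 ** 2 + 0.087 ** 2), 2))  # ~0.1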

Nereid
2007-Aug-07, 08:44 PM
For the sake of us in the "peanut gallery" trying to follow this, would you folks clarify your numbers please.

Zahl : Does +7%+/-8% mean -1% to +15% or something else? And what is the base number you apply this to, 70 or 72 or something else?

Jerry : How did you get 69 and 85?

Thanks for any clarification.

TomT

Here is an extract from the Summary (section 11) of the Freedman et al. HKP paper:
The relative Cepheid distances are determined to ~±5%.

Calibrating 5 secondary methods with these revised Cepheid distances, we find H0 = 72 ± 3 (random) ± 7 (systematic) km s−1 Mpc−1, or H0 = 72 ± 8 km s−1 Mpc−1, if we simply combine the total errors in quadrature.

Although I'm not certain, it seems the "±" numbers are 1 sigma (see section 3.4). For more details, re Cepheid distances, "further discussion of errors can be found in Madore et al. 1999; Ferrarese et al. 2000b".

So, the "simply combine the total errors in quadrature" value of H0, from the Freedman et al. HKP paper is [64, 80] (1 sigma), [56, 88] (2 sigma), [48, 96] (3 sigma).

However, as I think has been made very clear in this thread, reducing the results of many projects which set out to determine H0 to only "H0 = 72 ± 8 km s−1 Mpc−1" can be misleading.

The Freedman et al. HKP paper, in Section 9, briefly discusses two independent methods of determining H0 - the SZ effect and time delays from gravitational lenses. These two methods are, potentially, hugely valuable, if only because they are free of all the systematics of all the other methods discussed (in Freedman et al.). Unfortunately, each has its own systematic errors to wrestle with. To date, the good news is that the results are consistent with "H0 = 72 ± 8 km s−1 Mpc−1". For example:
Published values of H0 based on the SZ method have ranged from ~40 - 80 km/sec/Mpc (e.g., Birkinshaw 1999). The most recent two–dimensional interferometry SZ data for well-observed clusters yield H0 = 60 ± 10 km/sec/Mpc. The systematic uncertainties are still large, but the near–term prospects for this method are improving rapidly [...]

Zahl
2007-Aug-08, 12:28 AM
if you do a search on Zahl's posting history you'll note that he is quite liberal with telling people they're ignorant or don't know what they're talking about

Indeed. When self-proclaimed "researchers" make obviously incorrect assertions it is a prudent thing to expose that nonsense. From my posting history you will find [url=http://www.bautforum.com/astronomy/57127-local-features-wmap-map-3.html]this[/url] thread where a preprint by Gerrit Verschuur was discussed. He argued that the small-scale structure in the "WMAP data" (later changed to "WMAP ILC data" in the refereed v2 preprint) and HI are related. I refuted some of the nonsense in that preprint while some misguided individuals such as yourself claimed that I had misunderstood Verschuur's argument. Later I summarized the numerous changes in v2 of the preprint, showing that the content I had criticized had been duly removed.


So zahl claims here that the HKP did not use one galaxy per cluster as I have stated. He is wrong.

What a pathetic straw man. I did not dispute the fact that they used 6 galaxies in 6 clusters for the SBF result; I disputed your crazy claim that this makes their SBF result "irrelevant".


Here is what the HKP did:

1. They derived the SBF zero point by using local calibrators with cepheid distances.

2. They utilized SBF measurements to the brightest cluster galaxy (BCG) in 6 clusters assuming that the BCG is at the mean cluster distance.

3. They used a larger sample of galaxies within the cluster to determine the mean cluster redshift.

4. They calculated H0 by dividing the mean cluster redshift by the SBF distance to the BCG.

As I said - 6 galaxies were used in 6 clusters. Now unlike zahl, I'll actually link to papers to support what I state instead of just lazily accusing people of being ignorant.

Note how dgruss23 does not quote the papers. Nonsense is still nonsense even if it is written in red. Let's see what the HKP paper (http://arxiv.org/abs/astro-ph/0012376) really says:

"As part of the Key Project, Ferrarese et al. (2000a) applied an HST Cepheid calibration to the 4 Lauer et al. (1998) SBF galaxies, and derived H0 = 69 ± 4r ± 6s km/s/Mpc. The results are unchanged if all 6 clusters are included. The largest sources of random uncertainty are the large–scale flow corrections to the velocities, combined with the very sparse sample of available galaxies. Most of the systematic uncertainty is dominated by the uncertainty in the Cepheid calibration of the method itself(Ferrarese et al. 2000a, Tonry et al. 2000). These three factors account for the 10% difference between the SBF-based values of H0 derived by the KP and that by Tonry et al. (2000). Flow–corrected velocities, distances, and H0 values for the 6 clusters with SBF measurements are given in Table 10. Applying our new calibration, we obtain H0 = 70 ± 5r ± 6s km/s/Mpc applying a metallicity correction of –0.2 mag/dex, as described in §3."

The small sample size objection is specifically addressed. While it is one of the larger sources of error, the total error is still less than in the Fundamental Plane result that dgruss23 promoted without any mention of its error budget. Note that there is no mention whatsoever of this alleged error arising from "assuming that the SBF galaxy is at the mean cluster distance" that supposedly renders the SBF result "irrelevant".

Zahl
2007-Aug-08, 12:42 AM
A shift in the baseline is always significant.

As I said, it is not statistically significant. This is because a shift of 0% is within 1 sigma.


In this case, it 'formally' rules out values of the Hubble constant less than 69, and 'formally' increases the range of likely Hubble values to 85.

I don't know what you mean by "likely", but if we take 77, 80+ has a probability of about 1/3. I would call that possible, but unlikely. Still, 77 is on the high side of recent measurements.

Zahl
2007-Aug-08, 12:50 AM
For the sake of us in the "peanut gallery" trying to follow this, would you folks clarify your numbers please.

Zahl : Does +7%+/-8% mean -1% to +15% or something else? And what is the base number you apply this to, 70 or 72 or something else?

+7%+/-8% (from this paper: http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.3144v1.pdf) means that the "true correction" to HKP's "best value" for H0 is -1% to +15% (68% confidence). Not all figures within this interval are equally likely, their "best value for correction" being +7%.

Zahl
2007-Aug-08, 12:51 AM
And 72*1.07=77.04

Nereid
2007-Aug-08, 01:38 AM
Here (http://adsabs.harvard.edu/abs/2001ApJ...553...47F) is the ADS webpage for Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, by Wendy Freedman and 14 others.

From this webpage, you can get the published 2001 ApJ paper (PDF), as well as the arXiv e-print (arXiv:astro-ph/0012376).

You will also find links to the 1115 papers that cite this HKP one.

dgruss23
2007-Aug-08, 12:43 PM
What a pathetic straw man. I did not dispute the fact that they used 6 galaxies in 6 clusters for the SBF result; I disputed your crazy claim that this makes their SBF result "irrelevant".

Your words zahl:


When finding h0 with the Surface Brightness Fluctuations method a distance is determined to a galaxy (not cluster) by measuring SBF in that galaxy and finding a Cepheid in that galaxy for calibration. Redshift is then found and h0 calculated. There is no need for the SBF galaxy to be representative of the mean cluster distance.

Perhaps what you wrote was not what you meant, but how else should one interpret the above other than to conclude that you thought there were more than 6 SBF galaxies in the H0-determining sample? How does one measure the distance to a single galaxy in a cluster (which you now claim you always understood they did - despite the above quote) and not have that single galaxy represent the cluster distance?

Oh - and by the way, what you wrote above is not even a correct description of what they did. What you wrote states that they find the distance to a galaxy, then find a cepheid distance to that galaxy for calibration (zero point calibration), then measure the redshift, and then calculate H0. For someone who is telling me I'm ignorant, pathetic, intentionally misleading BAUT readers ... this is a profoundly incomplete description of the process, to the point of fault, because it leaves out important steps. You left out the part about how they extend the local calibration, in which Cepheids are used, to more distant galaxies for which Cepheids are not used. As you wrote it, you make it sound like every SBF galaxy used to calculate H0 also has a Cepheid distance. Even the 4 step process I provided in my previous post leaves a lot out - but at least it doesn't give an incorrect perception of the basic procedure.



Note how dgruss23 does not quote the papers. Nonsense is still nonsense even if it is written in red. Let's see what the HKP paper (http://arxiv.org/abs/astro-ph/0012376) really says:

"As part of the Key Project, Ferrarese et al. (2000a) applied an HST Cepheid calibration to the 4 Lauer et al. (1998) SBF galaxies, and derived H0 = 69 ± 4r ± 6s km/s/Mpc. The results are unchanged if all 6 clusters are included. The largest sources of random uncertainty are the large–scale flow corrections to the velocities, combined with the very sparse sample of available galaxies. Most of the systematic uncertainty is dominated by the uncertainty in the Cepheid calibration of the method itself(Ferrarese et al. 2000a, Tonry et al. 2000). These three factors account for the 10% difference between the SBF-based values of H0 derived by the KP and that by Tonry et al. (2000). Flow–corrected velocities, distances, and H0 values for the 6 clusters with SBF measurements are given in Table 10. Applying our new calibration, we obtain H0 = 70 ± 5r ± 6s km/s/Mpc applying a metallicity correction of –0.2 mag/dex, as described in §3."

What????? I didn't quote the paper???? I would like everybody to note that in yesterday's response to Zahl I quoted from both papers I linked to - including the very paragraph that precedes the one Zahl has just quoted. Specifically this quote:


Originally Posted by Freedman et al
With HST, this method (SBF) is now being extended to larger distances (Lauer et al 1998); unfortunately, however, only six galaxies beyond the fornax cluster have published surface brightness fluctuation distances, with only four of them accurate enough to be of interest for cosmology.




The small sample size objection is specifically addressed.

Could you explain further how it is addressed? They state that most of the systematic uncertainty is due to the Cepheid calibration, but they don't explain how they arrive at that conclusion - do they? They do not specifically explain how they can be certain that the galaxies with SBF distances are actually at the mean cluster distance.

For example - as I've already noted - they find a distance of 102.3 Mpc for the Coma cluster using NGC 4881, but using 28 galaxies with I-band TFR distances they get 85.8 Mpc. So how do they know NGC 4881 is not on the backside - or that there is not a systematic error in the SBF distance?



While it is one of the larger sources of error, the total error is still less than in the Fundamental Plane result that dgruss23 promoted without any mention of its error budget.

Is this an intentional mischaracterization of what I've said earlier on this thread? I already responded to this in an earlier post:


My point from the start on this thread has been that H0 could be in the 80's. Everybody likes to think that the HKP used 5 methods to get H0 = 72 and that, because of the use of 5 methods, the 80's is not possible. Sure they used 5 methods, but only 4 of those methods led to H0=~70-72. The FP result did not support the rest. And I maintain that the HKP's SBF and Type II SN samples are too small. So those 5 methods have been trimmed down to 2 methods: the Tully-Fisher Relation and Type Ia SN.

As I stated above - and I realize I didn't clearly explain my meaning on this point when it was first mentioned - my reason for citing the HKP FP result is that only 4 of the 5 methods the HKP used gave H0=72. The calibration limitations with the FP are not relevant to that point. The fact is that one method did not give H0=72.

Now if I was arguing H0=82 based upon the FP result (which I have not and if you don't believe me carefully re-read the thread) - then this point you've made would be important.

You chose to call the above word games. The fact is I did not "promote" the FP (talk about creating straw man arguments). I explained above the reason it was mentioned. It is there for everyone to judge for themselves. But it really doesn't surprise me that one who so liberally tells people they are ignorant, half-senile old men, don't know what they're talking about, pathetic ... would continue to repeat faulty mischaracterizations of statements that have already been adequately explained.

Whether intentional or not, Zahl, this is a very old debate tactic you're using - making your opponent respond to false charges. Happens in politics all the time. Unless you are ignorant, as you stated I was, you should be capable of grasping my earlier explanation on the FP. Is there anybody besides Zahl following this discussion who would like to share their thoughts as to whether or not my earlier response to Zahl regarding the FP is a clear enough description of what I meant? Note that you would not be endorsing anything I've said by contributing in this regard - simply confirming that my explanation is or is not understandable.

Now zahl, keep in mind you have accused me of promoting the FP, which I have stated I did not do. Since we're talking in this instance about my intentions in bringing up the FP, your choices are to either accept my explanation of my intentions and stop repeating your mischaracterization, or reject my explanation. If you reject my explanation about my intentions, then you are accusing me of lying. So which is it?


Note that there is no mention whatsoever of this alleged error arising from "assuming that the SBF galaxy is at the mean cluster distance" that supposedly renders the SBF result "irrelevant".

Right - they don't mention it. That is my point --- they take it to be a good assumption that the galaxies in question are good representatives of the average distance of the cluster. You seem to think that because they don't mention it, it doesn't need to be explained? How do you determine a cluster distance from a single galaxy without assuming that galaxy is at the average distance, Zahl? And if you do take that single galaxy's distance as the mean cluster distance, how do you rule out that the galaxy is not on the backside or frontside of the cluster?

Nereid
2007-Aug-08, 02:05 PM
If you want to get into some published estimate (a distance, say), in a deep way, you have (IMHO) no choice but to learn how the authors of the relevant published papers came up with their estimate ... and the nature of the uncertainties and errors in all key parts of the chains that lead to those published estimates.

If, on the other hand, you want a nice 'sound bite' summary, but also want to avoid looking foolish by over-simplifying ... can you?

Let's take an example.

Suppose the widely used estimate ("the canonical distance") is 480 ± 80 parsecs. Suppose a new estimate, based on a single measurement, using a new technique, is published; suppose it is 389 +24 −21 parsecs. Suppose you wish to avoid looking foolish. Would the following be prudent: "a luminosity error of 1.5 magnitudes [...] has the potential of requiring a major revision in distance scaling"? (Assume, for now, that a distance difference of ~90 pc translates to a luminosity difference of 1.5 mag.)

Clearly, the two distance estimates, at the 1 sigma level, overlap (480 - 80 < 389 + 24); in the future, perhaps the distance estimate will be refined, to settle on 405 ± 2 pc, so both the widely used (older, canonical) estimate AND the newer, single measurement estimate are consistent with this (possible, future) value. If this turns out to be the case, then your "luminosity error" statement would look exceedingly foolish.
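
Checked mechanically, using only the hypothetical figures above:

# 1-sigma interval overlap for the two hypothetical estimates.
canon_lo, canon_hi = 480 - 80, 480 + 80  # [400, 560] pc
new_lo, new_hi = 389 - 21, 389 + 24      # [368, 413] pc
print(canon_lo <= new_hi and new_lo <= canon_hi)  # True: they overlap

# A future 405 +/- 2 pc value would sit inside both intervals:
print(canon_lo <= 405 <= canon_hi, new_lo <= 405 <= new_hi)  # True True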

And so on ... we could explore a number of possibilities (including unknown, or mis-estimated, systematics in the newer, single measurement; more nuanced uncertainties - including systematics - in the canonical estimate, nuances clearly stated in the original paper but dropped in subsequent references; and the errors quoted being something other than 1 sigma/68% confidence), but at the sound bite level, the fact that the 1 sigma distance estimates overlap should be sufficient to rule out anything that implies an inconsistency.

For those interested in digging deeper into this, you might look at how researchers come up with error/uncertainty estimates, both random and systematic. In particular, read up a bit on the differences between 'frequentist' and 'Bayesian' approaches (the former almost universally used by astronomers of yore; the latter now regarded as better, though not yet in universal use).

Relevance of the above to this thread? The Freedman et al. HKP paper, in Section 7, discusses how the authors combined five sets of estimates of H0, obtained using five different methods, into a single estimate (Section 8 looks at common systematics); it's a quite interesting read, and (among other things) illustrates well the pitfalls of:
"H0 = 72 ± 8 km s−1 Mpc−1".

Jerry
2007-Aug-08, 03:11 PM
And so on ... we could explore a number of possibilities (including unknown, or mis-estimated, systematics in the newer, single measurement; more nuanced uncertainties - including systematics - in the canonical estimate, nuances clearly stated in the original paper but dropped in subsequent references; and the errors quoted being something other than 1 sigma/68% confidence), but at the sound bite level, the fact that the 1 sigma distance estimates overlap should be sufficient to rule out anything that implies an inconsistency.

HKP makes it clear the distance to the LMC is a key parameter in the method they used to refine the value of the Hubble constant. A revised distance to the cloud automatically shifts the value of H0. I don't see this as an inconsistency, and would not even if it changed the Hubble value by more than one sigma, because it was known at the time of publishing that the distance to the LMC was still in limbo.

Reducing the distance to the LMC does cause a separation between the HKP H0 value and methods which do not depend upon the LMC distance, which predict an H0 value in the mid 60's. These include the gravitational lensing and Type II supernova methods. (There was a paper within the last year that identified probable selection effects in Type II supernova distance studies; I will try to find it.)


Relevance of the above to this thread? The Freedman et al. HKP paper, in Section 7, discusses how the authors combined five sets of estimates of H0, obtained using five different methods, into a single estimate (Section 8 looks at common systematics); it's a quite interesting read, and (among other things) illustrates well the pitfalls of:
"H0 = 72 ± 8 km s−1 Mpc−1".
I don't like this consensus approach, because it exudes unreasonable confidence in assumptions common to some or all of the methods. This could drag H0 away from the 'true' value, whatever that might be.

A gross error in H0 remained undetected for half a century because a pair of systematic errors within our own galaxy were nearly offsetting. There is the potential for an almost identical pair of errors to exist today: dust extinction and Malmquist bias of supernovae on a cosmic scale. The irony here is that the H0 value may be very close to correct, but the wider implications of selection bias and incorrect extinction estimates would still be overlooked.

Zahl
2007-Aug-08, 04:05 PM
Originally Posted by Zahl
What a pathetic straw man. I did not dispute the fact that they used 6 galaxies in 6 clusters for the SBF result, I disputed your crazy claim that this makes their SBF result "irrelevant".

Your words zahl:


Originally Posted by Zahl
When finding H0 with the Surface Brightness Fluctuations method, a distance is determined to a galaxy (not a cluster) by measuring SBF in that galaxy and finding a Cepheid in that galaxy for calibration. The redshift is then found and H0 calculated. There is no need for the SBF galaxy to be representative of the mean cluster distance.

Perhaps what you wrote was not what you meant, but how else should one interpret the above other than to conclude that you thought there were more than 6 SBF galaxies used in the H0-determining sample? How does one measure the distance to a single galaxy in a cluster (which you now claim you always understood that they did - despite the above quote) and not have that single galaxy represent the cluster distance? Oh - and by the way, what you wrote above is not even a correct description of what they did. What you wrote states that they find the distance to a galaxy, then find a Cepheid distance to that galaxy for calibration (zero point calibration), then measure the redshift, and then calculate H0. For someone who is telling me I'm ignorant, pathetic, and intentionally misleading BAUT readers... this is a profoundly incomplete description of the process, to the point of being faulty, because it leaves out important steps. You left out the part about how they extend the local calibration, in which Cepheids are used, to more distant galaxies for which Cepheids are not available. As you wrote it, you make it sound like every SBF galaxy used to calculate H0 also has a Cepheid distance.

The above was refuting your mistaken claim that a single galaxy is used to represent the cluster distance that you have repeated several times. You have provided no quotes to that effect, because it is not what they have done. Such nonsense is a figment of your imagination and you have provided nothing but your deluded words to support that crazy assertion. Instead, the HKP paper clearly states that the SBF method uses distances to galaxies and their recession velocities to derive the H0 result, not clusters. Table 10, Surface Brightness Fluctuation Hubble Constant, documents this fact plainly.

http://www.journals.uchicago.edu/ApJ/journal/issues/ApJ/v553n1/52417/52417.html

Six galaxies with their measured recession velocities and distances with the calculated H0 values are given. There is no need for the SBF galaxy to be representative of the mean cluster distance, just like I wrote, because it is the distance to the galaxy they are after. The error budget is discussed elsewhere as I already noted and it does not support dgruss23's dreamed up errors. And that's all there is to it. I don't wish to dive any deeper into dgruss23's word games.

TomT
2007-Aug-08, 05:39 PM
The above was refuting your mistaken claim that a single galaxy is used to represent the cluster distance that you have repeated several times. You have provided no quotes to that effect, because it is not what they have done. Such nonsense is a figment of your imagination and you have provided nothing but your deluded words to support that crazy assertion. Instead, the HKP paper clearly states that the SBF method uses distances to galaxies and their recession velocities to derive the H0 result, not clusters. Table 10, Surface Brightness Fluctuation Hubble Constant, documents this fact plainly.

http://www.journals.uchicago.edu/ApJ/journal/issues/ApJ/v553n1/52417/52417.html

Six galaxies with their measured recession velocities and distances with the calculated H0 values are given. There is no need for the SBF galaxy to be representative of the mean cluster distance, just like I wrote, because it is the distance to the galaxy they are after. The error budget is discussed elsewhere as I already noted and it does not support dgruss23's dreamed up errors. And that's all there is to it. I don't wish to dive any deeper into dgruss23's word games.

A side comment: there seems to be an obvious difference of opinion or interpretation between dgruss23 and Zahl. Zahl, your use of such terms as "nonsense, figment of your imagination, deluded words, crazy assertions, word games, etc." isn't fitting for a forum where questions are asked either by seasoned researchers looking to clear up a point or by more inexperienced observers who want to understand. I am surprised the moderators don't rein this in on a Q/A forum. I can understand somewhat how emotions come more into play on the ATM forum, but not here.
Anyway, such rhetoric isn't appreciated by those trying to follow what should be a scholarly discussion, free of this stuff.
TomT

dgruss23
2007-Aug-08, 05:56 PM
The above was refuting your mistaken claim that a single galaxy is used to represent the cluster distance that you have repeated several times. You have provided no quotes to that effect, because it is not what they have done.

Zahl, they did use a single galaxy per cluster. I have provided the quotes and reference to Table 4 of Ferrarese et al 2000:


Here (http://adsabs.harvard.edu/abs/2000ApJ...529..745F)is the paper in which the HKP initially presented their SBF analysis. The calibration of the SBF method is discussed in section 6 and its application to H0 is discussed in section 8. For those that don't want to read the technical discussion in the paper you need look no farther than Table 4 in the paper. The table is titled "Sample of F814W-SBF Galaxies for Deriving H0". The first data column is titled "Cluster". The second column is titled "Galaxy ID". Note there are only 6 galaxies listed in Table 4 -- one galaxy for each cluster.

Please look at Table 4 of Ferrarese et al. I believe that any rational individual can see from that description and a look at Table 4 that the HKP did in fact use one galaxy per cluster. However, I also provided in an earlier post the following quote from the Freedman et al HKP final report:



Originally Posted by Freedman et al
With HST, this method (SBF) is now being extended to larger distances (Lauer et al 1998); unfortunately, however, only six galaxies beyond the Fornax cluster have published surface brightness fluctuation distances, with only four of them accurate enough to be of interest for cosmology.

So this quote establishes that the HKP in fact only used 6 galaxies for the SBF analysis - and Table 4 from Ferrarese et al 2000 shows which cluster each of those galaxies is in. Zahl, this point is irrefutable. It is what they did. Anybody following this thread can read it for themselves. And it seems that I have already in prior posts provided quotes you're saying I didn't provide.

But now you seem to be taking a different approach. Now you're saying that they used 6 galaxies only and the redshift from the galaxy was used, not the cluster -- so you're now taking the position that the redshifts used were not cluster redshifts, but individual galaxy redshifts:


Such nonsense is a figment of your imagination and you have provided nothing but your deluded words to support that crazy assertion. Instead, the HKP paper clearly states that the SBF method uses distances to galaxies and their recession velocities to derive the H0 result, not clusters. Table 10, Surface Brightness Fluctuation Hubble Constant, documents this fact plainly.

http://www.journals.uchicago.edu/ApJ/journal/issues/ApJ/v553n1/52417/52417.html

Six galaxies with their measured recession velocities and distances with the calculated H0 values are given. There is no need for the SBF galaxy to be representative of the mean cluster distance, just like I wrote, because it is the distance to the galaxy they are after. The error budget is discussed elsewhere as I already noted and it does not support dgruss23's dreamed up errors. And that's all there is to it. I don't wish to dive any deeper into dgruss23's word games.

There are two parts to this: (1) whether or not what you're proposing is what they actually did (one galaxy distance, redshift for said galaxy rather than the whole cluster, H0), and (2) whether or not what you're proposing they did would be a good idea if they did in fact do it that way. I'll dissect each in turn.

1. Did the Hubble Key Project use a mean cluster redshift, or the redshift of the galaxy for which the SBF distance was determined, when they found H0 from the SBF method?
Well, let's go by their own words. From Ferrarese et al (2000):


The last difficulty to overcome in our quest for H0 is the determination of the clusters' "cosmic" velocities. (snip) ... later in paragraph ... For comparison we also list in column (6) of the same table (Table 4 mentioned earlier) the heliocentric systemic velocity of the cluster (from the CfA redshift Survey; J.Chen et al, in preparation), ...

Note Ferrarese et al did not say the last difficulty was to find the "galaxies'" cosmic velocities. They said "clusters". Why would they say "clusters" if they meant "galaxies", Zahl?

And from Freedman et al 2001 (the very paragraph you quoted earlier this morning, Zahl!):


Flow corrected velocities, distances, and H0 values for the six clusters with SBF measurements are given in Table 10.

Again they use the term "clusters". But we've already established above that the SBF analysis involved 6 galaxies, and we've already established that there is one SBF galaxy in each cluster. And we know that the distance to the SBF galaxy is what they are using. So either the HKP used the term cluster incorrectly, or they really did calculate cosmic velocities for the cluster sample as a whole and applied them to the individual distances of the SBF galaxies as representations of the mean cluster distance.

However, there is another way to check this out. Table 4 of Ferrarese et al (2000) lists the heliocentric velocity of the Coma cluster as 6965 km s-1. Is that the mean cluster redshift or the redshift of the SBF galaxy NGC 4881?

Well here (http://nedwww.ipac.caltech.edu/cgi-bin/nph-datasearch?objname=ngc+4881&search_type=Redshifts&zv_breaker=30000.0&of=table) is the list of redshifts for NGC 4881 from NED. Note that all these redshifts except for one value are ~6750 km s-1 - over 200 km s-1 less than the redshift given in Table 4. This certainly supports the interpretation that the redshift in Table 4 was not the NGC 4881 redshift, but rather the Coma cluster redshift derived as the mean of multiple galaxies within the cluster.


(2) Would it be wise for the HKP to determine the value of H0 from individual galaxies in clusters using the individual redshift of the galaxy as Zahl is proposing they did?

The answer is absolutely not - it would be absurd, and while I'm critical of the SBF and Type II SN sample sizes they used, I give them a lot more credit than to think they would be foolish enough to compound the use of a single galaxy's distance for a cluster with the use of that single galaxy's redshift.

Here is where Zahl's idea about what they did goes wrong. Galaxies in clusters have peculiar motions. If you take any of the other cluster samples the HKP used, such as the I-band TFR or the Fundamental Plane cluster samples, they had anywhere from ~7 to 80 galaxies in those clusters, and they did not use the individual galaxy redshifts - they used mean redshifts of cluster members corrected for various gravitationally induced flows (discussed in their papers). Any individual galaxy might have a redshift as much as 1000 km s-1 larger or smaller than the cluster mean, so you cannot use an individual galaxy's redshift to represent a cluster.

The lowest redshift members of the Coma cluster have redshifts of ~6000 km s-1 while the largest redshifts of cluster members are ~8000 km s-1. Such a range yields a huge spread of H0 values, as the sketch just below illustrates. So you take the mean of multiple members in the cluster and hope that the cluster is close to being at rest relative to the Hubble flow. If it is, then the mean of the cluster members will be the cluster's cosmic velocity.
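(A toy illustration in Python of how big that effect is; the round 100 Mpc Coma distance below is an assumption for the sake of the example, not the HKP value:)

d_coma = 100.0                       # assumed cluster distance in Mpc
for v in (6000.0, 6965.0, 8000.0):   # low member, cluster mean, high member (km/s)
    print(v / d_coma)                # implied H0: 60.0, ~69.7, 80.0 km/s/Mpc

Using a single member's redshift could thus swing H0 by ~10 km s-1 Mpc-1 all by itself.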

I simply do not believe that the HKP made the mistake of using an individual galaxy redshift, and the quotes I've provided above support my position, not Zahl's.

What the HKP did do - and what I've urged caution about - is that, due to data availability, they used only the brightest cluster galaxy (BCG) for each of the 6 clusters in the SBF analysis. What they are counting on when they do that is that the BCG is close to the cluster center - which is a very plausible possibility - although you have to watch out for significant substructure such as what we see in Virgo.

dgruss23
2007-Aug-08, 06:29 PM
Now I want to comment on the behavior exhibited in Zahl's comments:



Such nonsense is a figment of your imagination and you have provided nothing but your deluded words to support that crazy assertion. Instead, the HKP paper clearly states that the SBF method uses distances to galaxies and their recession velocities to derive the H0 result, not clusters. Table 10, Surface Brightness Fluctuation Hubble Constant, documents this fact plainly.

http://www.journals.uchicago.edu/ApJ...417/52417.html (http://www.journals.uchicago.edu/ApJ/journal/issues/ApJ/v553n1/52417/52417.html)

Six galaxies with their measured recession velocities and distances with the calculated H0 values are given. There is no need for the SBF galaxy to be representative of the mean cluster distance, just like I wrote, because it is the distance to the galaxy they are after. The error budget is discussed elsewhere as I already noted and it does not support dgruss23's dreamed up errors. And that's all there is to it. I don't wish to dive any deeper into dgruss23's word games.

It is possible to have polite debates. My tone tends to be a mirror of the tone directed toward me when someone responds to my posts. In the case of Zahl, I cannot fully mirror his tone because most of what I've highlighted in red is unnecessary and rude.

What is troublesome about this is that Zahl has been repeatedly shown to be wrong and each time I demonstrate this with quotes from papers and explanation he responds with more vitriol than his previous response. Let's take a look at all this when put together:

Accusation of personal bias:


Well... Either you were not aware of all this or your personal bias is obvious.

Accusation of word games and deliberately misleading BAUT readers:


I'm not interested to play word games with you, dgruss23. If it was not due to ignorance, you deliberately misled BAUT readers by dismissing the Hubble Key Project SBF result as "irrelevant" even though it has smaller systematic AND random errors than the FP result.

Statement that I'm ignorant:


Yes. In fact, what you write above is ignorant nonsense.

Flat out incorrect statement that I have not supported my points with quotes from the papers:


Note how dgruss23 does not quote the papers. Nonsense is still nonsense even if it is written in red.

And finally we come to the most recent post made by zahl which I've quoted above and in which he includes six rude and/or unnecessary descriptors highlighted in red.

Individually these comments might not be a big deal, but there is a pattern of poor behavior on Zahl's part. What makes it worse is that his justification for the various insults and characterizations rests upon specific assertions that I am mistaken - assertions that have been easily shown to be wrong - to the point that he should be embarrassed to have used such insults.

Frankly, I don't understand what all the rudeness is about, Joshua. But it is no different than when you went by "JS Princeton" or "Astronomy" or "Q" on the Universe Today forum or "Science Apologist" in other venues.

Seriously, it is time for you to grow up and learn how to have debate with some dignity and respect. All the rude comments are unnecessary.

And Zahl, if you're not the latest incarnation of JS Princeton (as I believe you are), my apologies, but there is only one person I've ever encountered on these discussion forums who is as consistently and needlessly rude and insulting in debate as what you are exhibiting in this thread - JS Princeton.

ToSeek
2007-Aug-08, 07:33 PM
Zahl, as TomT and dgruss23 have both noted, your language and tone are in violation of Rule 2 (http://www.bautforum.com/about-baut/32864-rules-posting-board.html) ("Civility and Decorum") of this forum. You are welcome to disagree, and to disagree strongly, but please confine yourself to addressing the specifics and don't make sweeping, dismissive statements that only serve to offend.

TomT and dgruss23, I understand your efforts to preserve decorum on this thread, but trying to handle Rule 2 violations on your own is in itself a violation of Rule 2. In future, please notify a moderator and leave the driving to us.

As for the claim that Zahl is JS Princeton, unless the latter has moved to Finland (where all of Zahl's IP addresses hail from), that seems unlikely. In any case, the user "JS Princeton" was unbanned as part of the general amnesty created by the BA-UT merger and could use that same account if he so desired.

dgruss23
2007-Aug-08, 07:54 PM
TomT and dgruss23, I understand your efforts to preserve decorum on this thread, but trying to handle Rule 2 violations on your own is in itself a violation of Rule 2. In future, please notify a moderator and leave the driving to us.

I'm sorry about that, ToSeek. I considered that I might be out of bounds with that last post, but I also know sometimes people defend themselves in regard to personal attacks, so I felt the cumulative effect was worth pointing out. I did e-mail a moderator a few days ago as a warning that this discussion might go in this direction (under the assumption that Zahl was JS Princeton - which you corrected below), but never heard back, so perhaps that mod is on vacation.


As for the claim that Zahl is JS Princeton, unless the latter has moved to Finland (where all of Zahl's IP addresses hail from), that seems unlikely. In any case, the user "JS Princeton" was unbanned as part of the general amnesty created by the BA-UT merger and could use that same account if he so desired.

I'll certainly accept that correction and apologize to Zahl for suggesting he/she is someone he/she is not. JS Princeton is still in the USA. The posting styles are extremely similar in a number of ways - as is the frustration they create. Now I guess I've observed that style not once, but twice.

Thanks ToSeek.

RussT
2007-Aug-08, 09:15 PM
Regardless of whether Zahl is really Zahl or another entity, Zahl is well aware of this rule and has been advised on it by a Mod previously.

Here is that post and this same 'attitude' is evident throughout his postings there as well.

http://www.bautforum.com/astronomy/57127-local-features-wmap-map.html
The thread

http://www.bautforum.com/983980-post25.html
The warning from a Mod

ToSeek
2007-Aug-08, 10:22 PM
Regardless of whether Zahl is really Zahl or another entity, Zahl is well aware of this rule and has been advised on it by a Mod previously.

Here is that post and this same 'attitude' is evident throughout his postings there as well.

http://www.bautforum.com/astronomy/57127-local-features-wmap-map.html
The thread

http://www.bautforum.com/983980-post25.html
The warning from a Mod

Please don't tell the moderators how to do their jobs. We have processes in place to keep track of just this sort of thing.

RussT
2007-Aug-08, 10:28 PM
Please don't tell the moderators how to do their jobs. We have processes in place to keep track of just this sort of thing.

My humble apologies, ToSeek.

I was definitely not trying to suggest or 'tell' Mods anything.

This was strictly meant as support of dgruss' position and Everyone's concern for decorum on BAUT.

See the other thread for exactly the same concern from everyone.

Again, sorry if this was not appropriate.

rtomes
2007-Aug-09, 09:09 PM
The HKP (Hubble Key Project) ...
77 +/- 8%, which is ~71 to ~83; or if the formal error is left at 10%, ~69 to ~85 km/s/Mpc.

This comment is coming from way out in left field, so take it as you wish.

I have spent a lot of time studying cycles in many things. I have found that very often cycles are accompanied by the presence of waves (well, that bit isn't way out :whistle:) and that these two aspects are often observed in different ways and not linked up. An example is Kotov's observations in the Solar System of a 160 minute oscillation in the Sun and of the outer planets lying at regular ~10 AU spacings, which therefore puts them on the nodes of a 160 minute period wave.

There is a geological cycle that is reported as nearly 600 million years, and a series of others at repeated halvings of that; its period has been determined accurately by Prof S Afanasiev of Moscow University as 586 million years.

Some time ago a paper was published showing very regular walls of galaxies; see http://ray.tomes.biz/gallwallc.gif which has the graphic and the references. The stated periodicity is 128 Mpc, which should also have one of those H adjustments. When H had been stated as ~71 km/s/Mpc a few years back, I did the calculation and found that this gave a wavelength of about 588 million light years. I know you astronomers don't use light years these days, but if you did you would increase your chances of noticing these things. :lol:

If the regular walls of galaxies are taken as being due to some wave phenomenon, as has been suggested by others, then it is not surprising that it would also show up in geological phenomena, because the forces must be very large. Assuming that the geological period is accurately determined by Afanasiev (he actually gives 586.24 million years), then with the redshift periodicity of the walls this can be used to calculate the Hubble constant accurately. The answer I get is 71.2 km/s/Mpc. This is limited by the accuracy of the stated 128 Mpc figure, and I think it is within 1%. (Note that this figure is really a redshift periodicity, not a distance.)
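(If anyone wants to check the arithmetic, here is a short Python version. It assumes the 128 Mpc periodicity is quoted for H = 100 km/s/Mpc, i.e. that the figure is really 128 h^-1 Mpc - that is my reading, not a statement from the papers:)

MLY_PER_MPC = 3.2616      # million light years per megaparsec
wall_period = 128.0       # redshift periodicity of the galaxy walls, in h^-1 Mpc (assumed)
geo_period_myr = 586.24   # Afanasiev's geological period, in millions of years

# Require the light-travel time across one wavelength to equal the geological
# period: (128 * 100/H0) Mpc * 3.2616 Mly/Mpc = 586.24 Mly, then solve for H0.
H0 = 100.0 * wall_period * MLY_PER_MPC / geo_period_myr
print(H0)                 # ~71.2 km/s/Mpc, the value quoted above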

The method could potentially be made more accurate and has the wonderful advantage of not being dependent on the whole distance scale ladder at all.

Nereid
2007-Aug-10, 12:48 AM
This comment is coming from way out in left field, so take it as you wish.

I have spent a lot of time studying cycles in many things. I have found that very often cycles are accompanied by the presence of waves (well, that bit isn't way out :whistle:) and that these two aspects are often observed in different ways and not linked up. An example is Kotov's observations in the Solar System of a 160 minute oscillation in the Sun and of the outer planets lying at regular ~10 AU spacings, which therefore puts them on the nodes of a 160 minute period wave.

There is a geological cycle that is reported as nearly 600 million years, and a series of others at repeated halvings of that; its period has been determined accurately by Prof S Afanasiev of Moscow University as 586 million years.

Some time ago a paper was published showing very regular walls of galaxies; see http://ray.tomes.biz/gallwallc.gif which has the graphic and the references. The stated periodicity is 128 Mpc, which should also have one of those H adjustments. When H had been stated as ~71 km/s/Mpc a few years back, I did the calculation and found that this gave a wavelength of about 588 million light years. I know you astronomers don't use light years these days, but if you did you would increase your chances of noticing these things. :lol:

If the regular walls of galaxies are taken as being due to some wave phenomenon, as has been suggested by others, then it is not surprising that it would also show up in geological phenomena, because the forces must be very large. Assuming that the geological period is accurately determined by Afanasiev (he actually gives 586.24 million years), then with the redshift periodicity of the walls this can be used to calculate the Hubble constant accurately. The answer I get is 71.2 km/s/Mpc. This is limited by the accuracy of the stated 128 Mpc figure, and I think it is within 1%. (Note that this figure is really a redshift periodicity, not a distance.)

The method could potentially be made more accurate and has the wonderful advantage of not being dependent on the whole distance scale ladder at all.

From other recent posts of yours, rtomes, it seems that you may not have caught up with the present Rules For Posting To This Board (http://www.bautforum.com/about-baut/32864-rules-posting-board.html), especially the one covering ATM ideas (my bold):
If you have some idea which goes against commonly-held astronomical theory, or think UFOs are among us, then you are welcome to argue it here. Before you do, though READ THIS THREAD FIRST (http://www.bautforum.com/against-mainstream/16242-advice-atm-theory-supporters.html). This is very important. Then, if you still want to post your idea, you will do so politely, you will not call people names, and you will defend your arguments. Direct questions must be answered in a timely manner.

People will attack your arguments with glee and fervor here; that's what science and scientists do. If you cannot handle that sort of attack, then maybe you need to rethink your theory, too. Remember: you came here. It's our job to attack new theories. Those that are strong will survive, and may become part of mainstream science.

Additionally, keep promotion of your theories and ideas to only those Against the Mainstream or Conspiracy Theory threads which discuss them. Hijacking other discussions to draw attention to your ideas will not be allowed.

If it appears that you are using circular reasoning, depending on long-debunked arguments, or breaking any of these other rules, you will receive one warning, and if that warning goes unheeded, you will be banned.

As with the other sections of the forum, we ask you to keep your topics about space and astronomy. We will close down any thread which doesn't have anything to do with space and astronomy immediately.

Your post, which I am quoting, seems to be a promotion of the ATM idea that is the subject of one of the threads, in the ATM section, that you began recently.

Please keep promotion of your ATM ideas to only those Against the Mainstream threads which discuss them; please do not use Q&A threads to draw attention to your ideas.

folkhemmet
2007-Aug-10, 11:40 AM
Jerry said: “This is also significant because there have been several attempts to determine the Hubble constant, such as lensing and SZ effects (not included in the HKP), that place the value in the mid to low sixties and are not dependent upon LMC scaling.”

Actually, Jerry, just for reference, the most recent determination of the Hubble constant derived from SZ measurements is in the mid-70s rather than the mid to low sixties. Here is a link to the paper I am referring to: arXiv:astro-ph/0512349

Let’s assume that Jerry is correct and all of modern physics is up for grabs. Advocates of this view - at least until they, of course, enlighten the ignorant masses (ATM aficionados often being as dogmatic as, or more dogmatic than, the people they criticize) - would be hard pressed to explain why we cannot do certain things in certain ways no matter how much we might want to do them. If all of modern physics is up for grabs, as Jerry clearly admitted to believing, then where are our light sabers, FTL travel and communication, routine time travel into the distant past, perpetual motion machines, etc.? It’s just a brute fact that breakthroughs in our understanding of nature have led to technological advances. The fact that certain breakthroughs have not occurred, and most likely never will occur, is a strong sign that nature is telling us something about itself - namely, that the Universe is almost surely constructed in a definite way that allows intelligent beings to accomplish only a finite number of fundamental technical feats. It is “scientific uncertainty” that allows for the very remote possibility of light sabers, FTL technology, and the wrongness of the current cosmological model - even if most people would bet against each of those possibilities. Lastly, one cannot simultaneously (without massively contradicting himself or herself), as Jerry clearly does, admit that all of modern physics is up for grabs and then shop around and cherry-pick certain parts of modern physics to buttress his or her own views.

Nereid
2007-Aug-11, 02:27 AM
In this thread (http://www.bautforum.com/questions-answers/61643-astronomy-cosmology-science-tinkering-fiddling-cheating-4.html) the question was touched upon as to whether or not the Hubble constant (H0)could be as high as 84 km s-1 Mpc-1 rather than the currently preferred value of ~72 km s-1 Mpc-1 as determined by the Hubble Key Project (http://adsabs.harvard.edu/abs/2001ApJ...553...47F).

The purpose of this thread is to look at some of the reasons why it is still possible that the value of H0 could in fact be as large the mid 80's.

The HKP final report has been cited ~1100 times since being published in May 2001 so it is an extremely influential paper and important reason why most researchers have accepted H0=~72. This acceptance has been bolstered by the WMAP (http://arxiv.org/abs/astro-ph/0603449) results.

However, the extragalactic distance scale has numerous pieces (or rungs on the ladder) and there are a number of ways that the HKP final result could be incorrect. First it should be noted that the difference between H0=72 and H0=84 only requires a systematic 0.33 mag shift in the distance scale. For most distance indicators we're talking about a 1-2 sigma shift.

The HKP determined that H0=72 from 5 methods: The I-band Tully-Fisher relation (I-TFR -->spirals), surface brightness fluctuation method (SBF -->ellipticals - mostly), Fundamental plane (FP-->ellipticals), Type Ia Sn, Type II SN. The value of H0 was determined for each of these methods independently and then combined for a final value of H0. One of the reasons for the acceptance of their final result is that 5 methods were used.

One of the rungs underlying these distance methods is the Cepheid variable distance scale - which must be used to fix the zero point of the relations used for the 5 secondary distance indicators listed above.

The Cepheid distance scale is then one place where a systematic shift in the zero points of all 5 distance indicators could take place. Sandage has long argued for a lower value of H0 and recently recalibrated the Cepheid distance scale and concluded H0=62 (http://arxiv.org/abs/astro-ph/0603647). However, more recently van Leeuwen et al (http://arxiv.org/abs/0705.1592) showed problems with the Sandage et al Cepheid PL relation slope and also showed that the HKP Cepheid scale should be revised so that distances are closer and the value of H0 would then shift to 76.

Looking at the HKP final analysis reveals some other avenues for caution in accepting H0=72 as the final word:


One of the methods they used (the FP) actually gave a Hubble constant of 82.
Only 4 galaxies were used for the Type II SN H0 estimate and only 3 calibrators with Cepheid distances were available for calibration of the zero point.
Only 6 galaxies in 6 clusters were used for the SBF analysis - and the number of cepheid calibrators was the same size - 6.
While there were 36 Type Ia SN in the analysis, there were only 6 galaxies for calibrating the zero point.
The I-TFR distances tend to overestimate distances relative to other methods - including methods presented in their own paper for some clusters. For example, the FP distance to Abell 3574 (Table 9) is 51.6 Mpc while the I-TFR distance in Table 7 is 62.2 Mpc. The Centaurus 30 cluster I-TFR distance is 43.2 Mpc (Table 7) whereas a Cepheid distance to NGC 4603 in the same cluster is 33.3 Mpc and the SBF method from the large study of Tonry et al (2001) gives a distance of ~33 Mpc (same as the Cepheid distance). For Antlia the HKP I-TFR distance is 45.1 Mpc whereas the Tonry et al SBF distance is ~33 Mpc. Tully&Pierce (2000) (http://adsabs.harvard.edu/abs/2000ApJ...533..744T) found H0=77 from the I-band TFR, but they note that it might be more appropriate to use the maser distance to NGC 4258 to fix the zero point of the Cepheid distance scale rather than the traditionally used Large Magellanic Cloud distance. If the maser distance is used, then they would find H0=86 rather than 77. Using the maser distance would ripple through the distance indicators used by the HKP as well raising H0 above 80.

Perhaps it's worth taking a look at the OP again, and comparing it with Section 7 of the Freedman et al. final HKP paper.

In particular, Table 12 (I'm not going to try to reproduce it here) and Figure 3.

My impression is that the biggest single aspect missing from the otherwise good overview in the OP is an analysis of what Freedman et al. call uncertainties and errors.

Starting with "Error (random)", I think it is pertinent to ask how much the OP's summary in the form of "only {x} used ..." is blind to the frequentist, Bayesian, and Monte Carlo analyses which are reported in the Freedman et al. paper. Specifically, in the absence of any alternative analyses of the random error, is it reasonable to ignore such comments (in the OP)?

Moving on to the outlier in the HKP paper (the FP): a snippet from Figure 3 may serve as an appropriate sound bite "The systematic uncertainties for each method are indicated by the horizontal bars near the peak of each Gaussian" - the horizontal [FP] bar overlaps the horizontal bars of each of the four other methods.

Which brings up the general point of systematics ...

Here's what the final HKP paper says, in summary, on the systematics of the five methods they examined:
There are a number of systematic uncertainties that affect the determination of H0 for all the relative distance indicators discussed in the previous sections. These errors differ from the statistical and systematic errors associated with each of the individual secondary methods, and they cannot be reduced by simply combining the results from different methods. Significant sources of overall systematic error include the uncertainty in the zero point of the Cepheid PL relation, the effect of reddening and metallicity on the observed PL relations, the effects of incompleteness bias and crowding on the Cepheid distances, and velocity perturbations about the Hubble flow on scales comparable to, or larger than, the volumes being sampled.

A quick scan of the ~1100 refereed papers which cite the final HKP one suggests that considerable effort has been put into addressing "the zero point of the Cepheid PL relation" systematic; indeed, the updates cited in the OP are all, I think, about this systematic.

Perhaps the most interesting question is then something like "in terms of the estimated uncertainty, in the zero point of the Cepheid PL relation, discussed in the final HKP paper, how big are the deltas from more recent estimates of this zero point?"

Specifically, do the one sigma (systematic uncertainty) bars (reported in the final HKP paper and in more recent papers) overlap?

dgruss23
2007-Aug-11, 08:49 PM
Perhaps it's worth taking a look at the OP again, and comparing it with Section 7 of the Freedman et al. final HKP paper.

You raise some good points for discussion Nereid. I'm actually going to address them in reverse order:



Here's what the final HKP paper says, in summary, on the systematics of the five methods they examined: (snip) A quick scan of the ~1100 refereed papers which cite the final HKP one suggests that considerable effort has been put into addressing "the zero point of the Cepheid PL relation" systematic; indeed, the updates cited in the OP are all, I think, about this systematic.

Perhaps the most interesting question is then something like "in terms of the estimated uncertainty, in the zero point of the Cepheid PL relation, discussed in the final HKP paper, how big are the deltas from more recent estimates of this zero point?"

Specifically, do the one sigma (systematic uncertainty) bars (reported in the final HKP paper and in more recent papers) overlap?

No - but how big the difference is depends upon how you compare them.

The HKP adopted a LMC distance modulus of 18.50 +/- 0.10. van Leeuwen et al find a LMC distance modulus of 18.39 +/- 0.05. An et al found a LMC distance modulus of 18.34 +/- 0.06.

So you can see that the 1 sigma errors of the recent van Leeuwen and An studies overlap with each other. The difference between the HKP value and van Leeuwen et al is 1.1 sigma, and between HKP and An et al 1.6 sigma, in units of the HKP's quoted +/- 0.10 uncertainty.

Taking the errors of the new studies instead, the adopted HKP LMC distance modulus is 2.2 sigma larger than the van Leeuwen distance modulus and 2.7 sigma larger than the An distance modulus.
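(A quick Python check of those sigma separations, using just the numbers quoted above:)

mu_hkp, sig_hkp = 18.50, 0.10
for name, mu, sig in [("van Leeuwen et al", 18.39, 0.05), ("An et al", 18.34, 0.06)]:
    delta = mu_hkp - mu
    # separation in units of the HKP sigma, then in the new study's sigma
    print(name, round(delta / sig_hkp, 1), round(delta / sig, 1))
# -> van Leeuwen et al 1.1 2.2
# -> An et al 1.6 2.7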


In any case, both new studies make all extragalactic Cepheid distances closer. Since the Cepheid galaxies are the zero point calibrators for the secondary distance indicators, this makes all the secondary distance indicator distances closer and H0 larger, as the rough estimate below shows.
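(As a rough back-of-envelope estimate - ignoring the PL slope changes that are also involved - a 0.11 mag decrease in the LMC distance modulus scales every calibrated distance down by a factor of 10^(0.11/5), and H0 up by the same factor:)

factor = 10 ** (0.11 / 5)   # ~1.052
print(72 * factor)          # ~75.7, close to the H0 = 76 quoted in the OP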



In particular, Table 12 (I'm not going to try to reproduce it here) and Figure 3.

My impression is that the biggest single aspect missing from the otherwise good overview in the OP is an analysis of what Freedman et al. call uncertainties and errors.

Starting with "Error (random)", I think it is pertinent to ask how much the OP's summary in the form of "only {x} used ..." is blind to the frequentist, Bayesian, and Monte Carlo analyses which are reported in the Freedman et al. paper. Specifically, in the absence of any alternative analyses of the random error, is it reasonable to ignore such comments (in the OP)?

Moving on to the outlier in the HKP paper (the FP): a snippet from Figure 3 may serve as an appropriate sound bite "The systematic uncertainties for each method are indicated by the horizontal bars near the peak of each Gaussian" - the horizontal [FP] bar overlaps the horizontal bars of each of the four other methods.

The point of this thread is to discuss whether or not H0 could be in the 80's. Do you see the above as something that establishes H0 could not be in the 80's?

Your point about the overlap of the FP uncertainty is very valid, but keep in mind - as I explained to Zahl - the fundamental plane result is not why I have suggested H0 could still be in the 80's.

rtomes
2007-Aug-11, 11:50 PM
From other recent posts of yours, rtomes, it seems that you may not have caught up with the present Rules For Posting To This Board (http://www.bautforum.com/about-baut/32864-rules-posting-board.html), especially the one covering ATM ideas (my bold): (snip) Your post, which I am quoting, seems to be a promotion of the ATM idea that is the subject of one of the threads, in the ATM section, that you began recently.

Please keep promotion of your ATM ideas to only those Against the Mainstream threads which discuss them; please do not use Q&A threads to draw attention to your ideas.
Hi Nereid

Yes, I have read the rules for posting, thanks.

While there is a relationship between what I posted on the Hubble constant and my posts in the other thread, in that both are related to cycles, I disagree that this post is a promotion of those ideas. It is based only on published work by geologists and astronomers, none of whom has any knowledge of Harmonics theory, and I have not referenced that theory here in any way.

Certainly it is a new idea to measure the Hubble constant in this way, but the idea that there is a wave structure responsible for regular galaxy walls has appeared in astrophysics journals, so looking at geophysical manifestations of such a wave is not a way-out idea. The fact that recent values of the Hubble constant show the correspondence to be realistic should make it acceptable to the mainstream.

The idea is easily tested, because a number of other related, shorter geological cycles have been observed, and these can be searched for in the redshift fluctuations in the data referenced. The longer of these cycles are quite obviously present (particularly a 4th harmonic), but the full test requires the actual data to be further analysed. I would expect the various mass-extinction cycles to show up in the data. They have shorter periods, so the periods are known more accurately, and they would allow an even more accurate determination of the Hubble constant.

I have tried to contact the authors of the paper to get the data but have had no response. If anyone knows how to contact them, I would appreciate the information.

Nereid
2007-Aug-12, 02:08 AM
[snip]
Actually, most astronomers I know -- including myself -- are well aware that the Hubble constant could range anywhere between, say, 60 and 85 km/s/Mpc. We understand that significant systematic errors are possible, even likely.
Ok, let me try again. I'm sorry that you feel I didn't respond appropriately to this part of your comments. And I'm being serious, not sarcastic in what follows:

That's great to know! So if we take your experience as a correct representation of what most astronomers would think, then the problem would seem to be that this understanding is lost in translation when the information is communicated to laymen? We have two populations here - researchers and laymen.

FWIW, there are considerably more than two, I think.

There are those who, as SM points out, are well aware that there's a great deal more to estimates of H0 than just "72 ± 8", particularly the systematics (esp the importance of the zero point of the Cepheid PL relation).

There are those* who are happy to run with "72 ± 8" and neglect (or are ignorant of) the carefully stated caveats in the relevant papers.

There are those who cherry pick juicy pieces from papers here and there and string them together with winks, nudges, and idiosyncratic views of the underlying science.

And there are many sub-populations within the H0 = 72 (or 70) population.
The comments so far on this thread suggest that most people that have responded think that it is unlikely that H0 could be in the 80's based upon the fact that most studies (or at least reported in popular literature I guess) find H0 ~ 70.

In this thread I've pointed to a few reasons to be cautious about the HKP final results and why it is still a viable possibility that H0 could be in the 80's. You can see my earlier posts for those reasons. But then based upon what you're saying, if evidence was presented that H0 is in the 80's, most astronomers would not simply brush that aside by assuming that the researchers in question must have done something wrong because most studies point to lower H0 and WMAP results and concordance cosmology ... They would at least look carefully at the analysis?

I'd go further than SM on this^: even leaving aside the usual caveats about who did the research, where they published it, and so on, I think it's very difficult to say, in general, what the response of a sub-population of astronomers might be.

For example, if the paper also did a nice job of showing consistency with a large subset of earlier (published) observations - reconciling new research findings with old data, for example - in other words anticipating many of the likely questions that would be asked, it would more likely be given more serious scrutiny than a paper which merely, baldly, presented an analysis of a small set of observations.

Then there's the surprise factor: if the paper presenting the "H0 is in the 80's" conclusion comes out of the blue, with an approach that's never been used before, data that was not gathered using 'telescope time' granted on the basis of a (much) earlier proposal, and so on, then, cet. par., rather a lot of astronomers would likely not read it very carefully.

*Actually, this may be a null set ... almost no one seems to address the ± 8 part, unless they also indicate they are well aware that there's a much richer background.

^"Yes.

However, let me point out that there are two ways one might "present evidence that H0 is in the 80s".

One way is to use a single method --- say, surface-brightness fluctuations --- to measure the distance to a small set of galaxies, calculate the value of H0 based on those distance and radial velocities, and claim "H0 is 82". This will sway very few scientists, because it will be a small bit of evidence for a high value of H0, whereas there exists a much larger body of evidence for a smaller H0.

Another way is to find an important systematic error in one of the earlier steps on the distance ladder. For example, if the distance modulus to the LMC could be shown to be much smaller than 18.50, due to (this is just an example) some kind of previously undetected anomalous extinction between it and the Milky Way, then _that_ would probably cause more astronomers to take the idea seriously."

Serenitude
2007-Aug-12, 05:34 AM
RTomes,

While you may disagree, a moderator has judged it to be promotion of an ATM idea outside of a dedicated ATM thread. Consequently, this is not a judgement open to debate. Incidentally, I concur with Nereid and issue the same warning, and add, as a reminder per rule #17: if you disagree with a moderator's action, PM another moderator or an administrator. A moderator's decision isn't to be followed merely if you happen to agree. Please take this time to contact another moderator, Phil, or Fraser if you are in further disagreement with this action. The request will be fairly evaluated. In the meantime, the decision has been made, and compliance is expected. Thank you.

neilzero
2007-Aug-12, 12:55 PM
Since the expansion of the universe is accelerating, perhaps the real Hubble constant increased by about 1% over the duration of this thread? Neil

dgruss23
2007-Aug-13, 10:22 PM
However, let me point out that there are two ways one might "present evidence that H0 is in the 80s".

One way is to use a single method --- say, surface-brightness fluctuations --- to measure the distance to a small set of galaxies, calculate the value of H0 based on those distance and radial velocities, and claim "H0 is 82". This will sway very few scientists, because it will be a small bit of evidence for a high value of H0, whereas there exists a much larger body of evidence for a smaller H0.

This illustrates part of what I'm talking about. The HKP used 5 methods, but the SBF analysis involved just 6 galaxies for determining H0, and the Type II SN analysis only had 4 galaxies for determining H0. Neither of these samples is large enough to stand on its own, for reasons I indicated in my exchange with Zahl. In other words, if a researcher published an analysis that determined H0 based upon just 6 SBF distances, the paper would not be heavily cited. As an example, this paper (http://adsabs.harvard.edu/abs/2003A%26A...399..441M) has been cited only 3 times - by the authors of the paper themselves.

But if the SBF and Type II SN cannot stand on their own as independent analyses, then they really don't add anything significant to the overall result of the HKP.

Since the FP gives H0=82, that leaves you with H0=72 being primarily held up not by 5 methods, but by 2 methods -- the Tully-Fisher relation and Type Ia SN. The latter method has a high internal precision, but had only 6 Cepheid calibrators at the time of the HKP final report - creating the possibility of a systematic error in the absolute calibration of the SN Ia.

The Tully-Fisher relation is subject to larger uncertainty from data errors, but has a small intrinsic scatter if data errors are managed.



Another way is to find an important systematic error in one of the earlier steps on the distance ladder. For example, if the distance modulus to the LMC could be shown to be much smaller than 18.50, due to (this is just an example) some kind of previously undetected anomalous extinction between it and the Milky Way, then _that_ would probably cause more astronomers to take the idea seriously."

The van Leeuwen et al study and An et al studies that I've cited speak to this issue.

Zahl
2007-Aug-15, 11:26 PM
Originally Posted by dgruss23


Originally Posted by Zahl
The above was refuting your mistaken claim that a single galaxy is used to represent the cluster distance that you have repeated several times. You have provided no quotes to that effect, because it is not what they have done.

Zahl, they did use a single galaxy per cluster. I have provided the quotes and reference to Table 4 of Ferrarese et al 2000:



Originally Posted by dgruss23
Here is the paper in which the HKP initially presented their SBF analysis. The calibration of the SBF method is discussed in section 6 and its application to H0 is discussed in section 8. For those that don't want to read the technical discussion in the paper you need look no farther than Table 4 in the paper. The table is titled "Sample of F814W-SBF Galaxies for Deriving H0". The first data column is titled "Cluster". The second column is titled "Galaxy ID". Note there are only 6 galaxies listed in Table 4 -- one galaxy for each cluster.

Please look at Table 4 of Ferrarese et al. I believe that any rational individual can see from that description and a look at Table 4 that the HKP did in fact use one galaxy per cluster. However, I also provided in an earlier post the following quote from the Freedman et al HKP final report:



Originally Posted by dgruss23
Originally Posted by Freedman et al
With HST, this method (SBF) is now being extended to larger distances (Lauer et al 1998); unfortunately, however, only six galaxies beyond the Fornax cluster have published surface brightness fluctuation distances, with only four of them accurate enough to be of interest for cosmology.

So this quote establishes that the HKP in fact only used 6 galaxies for the SBF analysis - and Table 4 from Ferrarese et al 2000 shows which cluster each of those galaxies is in. Zahl, this point is irrefutable. It is what they did. Anybody following this thread can read it for themselves. And it seems that I have already in prior posts provided quotes you're saying I didn't provide.

That they used 6 galaxies that are located in 6 different clusters is not in dispute. What is in dispute is your claim that they assumed that the galaxies are at the mean distance of their respective clusters, making their result "irrelevant". I have refuted this claim many times, noting that you have not given a quote that supports this assertion. You just gave the "only six galaxies beyond the Fornax cluster" quote, but it does not make such an assumption.


Did the Hubble Key Project use a mean cluster redshift or the redshift of the galaxy for which the SBF distance was determined when they found H0 from the SBF method? Well, let's go by their own words. From Ferrarese et al (2000):

Quote:
Originally Posted by Ferrarese et al Section 8, 3rd paragraph, second column
The last difficulty to overcome in our quest for H0 is the determination of the clusters' "cosmic" velocities. (snip) ... later in paragraph ... For comparison we also list in column (6) of the same table (Table 4 mentioned earlier) the heliocentric systemic velocity of the cluster (from the CfA redshift Survey; J.Chen et al, in preparation), ...
Note Ferrarese et al did not say the last difficulty was to find the "galaxies'" cosmic velocities. They said "clusters". Why would they say "clusters" if they meant "galaxies", Zahl?

The cosmic velocity of a galaxy is the same as the cosmic velocity of the host cluster. If you think otherwise, you are promoting some bizarre ATM idea.


And from Freedman et al 2001 (the very paragraph you quoted earlier this morning zahl!):

Quote:
Originally Posted by Freedman et al
Flow corrected velocities, distances, and H0 values for the six clusters with SBF measurements are given in Table 10.

Again they use the term "clusters".

So you like to quote-mine the papers for individual words. Why not quote the caption from Figure 4 (the HKP final paper by Freedman et al.) that shows the Hubble diagram? There the distance vs. velocity results are given for "Tully-Fisher clusters", "fundamental plane clusters" and "surface brightness fluctuation galaxies".

http://www.journals.uchicago.edu/ApJ/journal/issues/ApJ/v553n1/52417/52417.fg4.html

Galaxies, not clusters.

Or why not quote Table 10 from the same paper, where the final SBF results are given? Galaxies, their flow corrected velocities, and distances are given. Galaxies, not clusters. Or why not quote this from the Ferrarese paper: "Errors on H0 are given by the formulae listed in the notes to part 3 of Table 5, for the case in which errors on the velocities and distances (d in Mpc) are identical for the N galaxies used to derive H0." It says "the N galaxies used to derive H0". Galaxies, not clusters. Instead, you just quote-mine the papers for everything where the word "cluster" appears, without understanding what they are talking about. The reason the word "clusters" is used in the above quote is that the authors are reminding the readers that the target galaxies used to derive H0 are in different clusters.


However, there is another way to check this out. Table 4 of Ferrarese et al (2000) lists the heliocentric velocity of the Coma cluster as 6965 km s-1. Is that the mean cluster redshift or the redshift of the SBF galaxy NGC 4881?

That is indeed the heliocentric systemic velocity of the Coma cluster, which they gave "for comparison", as it reads in the paper. However, it is not the redshift that was used to determine H0. And then you go and quote redshifts from sources that have nothing to do with this paper and leave out the only one that matters - the CfA redshift survey, which was the one used in the papers... Besides, even if there were a 3% error (6965 km/s / 6750 km/s = 1.03) in the redshifts, it would not change the NGC 4881-derived value of H0 by much: 6965 km/s / 102.3 Mpc = 68.1 km/s/Mpc vs. 6750 / 102.3 = 66 km/s/Mpc. This is well within the quoted errors for NGC 4881.

So, the HKP did indeed use valid distances and flow corrected velocities for the galaxies to derive H0. They do not assume (and they do not need to) that the galaxies are at the mean cluster distance. In fact, the sample size is quite sufficient to keep the error reasonable - this is discussed in detail in Table 5 of the Ferrarese paper.

http://www.journals.uchicago.edu/ApJ/journal/issues/ApJ/v529n2/40684/40684.html

Specifically, the sample size affects the error as follows:

Sqrt((R3.1/d)^2 + (0.46*H0*RSBF)^2) / Sqrt(N)

where R3.1 is the random error on the flow velocities (+/- 400 km/s), RSBF is the random error on the color-corrected magnitude and N is the sample size. This is perfectly valid statistics. Dgruss23 can't show what's wrong with it, because nothing is. The final SBF H0 result is reliable, the Ferrarese paper where it was presented is heavily cited (well over 100 citations) and none of those who cite it in the literature question (let alone dismiss) this result. You simply erred when declaring it "irrelevant".
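For concreteness, here is a minimal Python sketch of that error formula; the distance and magnitude-error inputs below are illustrative placeholders, not the actual per-galaxy values from the paper:

import math

def sbf_h0_error(r_flow, d, h0, r_sbf, n):
    # sqrt((R3.1/d)^2 + (0.46*H0*RSBF)^2) / sqrt(N), as written above
    return math.sqrt((r_flow / d) ** 2 + (0.46 * h0 * r_sbf) ** 2) / math.sqrt(n)

# Placeholder inputs: +/-400 km/s flow error, a 60 Mpc distance,
# H0 = 70 km/s/Mpc, a 0.2 mag SBF magnitude error, N = 6 galaxies.
print(round(sbf_h0_error(400.0, 60.0, 70.0, 0.2, 6), 1))   # ~3.8 km/s/Mpc

With these placeholder numbers the random error lands near the +/-4 km s-1 Mpc-1 random term quoted in the papers, though the actual per-galaxy inputs differ.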

dgruss23
2007-Aug-16, 02:50 AM
That they used 6 galaxies that are located in 6 different clusters is not in dispute. What is in dispute is your claim that they assumed that the galaxies are at the mean distance of their respective clusters, making their result "irrelevant". I have refuted this claim many times, noting that you have not given a quote that supports this assertion. You just gave the "only six galaxies beyond the Fornax cluster" quote, but it does not make such an assumption.

I've already responded to every bit of this in post #69 (http://www.bautforum.com/1044736-post69.html) and post #74 (http://www.bautforum.com/1045014-post74.html) of this thread.

So it is agreed that they used 6 galaxies. It is also agreed that each of those galaxies was in a different cluster. However you continue to claim that they did not use the galaxy as a representation of the mean cluster distance. Let's look again at the Ferrarese et al (2000) (http://adsabs.harvard.edu/abs/2000ApJ...529..745F) paper in which the HKP presented the SBF analysis in detail.

From the first paragraph of section 8 of Ferrarese et al:


Lauer et al (1998) produced F814W-SBF measurements to the central galaxies in the Abell Clusters A262, A3560, A3565, and A3742, with heliocentric velocities between 3800 and 4900 km s-1

These are 4 of the 6 SBF galaxies used by the HKP. Note that they refer to these galaxies as the central galaxies in "clusters". Central galaxies are presumed to be at the mean cluster distance - otherwise they're not central!

Now - and I already quoted this before - later in the same section they state:


The last difficulty to overcome in our quest for H0 is the determination of the clusters' "cosmic" velocities.

They go on to explain the corrections applied:


We adopt velocities corrected for the local flow field as described in Mould et al (2000). Briefly, the heliocentric velocities (col. [6] of Table 4) are corrected first to the centroid of the local group ... (Explanation of corrections made to heliocentric velocities ... see the paragraph for description of corrections made - remaining parts of this quote are still from the same paragraph) ... The flow corrected velocities thus obtained are listed in column (10) of Table 4. For comparison, we also list in column (6) of the same table the heliocentric systemic velocity of the cluster (from the CfA redshift survey; J. Chen et al., in preparation).

But note carefully here - Zahl is arguing that the redshifts were not the cluster redshift, but the redshift of the individual galaxy. However, we can see that they state column 6 is the velocity of the cluster and earlier in the paragraph they describe making corrections to the velocity in column 6! And they say straight out that their last difficulty is to find the "clusters" velocity - not the individual galaxy velocities. And I explained all this before in my earlier posts.


So you like to quote mine the papers for individual words. Why not quote the caption from figure 4 (The HKP final paper by Freedman et al.) that shows the Hubble diagram? There the distance vs. velocity results are given for "Tully-Fisher clusters", "fundamental plane clusters" and "surface brightness fluctuation galaxies".

http://www.journals.uchicago.edu/ApJ/journal/issues/ApJ/v553n1/52417/52417.fg4.html

Galaxies, not clusters.

Or why not quote Table 10 from the same paper where the final SBF results are given. Galaxies, their flow corrected velocities and distances are given. Galaxies, not clusters. Or why not quote this from the Ferrarese paper: "Errors on H0 are given by the formulae listed in the notes to part 3 of Table 5, for the case in which errors on the velocities and distances (d in Mpc) are identical for the N galaxies used to derive H0." It says "the N galaxies used to derive H0". Galaxies, not clusters. Instead, you just quote mine the papers for everything where the word "cluster" appears without understanding what they are talking about. The reason the word "clusters" is used in the above quote is that the authors are reminding the readers that the target galaxies used to derive H0 are in different clusters.

You tell me I'm quote mining and don't understand what I'm talking about. That's rich. I've repeatedly demonstrated that they used one galaxy per cluster and in calculating H0 they used the redshift of the cluster - not the individual galaxies. YOU actually are the one that is ignoring the larger context of what they did. The underlying analysis is in the Ferrarese et al paper - that is what you must look at to understand what they did.

Of course in Table 10 (Freedman et al) they listed galaxies - those are the galaxies from which the cluster distance was represented. Note in Table 4 of the Ferrarese paper not only did they include the galaxy ID, but also the cluster it belongs to ... and furthermore they noted that the heliocentric redshift they corrected with their flow model was a cluster redshift. All this backs up what I've been saying. What - you think that because they didn't list the cluster names in the final paper it means that all the analysis in the Ferrarese et al paper - analysis that demonstrates they used cluster redshifts - doesn't exist?

Of course Ferrarese et al simply followed what Lauer et al (1998) (http://adsabs.harvard.edu/abs/1998ApJ...499..577L) did. Here is what they say about the 4 SBF galaxies that the HKP adopted from their study (Section 3, first paragraph):


The BCG's instead are simply treated as test particles, without reference to their photometric properties (although we do use cluster averages for the velocities).

So let me ask you this, Zahl: How is it that they define the 6 galaxies used as "central" galaxies and then use the mean redshift for the cluster (not those individual galaxies), and yet you claim that they were not using the individual SBF galaxies as representative of the mean cluster distance?



That is indeed the heliocentric systemic velocity of the Coma cluster that they gave "for comparison" as it reads in the paper. However, it is not the redshift that was used to determine H0.

It was the redshift that was corrected to the redshift used for calculating H0 through their flow model. Go back and read the paragraph again - or what I demonstrated above.


And then you go and quote redshifts from sources that have nothing to do with this paper and leave out the only one that matters - the CfA redshift survey that was the one used in the papers...

They cited "Chen et al in preparation" as the source of the CfA redshifts they used. Why don't you do an ADS search on Chen et al and tell me what you find? Then perhaps you'll understand why I had to quote redshifts from other sources.


Besides, even if there were a 3% error (6965 km/s / 6750 km/s = 1.03) in the redshifts, it would not change the NGC 4881 derived value of H0 by much: 6965 km/s / 102.3 Mpc = 68.1 km/s/Mpc vs. 6750 / 102.3 = 66 km/s/Mpc. This is well within the quoted errors for NGC 4881.

Agreed, you've been making a huge deal about something insignificant - and in the process the discussion has been distracted from the point you seem not to grasp.


So, the HKP did indeed use valid distances and flow corrected velocities for the galaxies to derive H0. They do not assume (and they do not need to) that the galaxies are at the mean cluster distance. In fact, the sample size is quite sufficient to keep the error reasonable - this is discussed in detail in Table 5 of the Ferrarese paper.

http://www.journals.uchicago.edu/ApJ/journal/issues/ApJ/v529n2/40684/40684.html

Specifically, the sample size affects the error as follows:

Sqrt((R3.1/d)^2 + (0.46*H0*RSBF)^2) / Sqrt(N)

where R3.1 is the random error on the flow velocities (+/- 400 km/s), RSBF is the random error on the color-corrected magnitude and N is the sample size. This is perfectly valid statistics. Dgruss23 can't show what's wrong with it, because nothing is. The final SBF H0 result is reliable, the Ferrarese paper where it was presented is heavily cited (well over 100 citations) and none of those who cite it in the literature question (let alone dismiss) this result. You simply erred when declaring it "irrelevant".

You are missing the point Zahl. I did not say that the SBF distances could not be accurate. They might be very accurate -- (although if you read Ferrarese et al you'll see some explanation as to why the Coma cluster SBF has a high uncertainty). The sample size is too small for a global determination of H0. Period. That is why I said their SBF is irrelevant. And I stand by that. If you think that 6 galaxy distances is sufficient to determine the global value of H0, then we simply will not agree.

The only way their SBF result would be compelling is if the assumption that the 6 "central" brightest cluster galaxies they used are actually central to the clusters is valid. And how would we establish that the 6 SBF galaxies are central to their clusters --- well we'd need distances to those clusters from other methods. For Coma their SBF distance is 102.3 Mpc. The HKP gets 85.6 Mpc from the I-band TFR (28 galaxies) and 85.8 Mpc from the FP (81 galaxies).

So is NGC 4881's SBF distance correct? Either it is correct and (1) NGC 4881 is on the backside of the cluster not at its center or (2) NGC 4881 is at the center and the TFR and FP distances to Coma are grossly inaccurate due to an unknown systematic error. The other option is that NGC 4881's distance is incorrect and the TFR and FP distances are the correct distances.

How could we decide these options??? How about having a few more SBF distances to Coma? Which is my point - one SBF distance is not enough.

Zahl, even if you were correct (which you're not) that the SBF distances were individual distances not meant to be representative of the mean cluster distances, my primary objection would stand - 6 distances to individual galaxies is too small a sample for a global determination of H0.

dgruss23
2007-Aug-16, 03:10 AM
Zahl,

Really, I think a lot of our disagreement about the clusters vs. individual galaxies issue comes from some inconsistency in the use of the terms in the HKP papers. As I've demonstrated in my previous post, they clearly identified clusters in Table 4 and Section 8 of the Ferrarese et al SBF analysis.

However, in the final report they only provide the galaxies in Table 10 - not the clusters. Then there is this quote from section 6.4 of the final report:


As part of the Key Project, Ferrarese et al (2000a) applied an HST Cepheid calibration to the four Lauer et al (1998) SBF galaxies and derived H0=69 +/-4 +/-6 km s-1 Mpc-1. The results are unchanged if all six clusters are included.

Note that they call them galaxies in the one sentence and refer to the clusters in the next sentence. That creates some ambiguity as to what Table 10 represents.

Later in the same paragraph they resolve the ambiguity in the direction I've been arguing:


Flow corrected velocities, distances, and H0 values for the six clusters with SBF distances are given in Table 10.

Now you made a big deal about the fact that Table 10 doesn't say clusters. But in the text that refers to Table 10 they do say clusters.

Are you going to continue to claim I'm mistaken about that which I've repeatedly demonstrated I'm not mistaken about, or could we perhaps address this issue about the sample size of the SBF and Type II SN samples?

Zahl
2007-Aug-17, 12:17 AM
So you like to quote mine the papers for individual words. Why not quote the caption from figure 4 (The HKP final paper by Freedman et al.) that shows the Hubble diagram? There the distance vs. velocity results are given for "Tully-Fisher clusters", "fundamental plane clusters" and "surface brightness fluctuation galaxies".

http://www.journals.uchicago.edu/ApJ...52417.fg4.html

Galaxies, not clusters.

Or why not quote Table 10 from the same paper where the final SBF results are given. Galaxies, their flow corrected velocities and distances are given. Galaxies, not clusters. Or why not quote this from the Ferrarese paper: "Errors on H0 are given by the formulae listed in the notes to part 3 of Table 5, for the case in which errors on the velocities and distances (d in Mpc) are identical for the N galaxies used to derive H0." It says "the N galaxies used to derive H0". Galaxies, not clusters. Instead, you just quote mine the papers for everything where the word "cluster" appears without understanding what they are talking about. The reason the word "clusters" is used in the above quote is that the authors are reminding the readers that the target galaxies used to derive H0 are in different clusters.

You tell me I'm quote mining and don't understand what I'm talking about. That's rich. I've repeatedly demonstrated that they used one galaxy per cluster

"One galaxy per cluster" is not in dispute. Now answer these questions:

Why does Freedman write "Tully-Fisher clusters" and "fundamental plane clusters", but "surface brightness fluctuation galaxies" rather than "surface brightness fluctuation clusters", in the distance vs. velocity plot (Figure 4)? Why does table 4 in Ferrarese give location and magnitude data for galaxies if the velocity data is not valid for them as well? Why does the table say "H0, all galaxies ... 70 ± 4" and "H0, excluding NGC 4881 and NGC 4373 ... 69 ± 4"? Why doesn't it say "H0, all clusters ... " and "H0, excluding Coma..."? Why does the Ferrarese paper say that "Errors on H0 are given by the formulae listed in the notes to part 3 of Table 5, for the case in which errors on the velocities and distances (d in Mpc) are identical for the N galaxies used to derive H0."? Why doesn't it say "errors on the velocities and distances (d in Mpc) are identical for the N clusters used to derive H0."?


What - you think that because they didn't list the cluster names in the final paper it means that all the analysis in the Ferrarese et al paper - analysis that demonstrates they used cluster redshifts - doesn't exist?

No, it means you don't understand that analysis. One would expect that in Freedman's final table where objects, distances, velocities and H0 values are listed, it is those objects that the given data refers to. But no, according to you they are just something from which the data was derived, and the objects that this data actually refers to were left out... Yeah, right. How logical.

Zahl
2007-Aug-17, 12:55 AM
So, the HKP did indeed use valid distances and flow corrected velocities for the galaxies to derive H0. They do not assume (and they do not need to) that the galaxies are at the mean cluster distance. In fact, the sample size is quite sufficient to keep the error reasonable - this is discussed in detail in Table 5 of the Ferrarese paper.

http://www.journals.uchicago.edu/ApJ...684/40684.html

Specifically, the sample size affects the error as follows:

Sqrt((R3.1/d)^2 + (0.46*H0*RSBF)^2) / Sqrt(N)

where R3.1 is the random error on the flow velocities (+/- 400 km/s), RSBF is the random error on the color-corrected magnitude and N is the sample size. This is perfectly valid statistics. Dgruss23 can't show what's wrong with it, because nothing is. The final SBF H0 result is reliable, the Ferrarese paper where it was presented is heavily cited (well over 100 citations) and none of those who cite it in the literature question (let alone dismiss) this result. You simply erred when declaring it "irrelevant".

You are missing the point Zahl. I did not say that the SBF distances could not be accurate. They might be very accurate -- (although if you read Ferrarese et al you'll see some explanation as to why the Coma cluster SBF has a high uncertainty). The sample size is too small for a global determination of H0. Period. That is why I said their SBF is irrelevant. And I stand by that. If you think that 6 galaxy distances is sufficient to determine the global value of H0, then we simply will not agree.

The above formula directly addresses your "the sample size is too small" point. It gives statistically valid errors for H0 based on the 6 galaxies (the N refers to galaxies as it says in Ferrarese et al.). The H0 values given for the six galaxies in the Ferrarese and Freedman papers are not global but are valid for the galaxies only. The global value is calculated from them and the above formula gives the error. NGC 4881 from Coma that you have made such a fuss about does not affect the calculated global H0 value at all. That formula would give invalid errors only if the six non-global H0 determinations were not drawn from a normal distribution, but they are and the result is thus valid. This is an inescapable fact, but I don't think you are going to get it anytime soon.

dgruss23
2007-Aug-17, 01:07 AM
"One galaxy per cluster" is not in dispute. Now answer these questions:

You ignore my questions and then demand I answer these:


Why does Freedman write "Tully-Fisher clusters" and "fundamental plane clusters", but "surface brightness fluctuation galaxies" rather than "surface brightness fluctuation clusters", in the distance vs. velocity plot (Figure 4)?

Why do Freedman et al say "Flow corrected velocities, distances, and H0 values for the 6 clusters with SBF measurements are given in Table 10."? Why do they say clusters in Table 10, Zahl? Here, I'll answer for you since you have repeatedly refused to answer my direct questions: The answer is because they assumed the SBF galaxy is at the mean cluster distance and utilized the mean cluster redshift for calculating H0.


Why does table 4 in Ferrarese give location and magnitude data for galaxies if the velocity data is not valid for them as well? Why does the table say "H0, all galaxies ... 70 ± 4" and "H0, excluding NGC 4881 and NGC 4373 ... 69 ± 4"? Why doesn't it say "H0, all clusters ... " and "H0, excluding Coma..."? Why does the Ferrarese paper say that "Errors on H0 are given by the formulae listed in the notes to part 3 of Table 5, for the case in which errors on the velocities and distances (d in Mpc) are identical for the N galaxies used to derive H0."? Why doesn't it say "errors on the velocities and distances (d in Mpc) are identical for the N clusters used to derive H0."?

We've been over this and over this and you have ignored the quotes I've provided that clearly demonstrated what the HKP did in their SBF analysis. And you continue to ignore the larger point I made.

Here is the basic summary of the HKP SBF analysis:

1. Six brightest cluster galaxies had SBF distances determined utilizing the HKP Cepheid distances for absolute calibration. As I noted in the previous two posts, four of these galaxies are described as "central galaxies in the Abell clusters A262, A3560, A3565, and A3742, ..." by Ferrarese et al. As "central" galaxies they are assumed to be a good representation of the mean cluster distance.

2. They utilized the mean redshifts of the clusters in which the SBF galaxies resided for determining H0. The mean heliocentric cluster redshifts were corrected to their flow model redshifts as discussed in Ferrarese et al. This was unquestionably established in my previous posts and Zahl continues to ignore my points. If the HKP did not assume that the SBF galaxies were at the mean cluster distance, then why use the mean cluster redshift for their analysis, Zahl? Please justify such a procedure -- specifically.



No, it means you don't understand that analysis.

This from you??? Who has been repeatedly shown to be wrong in this exchange?


One would expect that in Freedman's final table where objects, distances, velocities and H0 values are listed, it is those objects that the given data refers to. But no, according to you they are just something from which the data was derived, and the objects that this data actually refers to were left out... Yeah, right. How logical.

Actually Zahl, the very same page that has Table 10 states that the H0 values for the "six clusters with SBF measurements are given in Table 10." And it is established, but you don't seem to grasp it, that mean cluster velocities were used for the SBF H0 calculation. Does everybody else besides Zahl see this? Here is the quote:


The last difficulty to overcome in our quest for H0 is the determination of the clusters' "cosmic" velocities.

Why say "clusters" if it is really individual galaxies? How about this Zahl - could you provide a quote from either of the papers that trumps this quote above - that clearly demonstrates that the velocities used were the velocities of the individual galaxies and not the clusters? Your quote from the figure 4 caption doesn't cut it. It is superceded by all the other quotes and demonstrations I've provided.

But you ignored that point from my last post too. Why do they say clusters in the text and galaxies in the table, Zahl? Your wrong explanation is that they used the individual redshifts of the individual galaxies and that the clusters were only mentioned because the galaxies were in clusters.

And you continue to ignore the larger point - six galaxies are not enough to derive a global value for H0. Neither are the four galaxies utilized for the Type II SN sample.

Would you care to comment on the small sample size of the SBF and Type II SN samples? You were so offended by my use of the term irrelevant to describe the HKP SBF result (ironic that you would get offended by that considering how willingly you've chosen to be rude on this thread).

I've explained a number of times now that the problem I have with the SBF is what I've stated in bold above. Could you comment on that? Do you actually think that 6 galaxies is enough to determine the global value of H0?

And while you're at it, here is another point you ignored which I made in a post last week:


(2) Would it be wise for the HKP to determine the value of H0 from individual galaxies in clusters using the individual redshift of the galaxy as Zahl is proposing they did?

The answer is absolutely not - it would be absurd, and while I'm critical of the SBF and Type II SN sample sizes they used, I give them a lot more credit than to think they would be foolish enough to compound the use of a single galaxy's distance in a cluster with the use of that single galaxy's redshift.

Here is where Zahl's idea about what they did goes wrong. Galaxies in clusters have peculiar motions. If you take any of the other cluster samples the HKP used such as the I-band TFR or the Fundamental Plane cluster samples, they used anywhere from ~7 to 80 galaxies in those clusters, and they did not use the individual galaxy redshifts; they used mean redshifts of cluster members corrected for various gravitationally induced flows (discussed in their papers). Any individual galaxy might have a redshift as much as 1000 km s-1 larger or smaller than the cluster mean, so you cannot use an individual galaxy's redshift to represent a cluster.

The lowest redshift members of the Coma cluster have redshifts of ~6000 km s-1 while the largest redshifts of cluster members are ~8000 km s-1. Such a range yields a huge range of H0 values. So you take the mean of multiple members in the cluster and hope that the cluster is close to being at rest relative to the Hubble flow. If it is, then the mean of the cluster members will be the cluster's cosmic velocity.
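To put numbers on that (a toy Python sketch; the 85.6 Mpc figure is the HKP I-band TFR distance to Coma quoted elsewhere in this thread):

d_coma = 85.6                        # Mpc, HKP I-band TFR distance to Coma
for v in (6000.0, 7000.0, 8000.0):   # km/s, spread of individual member redshifts
    print(round(v / d_coma, 1))      # 70.1, 81.8, 93.5 km/s/Mpc

The same cluster distance paired with different member redshifts spans H0 values from ~70 to ~93 km s-1 Mpc-1, which is exactly why a mean redshift has to be used.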

Will you respond to that point?

Or how about the related points I made here:


You are missing the point Zahl. I did not say that the SBF distances could not be accurate. They might be very accurate -- (although if you read Ferrarese et al you'll see some explanation as to why the Coma cluster SBF has a high uncertainty). The sample size is too small for a global determination of H0. Period. That is why I said their SBF is irrelevant. And I stand by that. If you think that 6 galaxy distances is sufficient to determine the global value of H0, then we simply will not agree.

The only way their SBF result would be compelling is if the assumption that the 6 "central" brightest cluster galaxies they used are actually central to the clusters is valid. And how would we establish that the 6 SBF galaxies are central to their clusters --- well we'd need distances to those clusters from other methods. For Coma their SBF distance is 102.3 Mpc. The HKP gets 85.6 Mpc from the I-band TFR (28 galaxies) and 85.8 Mpc from the FP (81 galaxies).

So is NGC 4881's SBF distance correct? Either it is correct and (1) NGC 4881 is on the backside of the cluster not at its center or (2) NGC 4881 is at the center and the TFR and FP distances to Coma are grossly inaccurate due to an unknown systematic error. The other option is that NGC 4881's distance is incorrect and the TFR and FP distances are the correct distances.

How could we decide these options??? How about having a few more SBF distances to Coma? Which is my point - one SBF distance is not enough.

Zahl, even if you were correct (which you're not) that the SBF distances were individual distances not meant to be representative of the mean cluster distances, my primary objection would stand - 6 distances to individual galaxies is too small a sample for a global determination of H0.

Will you respond to this larger issue please? You've already been shown to be wrong regarding your take on what the HKP did regarding the SBF analysis. To continue to beat that dead horse is wasting everyone's time.

dgruss23
2007-Aug-17, 01:08 AM
The above formula directly addresses your "the sample size is too small" point. It gives statistically valid errors for H0 based on the 6 galaxies (the N refers to galaxies as it says in Ferrarese et al.). The H0 values given for the six galaxies in the Ferrarese and Freedman papers are not global but are valid for the galaxies only. The global value is calculated from them and the above formula gives the error. NGC 4881 from Coma that you have made such a fuss about does not affect the calculated global H0 value at all. That formula would give invalid errors only if the six non-global H0 determinations were not drawn from a normal distribution, but they are and the result is thus valid. This is an inescapable fact, but I don't think you are going to get it anytime soon.

Ever heard of small number statistics?

Zahl
2007-Aug-17, 11:47 AM
It seems unbearably difficult for you to comprehend that the velocities given in the Vflow column in Freedman's Table 10 are valid for both galaxies and their host clusters. This is why distances and velocities are plotted in Freedman's Hubble diagram (Figure 4) for "surface brightness fluctuation galaxies" instead of "surface brightness fluctuation clusters", and this is why Ferrarese writes that the velocities and distances for the N galaxies were used to derive H0.



Originally Posted by Zahl
The above formula directly addresses your "the sample size is too small" point. It gives statistically valid errors for H0 based on the 6 galaxies (the N refers to galaxies as it says in Ferrarese et al.). The H0 values given for the six galaxies in the Ferrarese and Freedman papers are not global but are valid for the galaxies only. The global value is calculated from them and the above formula gives the error. NGC 4881 from Coma that you have made such a fuss about does not affect the calculated global H0 value at all. That formula would give invalid errors only if the six non-global H0 determinations were not drawn from a normal distribution, but they are and the result is thus valid. This is an inescapable fact, but I don't think you are going to get it anytime soon.

Ever heard of small number statistics?

Do you disagree with the formula Sqrt((R3.1/d)^2 + (0.46*H0*RSBF)^2) / Sqrt(N) and the error it gives for a sample size of 6?

Jerry
2007-Aug-17, 04:04 PM
http://arxiv.org/PS_cache/astro-ph/pdf/0505/0505465v1.pdf


We have shown that, contrary to expectations, optically thick lines do not necessarily show a minimum P-Cygni line profile flux at a Doppler-shifted wavelength that corresponds to the photospheric velocity. Instead, depending on the outflow properties, such a measurement can deliver an overestimate or an underestimate of the photospheric velocity. This is particularly problematic for earlier models which show broad P-Cygni line profile troughs, mostly for hydrogen Balmer lines. Unfortunately we have also demonstrated that, due to the more well defined photospheric radius, the lack of contaminating lines and a SED closer to that of a blackbody, it is at these earlier times that the EPM is best used.

This is important, because it reflects directly upon the Expanding Photosphere Method (EPM) used by Hamuy. What is perfectly unclear is how this impacts H0, as determined by Type II supernovae. It certainly widens the error bars.

dgruss23
2007-Aug-17, 07:58 PM
It seems unbearably difficult for you to comprehend that the velocities given in the Vflow column in Freedman's Table 10 are valid for both galaxies and their host clusters.

Not at all, what you don't comprehend is that the velocities are only valid for both the galaxies and the clusters if the galaxy is at the mean cluster distance. There is a whole lot of reality that you're leaving out of this picture.

Basic reality #1: Galaxies in clusters have peculiar motions.

Basic reality #2: The distribution of galaxies in a cluster includes a depth effect -- not all galaxies in a cluster are at the same actual distance.

Basic reality #3: Clusters may experience bulk motions relative to the Hubble flow.

The procedure for calculating H0 must account for these realities. First, you can't just pick a galaxy in a cluster, determine its distance and redshift, and calculate H0 from that data. The individual galaxy may have a peculiar motion as large as 1500 km s-1.

Generally it is assumed that galaxy clusters themselves will have smaller deviations from the Hubble flow than the individual galaxies within the clusters.

So, the procedure most groups have adopted with the TFR, FP, and SBF locally is to measure distances to multiple member galaxies in a cluster and get a mean distance for the cluster members. By getting a mean distance from multiple members one corrects for the distance distribution of the cluster.

The second part of this is to adopt a mean redshift for the members of the cluster - not the individual redshift of a single galaxy. Assuming a normal distribution of redshifts around the mean (usually redshift limits are set to eliminate foreground and background interlopers), the mean redshift will correct for the individual peculiar motions. In other words, the mean redshift is the cosmic redshift for the cluster assuming no net peculiar motion for the cluster (not always a correct assumption - but close enough for this discussion).

H0 is then derived from each cluster using the cluster's mean redshift and mean distance.
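As a minimal sketch of that procedure (Python; the member distances and redshifts below are made-up illustrative numbers, not data from any of the papers discussed here):

# Hypothetical cluster members with a depth spread and peculiar motions.
distances = [80.1, 84.3, 85.9, 87.2, 90.5]             # Mpc
velocities = [6100.0, 6600.0, 7050.0, 7400.0, 7850.0]  # km/s

d_mean = sum(distances) / len(distances)    # 85.6 Mpc
v_mean = sum(velocities) / len(velocities)  # 7000 km/s
print(round(v_mean / d_mean, 1))               # H0 from cluster means: 81.8
print(round(velocities[0] / distances[0], 1))  # H0 from one member alone: 76.2

The cluster-mean estimate averages out depth and peculiar motion; the single-member estimate does not.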

Now as we've already discussed, the HKP team did use the mean cluster redshift for the clusters the SBF galaxies reside in. The distances to the SBF galaxies were determined and utilized to calculate H0. But this procedure is only valid if the SBF galaxies are actually at (or very close to) the mean cluster distance. A key point you've continued to ignore is that the galaxies in the SBF analysis used by the HKP were Brightest Cluster Galaxies (BCGs). The assumption here is that BCGs are not only the brightest galaxies in the cluster, but are also at the center of the cluster. Again - Ferrarese et al stated:


Lauer et al (1998) produced F814W-SBF measurements to the central galaxies in the Abell clusters A262, A3560, A3565, and A3742 with heliocentric velocities between 3800 and 4900 km s-1.

"Central" is a key piece of this procedure that you have yet to acknowledge. It is the key assumption upon which any validity to their SBF analysis hinges.

For a SBF galaxy that is not in fact central, the cosmic velocity of the cluster is not the same as the cosmic velocity of the individual SBF galaxy. I'm well aware that the Coma cluster SBF distance is not as accurate as the rest, but it serves to illustrate my point.

For the sake of argument let's adopt the popular H0=70 km s-1 Mpc-1. Now NGC 4881 has a SBF distance of 102.3 Mpc. The I-band TFR distance is 85.6 Mpc and the FP is 85.8 Mpc. So here is the problem. If the NGC 4881 SBF distance is correct, and the value of the Hubble Constant is 70, then the cosmic redshift for NGC 4881 is 7161 km s-1.

But is NGC 4881 at the center (and hence at the actual mean distance) of the cluster? According to the TFR and FP results it may not be. Based upon the TFR and FP results, if those distances are correct and H0=70, the cosmic velocity of the Coma cluster is ~6000 km s-1.

If the SBF distance to NGC 4881 and the TFR and FP distances to the Coma cluster are all correct, then NGC 4881 is not central to the Coma cluster and there is a 1160 km s-1 difference between the cosmic redshift of NGC 4881 and the cosmic redshift of the Coma cluster.
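A two-line check of those numbers (Python, v = H0 * d, with 85.7 Mpc splitting the 85.6/85.8 Mpc TFR/FP values):

print(70.0 * 102.3)   # 7161.0 km/s if NGC 4881's SBF distance is right
print(70.0 * 85.7)    # ~5999 km/s if the TFR/FP cluster distances are right

The gap between the two is the ~1160 km s-1 difference at issue.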

So here is the concern with the HKP SBF result. They adopted the mean redshift of the clusters in which the SBF galaxies reside as the cosmological redshift of the galaxy at its SBF distance. They did this because they were using BCGs, which are more likely to be central than any random elliptical galaxy within a cluster. But that mean redshift is only valid for calculating H0 if the SBF distance to the galaxy is the same as the distance to the cluster.


Do you disagree with the formula Sqrt((R3.1/d)^2 + (0.46*H0*RSBF)^2) / Sqrt(N) and the error it gives for a sample size of 6?

No, the formula is fine and the error it gives is fine. What you have is a very fine estimate of H0 and the uncertainty in that H0 estimate derived from 6 galaxies. I'm not saying that the H0 value they got is not the H0 value indicated from their sample. Nor am I saying that the uncertainty that they derived is not the correct uncertainty from their data.

What I am saying is that the sample size is too small to be certain that they have actually randomly sampled from the true distribution of H0 values. And I'm also saying that the validity of any of the individual H0 values from their SBF sample hinges critically upon the assumption that the SBF galaxies are in fact central galaxies within those clusters. And that assumption is not quantified in the above equation.

Sticking with basic cosmological models in which there is a global value for H0 that we can track down, it should be remembered that the correct global value of H0 does not have to fall within the systematic uncertainty of the method utilized to determine H0.

For example, the Sandage group (http://adsabs.harvard.edu/abs/2006ApJ...653..843S) comes up with H0=62.3 (+/-5 systematic error). The Sandage group uncertainty overlaps the HKP uncertainty. Does that mean the true value of H0 falls in the overlap?

Well what about the TFR study of Tully&Pierce (http://adsabs.harvard.edu/abs/2000ApJ...533..744T) (2000)? They found H0=77 +/-8. However, their study pre-dates the final metallicity corrected Cepheid distances in the HKP final report. Adopting the HKP final Cepheid distances reduces their I-band TFR zero point from 21.57 to 21.50 and thus increases H0 to 79.5. This still overlaps with the uncertainty of the HKP TFR H0 estimate and overall H0 estimate ... but it doesn't overlap with the Sandage estimated uncertainty.
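That 77 to 79.5 conversion is just the zero-point shift propagated through the distance scale: a 0.07 mag shift rescales H0 by a factor of 10^(0.07/5) (a one-line Python check):

print(round(77.0 * 10 ** ((21.57 - 21.50) / 5.0), 1))   # 79.5 km/s/Mpc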

If Sandage et al are right, then the true value of H0 is outside the systematic uncertainty of the TP00 study and the reverse is true if Tully & Pierce are correct.

What it boils down to is that different studies adopt different procedures and you can't always capture those differences in procedure and assumptions into the systematic uncertainty you report.

Zahl
2007-Aug-18, 01:26 PM
Originally Posted by Zahl
It seems unbearably difficult for you to comprehend that the velocities given in the Vflow column in Freedman's Table 10 are valid for both galaxies and their host clusters.

Not at all, what you don't comprehend is that the velocities are only valid for both the galaxies and the clusters if the galaxy is at the mean cluster distance.

I am not sure what ATM theory you are proposing. In mainstream cosmology clusters are gravitationally bound and the expansion of the universe is stopped within them, resulting in the same expansion velocity (in kilometers per second) for all of their gravitationally bound objects after peculiar velocities have been corrected for. If the galaxies inside a cluster had different expansion velocities, they would be flying away from each other and the cluster would come apart. So what is this ATM theory you are proposing?


No, the formula is fine and the error it gives is fine. What you have is a very fine estimate of H0 and the uncertainty in that H0 estimate derived from 6 galaxies.

So are you disagreeing with the systematic error equation of 0.46*H0*SSBF and its given error (+/- 6 km/s/Mpc) then?


What I am saying is that the sample size is too small to be certain that they have actually randomly sampled from the true distribution of H0 values.

What evidence do you have that the sample is not actually random or are you just arguing that this could be the case?

parejkoj
2007-Aug-18, 01:58 PM
I am not sure what ATM theory you are proposing. In mainstream cosmology clusters are gravitationally bound and the expansion of the universe is stopped within them, resulting in the same expansion velocity (in kilometers per second) for all of their gravitationally bound objects after peculiar velocities have been corrected for. If the galaxies inside a cluster had different expansion velocities, they would be flying away from each other and the cluster would come apart. So what is this ATM theory you are proposing?


dgruss is correct about this point, no ATM involved. Clusters are gravitationally bound, but that doesn't mean that all the galaxies within a cluster will have exactly the same redshift. The "fingers of god" (http://en.wikipedia.org/wiki/Fingers_of_God) are due to this large scatter in velocity between the cluster's members.

dgruss23
2007-Aug-18, 02:16 PM
I am not sure what ATM theory you are proposing. In mainstream cosmology clusters are gravitationally bound and the expansion of the universe is stopped within them, resulting in the same expansion velocity (in kilometers per second) for all of their gravitationally bound objects after peculiar velocities have been corrected for.

And you correct for peculiar motions by getting a mean redshift for the cluster and assuming that the cluster has a negligible bulk motion relative to the Hubble flow - which I already pointed out:


The procedure for calculating H0 must account for these realities. First, you can't just pick a galaxy in a cluster, determine its distance and redshift, and calculate H0 from that data. The individual galaxy may have a peculiar motion as large as 1500 km s-1.

Generally it is assumed that galaxy clusters themselves will have smaller deviations from the Hubble flow than the individual galaxies within the clusters.

But let me ask you this. Since clusters are gravitationally bound, and individual galaxies have peculiar velocities that must be corrected for - and "expansion stops" within them ... it would be an incorrect procedure to take a single galaxy from one of those clusters and calculate H0 from that galaxy and the redshift of that galaxy ... right?

I ask because as this discussion has proceeded you've steadily changed your position toward the procedure I've advocated from the start - without acknowledging that I was right. I'm not talking about the "SBF irrelevant" opinion I stated. I know you don't agree with that; I'm talking about the correct procedure for calculating H0 using the SBF galaxies. Here was what you said early on in this exchange:



When finding H0 with the Surface Brightness Fluctuations method a distance is determined to a galaxy (not cluster) by measuring SBF in that galaxy and finding a Cepheid in that galaxy for calibration. Redshift is then found and H0 calculated. There is no need for the SBF galaxy to be representative of the mean cluster distance.

This is of course incorrect as I pointed out immediately and subsequently demonstrated. The HKP used mean cluster redshifts for the 6 SBF galaxies and described those 6 galaxies as "central" to the cluster.

As I pointed out, you cannot select a galaxy from a cluster and calculate H0 from that galaxy's distance and redshift because (1) the redshift of the galaxy may be contaminated by peculiar motion and (2) the galaxy may not be representative of the mean cluster distance:


Here is where Zahl's idea about what they did goes wrong. Galaxies in clusters have peculiar motions. If you take any of the other cluster samples the HKP used such as the I-band TFR or the Fundamental Plane cluster samples, they used anywhere from ~7 to 80 galaxies in those clusters, and they did not use the individual galaxy redshifts; they used mean redshifts of cluster members corrected for various gravitationally induced flows (discussed in their papers). Any individual galaxy might have a redshift as much as 1000 km s-1 larger or smaller than the cluster mean, so you cannot use an individual galaxy's redshift to represent a cluster.

The lowest redshift members of the Coma cluster have redshifts of ~6000 km s-1 while the largest redshifts of cluster members are ~8000 km s-1. Such a range yields a huge range of H0 values. So you take the mean of multiple members in the cluster and hope that the cluster is close to being at rest relative to the Hubble flow. If it is, then the mean of the cluster members will be the cluster's cosmic velocity.

As of post#91 you seemed not yet to understand what the HKP did or why what you were proposing they did would be absurd:


So, the HKP did indeed use valid distances and flow corrected velocities for the galaxies to derive H0. They do not assume (and they do not need to) that the galaxies are at the mean cluster distance.

It does appear that you now understand I was right in my earlier statements although you continue to suggest I don't know what I'm talking about. You're now talking about correcting for peculiar motions.

But do you understand that clusters have a depth effect? In other words, the galaxies in a cluster are not all at the same distance. If you calculate the distances to multiple galaxies within a cluster you'll get a distribution of distances. With a large enough sample the mean and the median distance for the galaxies in the cluster will be very close.

So H0 is calculated from the mean distance and the mean redshift. With the SBF analysis, the HKP takes the SBF galaxy to be at the center because it is a brightest cluster galaxy. You said earlier (quoted above) that there was no need for them to do so - but there is because they are using the mean redshift of the cluster for calculating H0. If the SBF galaxies are not at the mean cluster distance, then the H0 value derived is not correct.


So are you disagreeing with the systematic error equation of 0.46*H0*SSBF and its given error (+/- 6 km/s/Mpc) then?

No, the statistics are properly applied to their sample. But as I pointed out, Sandage et al find 62.3 +/-5 while Tully&Pierce find 79.5 +/-8. The statistical uncertainties do not overlap. Both studies cannot be correct - even though each study's statistical calculation of its uncertainty is valid.


What evidence do you have that the sample is not actually random or are you just arguing that this could be the case?

This is what I said:


What I am saying is that the sample size is too small to be certain that they have actually randomly sampled from the true distribution of H0 values.

What evidence do you have that it is random? You could strengthen any evidence you do have with a larger sample of SBF galaxies.

Zahl
2007-Aug-19, 04:14 PM
Originally Posted by Zahl
I am not sure what ATM theory you are proposing. In mainstream cosmology clusters are gravitationally bound and the expansion of the universe is stopped within them, resulting in the same expansion velocity (in kilometers per second) for all of their gravitationally bound objects after peculiar velocities have been corrected for. If the galaxies inside a cluster had different expansion velocities, they would be flying away from each other and the cluster would come apart. So what is this ATM theory you are proposing?

Originally Posted by parejkoj:

dgruss is correct about this point, no ATM involved. Clusters are gravitationally bound, but that doesn't mean that all the galaxies within a cluster will have exactly the same redshift. The "fingers of god" (http://en.wikipedia.org/wiki/Fingers_of_God) are due to this large scatter in velocity between the cluster's members.

No, he is not. If you had read the article you linked to, you would have learned that the fingers are caused by peculiar velocities. In the above quote I wrote after the peculiar velocities have been corrected for. Once this is done, the fingers collapse to points and all gravitationally bound objects within the cluster have the same expansion velocity (given in the Vflow column in Freedman's table 10), contrary to what dgruss23 claims.

Zahl
2007-Aug-19, 04:34 PM
dgruss23:
Here was what you said early on in this exchange:


Originally Posted by Zahl post#56
When finding H0 with the Surface Brightness Fluctuations method a distance is determined to a galaxy (not cluster) by measuring SBF in that galaxy and finding a Cepheid in that galaxy for calibration. Redshift is then found and H0 calculated. There is no need for the SBF galaxy to be representative of the mean cluster distance.

This is of course incorrect as I pointed out immediately and subsequently demonstrated. The HKP used mean cluster redshifts for the 6 SBF galaxies and described those 6 galaxies as "central" to the cluster.

You apparently think that because the preceding sentence reads "a distance is determined to a galaxy (not cluster)" that the following sentence "Redshift is then found" means that they just take the radial velocity of that galaxy, but the quote doesn't actually say that. In fact, that passage does not describe at all how the redshift is determined for H0 calculation, because the point of contention back then was not redshifts but distances and specifically your claim that the SBF galaxies were assumed to be at the mean cluster distance - a notion which the above passage refutes. But none of this is relevant to the following which you apparently now finally accept:


Originally Posted by Zahl
It seems unbearably difficult for you to comprehend that the velocities given in the Vflow column in Freedman's Table 10 are valid for both galaxies and their host clusters.


Originally Posted by Zahl
In mainstream cosmology clusters are gravitationally bound and the expansion of the universe is stopped within them, resulting in the same expansion velocity (in kilometers per second) for all gravitationally bound objects inside them after peculiar velocities have been corrected for. If the galaxies had different expansion velocities, they would be expanding away from each other and the cluster would fly apart.

So you either accept the above or you are proposing some ATM theory. Which is it?


As of post#91 you seemed not yet to understand what the HKP did or why what you were proposing they did would be absurd:

Quote:
Originally Posted by Zahl
So, the HKP did indeed use valid distances and flow corrected velocities for the galaxies to derive H0. They do not assume (and they do not need to) that the galaxies are at the mean cluster distance.

Huh? The above clearly says "flow corrected". Do you happen to have dyslexia or something?



So are you disagreeing with the systematic error equation of 0.46*H0*SSBF and its given error (+/- 6 km/s/Mpc) then?

No, the statistics are properly applied to their sample.

Your argumentation makes no sense in the slightest. First you argue for significant systematic errors and then you agree with the given (small) systematic errors. Make up your mind. Do you agree with the latter or the former? Because you can't agree with both and still claim to be logical.



What evidence do you have that the sample is not actually random or are you just arguing that this could be the case?

This is what I said:


Originally Posted by dgruss23
What I am saying is that the sample size is too small to be certain that they have actually randomly sampled from the true distribution of H0 values.

What evidence do you have that it is random? You could strengthen any evidence you do have with a larger sample of SBF galaxies.

I have qualitative and quantitative evidence. The qualitative evidence is that the result is not subject to systematic undetected biases in a single cluster environment (such as undetected dust in front of the target galaxy) because different cluster environments are sampled. The target galaxies are also in different general directions so that undetected systematics in the local flow field do not give rise to systematic errors. In fact, working in the context of your own argument, you have not given any reason why the galaxies would be systematically located on the far side of their host clusters as required by your case. Again working in the context of your own argument, if there were a systematic bias in galaxy locations, it would be more likely that the galaxies would be on the near side of the clusters, because galaxies on the near side are not as obstructed from our point of view by other cluster structures and dust as those on the far side, potentially leading to a selection effect that prefers galaxies on the near side.

The quantitative evidence comes from a Jarque-Bera normality test (done with R 2.5.0 & the fBasics statistics package) run on the sample of six H0 values. It gives the following results:

LM p-value: 0.946
ALM p-value: 0.723
Asymptotic: 0.919

This is in very good agreement with the distribution expected if the sample was indeed random and had no outliers. Therefore we can be confident that the values do not come from a mixture distribution (a true and a biased one). This leaves the possibility that the parameter µ is biased by a constant systematic error that equally affects all H0 values in the sample. But this is unlikely as I qualitatively argued above. Moreover, such an error would not be lessened or even detected by increasing the sample size (because increasing the sample size does not help to reduce the systematic error when that systematic error is constant), thus refuting dgruss23's insufficient sample size argument.
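For readers who want to reproduce this kind of check, the same test is available in scipy (a minimal sketch; the six H0 values below are hypothetical stand-ins, since the per-galaxy values are not reproduced in this post):

from scipy import stats

# Hypothetical stand-ins for the six per-galaxy H0 values (km/s/Mpc).
h0_values = [68.1, 70.5, 66.9, 72.3, 69.4, 71.0]

stat, p = stats.jarque_bera(h0_values)
print(stat, p)   # a large p-value means no evidence against normality

One caveat worth keeping in mind: the Jarque-Bera test is asymptotic, so its p-values are only approximate for a sample as small as N = 6.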

Now let's hear dgruss23's evidence for systematic errors.

TomT
2007-Aug-19, 05:12 PM
No, he is not. If you had read the article you linked to, you would have learned that the fingers are caused by peculiar velocities. In the above quote I wrote after the peculiar velocities have been corrected for. Once this is done, the fingers collapse to points and all gravitationally bound objects within the cluster have the same expansion velocity (given in the Vflow column in Freedman's table 10), contrary to what dgruss23 claims.

If this is true, since V = H0 * D, wouldn't it then be true that all gravitationally bound objects within the cluster are the same distance from us?
In other words, if H0 is a constant, and the expansion velocities of the objects are all the same, then the distances D = V/H0, would also all be the same.

dgruss23
2007-Aug-19, 06:28 PM
You apparently think that because the preceding sentence reads "a distance is determined to a galaxy (not cluster)" that the following sentence "Redshift is then found" means that they just take the radial velocity of that galaxy, but the quote doesn't actually say that.

That's my point. Your summary is so general as to be misleading. And perhaps you could explain the incorrect reference to the Cepheid part of the description?



When finding H0 with the Surface Brightness Fluctuations method a distance is determined to a galaxy (not cluster) by measuring SBF in that galaxy and finding a Cepheid in that galaxy for calibration. Redshift is then found and H0 calculated. There is no need for the SBF galaxy to be representative of the mean cluster distance.

This is clearly incorrect. The Cepheid galaxies used to calibrate the SBF method are not the same galaxies as the SBF galaxies used to calculate H0. The Table 10 galaxies do not have Cepheid distances. Given this error, it is not hard to understand why I would take your meaning to be that you were saying redshift is found for the galaxy -- especially when you emphasize the incorrect notion that there was no need for the galaxy to be representative of the mean cluster distance.



In fact, that passage does not describe at all how the redshift is determined for H0 calculation, because the point of contention back then was not redshifts but distances and specifically your claim that the SBF galaxies were assumed to be at the mean cluster distance - a notion which the above passage refutes.

You've offered nothing that refutes my "claim". The SBF galaxies are assumed to be central, are stated by Ferrarese et al to be central and it would make no sense to calculate H0 from the mean cluster redshift if the galaxies were not assumed to be central. You still refuse to acknowledge whether or not you understand that clusters have a depth effect.



Huh? The above clearly says "flow corrected". Do you happen to have dyslexia or something?

You obviously don't know what dyslexia means any more than you understand what "flow corrected" is. Flow corrected velocities are adjustments to the heliocentric redshifts to account for the following motions: first the correction to the centroid of the Local Group is applied, then infall corrections for three attractors: Virgo, the Great Attractor, and the Shapley Concentration. This flow correction model has absolutely nothing to do with the intracluster peculiar motions of the individual galaxies within the cluster - that is what I'm referring to. The flow model that you're referring to is a set of corrections for bulk motions of groups and clusters due to other major mass concentrations. See section 8 of the Ferrarese paper for the discussion of the flow model; for more details they refer the reader to Mould et al (2000).

In the context of this discussion my point is that you cannot calculate H0 from the distance to an individual galaxy in a cluster and its individual redshift. You must use the mean cluster distance and the mean cluster redshift because the individual galaxies have varying distances and random peculiar motions that can be a sizeable fraction of the cosmological redshift.

And this is where my objection to the number of galaxies comes in. They used a single galaxy to define a cluster distance. As I've stated a number of times now - and you continue to fail to see the relevance of the point because you have mistaken notions about how this was done - the only reason they could reasonably use a single galaxy to determine the cluster distance is because they selected the BCG's - which are traditionally assumed to be central - the very word Ferrarese et al used.



Your argumentation makes no sense in the slightest. First you argue for significant systematic errors and then you agree with the given (small) systematic errors. Make up your mind. Do you agree with the latter or the former? Because you can't agree with both and still claim to be logical.

You have such a narrow understanding. Zahl, as I pointed out - it is not always easy or possible to capture all the systematic errors. The equation captures what they think they know about the systematic errors and if they are right that the galaxies are at the cluster center, then their systematic error is just fine.

You continue to selectively respond to my examples and ignore those that you don't have an answer for.




I have qualitative and quantitative evidence. The qualitative evidence is that the result is not subject to systematic undetected biases in a single cluster environment (such as undetected dust in front of the target galaxy) because different cluster environments are sampled. The target galaxies are also in different general directions so that undetected systematics in the local flow field do not give rise to systematic errors. In fact, working in the context of your own argument, you have not given any reason why the galaxies would be systematically located on the far side of their host clusters as required by your case. Again working in the context of your own argument, if there were a systematic bias in galaxy locations, it would be more likely that the galaxies would be on the near side of the clusters, because galaxies on the near side are not as much obstructed from our point of view by other cluster structures and dust as those on the far side, potentially leading to a selection effect that prefers galaxies on the near side.

The part in bold is completely inapplicable to this situation. These are brightest cluster galaxies - obstruction is not an issue!


The quantitative evidence comes from the Jarque-Bera normality test (done with R 2.5.0 & fBasics statistics package) run on the sample of six H0 values. It gives the following results:

LM p-value: 0.946
ALM p-value: 0.723
Asymptotic: 0.919

This is in very good agreement with the distribution expected if the sample was indeed random and had no outliers. Therefore we can be confident that the values do not come from a mixture distribution (true and a biased one). This leaves the possibility that the parameter µ is biased by a constant systematic error that equally affects all H0 values in the sample. But this is unlikely as I qualitatively argued above. Moreover, such an error would not be lessened or even detected by increasing the sample size (because increasing the sample size does not help to reduce the systematic error when that systematic error is constant), thus refuting dgruss23's insufficient sample size argument.
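As an illustration of the test quoted above: a minimal Python sketch, assuming hypothetical data, since the six individual H0 values are not listed in the thread. Note that scipy reports only the asymptotic p-value (the third figure above); R's fBasics adds the finite-sample LM and ALM variants, which matter for a sample as small as n = 6.

# Jarque-Bera normality check in the spirit of the quoted test.
# The h0_values below are HYPOTHETICAL placeholders - the actual six
# per-galaxy H0 values are not given in the thread.
from scipy import stats

h0_values = [69.5, 72.7, 68.0, 71.2, 74.1, 66.9]

stat, p = stats.jarque_bera(h0_values)
print(f"JB statistic = {stat:.3f}, asymptotic p = {p:.3f}")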

Really? All systematic errors are distance independent? Have you actually compared the SBF distances with other distance estimates? We'll go in order of increasing SBF distance:

NGC 4373 (ESO 322-6) - a member of the Centaurus cluster. The HKP finds 36.3 Mpc for this galaxy. Tonry et al find a SBF distance to the nearby galaxy ESO 322-8 of 36.3 Mpc - in exact agreement. Newman et al (1999) find a Cepheid distance to NGC 4603 in Centaurus of 33.3 Mpc. Tully&Pierce (2000) find an I-band TFR distance of 38.9 Mpc. So the HKP SBF distance is in excellent agreement with the other estimates.

Next out are the neighbor galaxies NGC 5193 and IC 4296 for which the HKP find distances of 51.5 and 55.5 Mpc respectively. These galaxies are members of clusters which would be part of a larger structure of clusters encompassing Abell 3574. Abell 3574 has a Fundamental plane distance of 51.6 Mpc and using the K-band TFR a distance of 57.5 Mpc is found. Again the SBF distance is in good agreement with the other methods.

Next up is NGC 7014, with a SBF distance of 67.3 Mpc. The K-band TFR for 4 neighbors of NGC 7014 gives a distance of 58.3 Mpc. The NGC 7014 distance modulus is larger by +0.31 mag.

NGC 708 in Abell 262 has a SBF distance of 68.2 Mpc. Tully&Pierce (2000) find an I-band TFR distance of 58.3 Mpc to the Pisces filament to which A262 belongs. The NGC 708 SBF distance modulus is larger by +0.34 mag.

Finally there is the Coma cluster galaxy NGC 4881 for which the SBF distance is 102.3 Mpc. This compares with 83.6 Mpc for the Tully&Pierce I-band TFR and 85.8 Mpc for the HKP Fundamental Plane distance. The SBF distance modulus is larger by +0.37 to +0.44 mag.

This suggests a possible systematic difference that increases with distance. For the galaxies with SBF distances less than 60 Mpc the SBF is in agreement with the other distance methods. For the two galaxies at ~68 Mpc, the SBF distance moduli are greater by +0.31 to +0.34 mag. For the Coma cluster the SBF distance modulus is greater by +0.37 to +0.44 mag.
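The magnitude offsets quoted above follow from the distance-modulus relation: for two distance estimates d1 and d2, the offset is 5*log10(d1/d2) mag. A quick sketch checking the arithmetic, using only the distances already given in this post:

import math

def delta_mu(d1_mpc, d2_mpc):
    # Distance-modulus difference in magnitudes: 5 log10(d1/d2).
    return 5 * math.log10(d1_mpc / d2_mpc)

print(f"NGC 7014 vs K-band TFR neighbors: {delta_mu(67.3, 58.3):+.2f} mag")   # +0.31
print(f"NGC 708 vs Pisces I-band TFR:     {delta_mu(68.2, 58.3):+.2f} mag")   # +0.34
print(f"NGC 4881 vs Coma I-band TFR:      {delta_mu(102.3, 83.6):+.2f} mag")  # +0.44
print(f"NGC 4881 vs Coma FP:              {delta_mu(102.3, 85.8):+.2f} mag")  # +0.38 (quoted as +0.37 above)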

Do I have an explanation for this? No.

Can we be certain there is a systematic error in the SBF distances? No.

Why can't we be certain there is a systematic error that increases with distance? The sample is too small! Only 6 galaxies! More SBF work is needed beyond 60 Mpc to figure this out.

dgruss23
2007-Aug-19, 06:39 PM
If this is true, since V = H0 * D, wouldn't it then be true that all gravitationally bound objects within the cluster are the same distance from us?
In other words, if H0 is a constant, and the expansion velocities of the objects are all the same, then the distances D = V/H0, would also all be the same.

Tom,

Zahl doesn't understand the application of peculiar motions to this scenario. If you look at his/her comments as I noted in my previous post you'll see he thinks that peculiar motions within the cluster (fingers of god effect) are accounted for by the flow model.

That is simply not the case. The Flow model corrects for large scale motions of the local group and infall from nearby attractors. It has nothing to do with the individual motions of galaxies within a cluster. You "correct" for peculiar motions by taking a mean redshift for the cluster. If the cluster redshifts have a normal distribution, then the mean of the cluster redshifts essentially cancels out the random peculiar motions.

Then if you calculate a mean distance by determining the distances to multiple galaxies in the cluster you have a distance that is "corrected" for the depth of the cluster.

Using the mean redshift and mean distance gives you H0.
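A minimal sketch of that procedure, with entirely hypothetical member data (and with the flow-model correction to the mean omitted for brevity):

import statistics

# HYPOTHETICAL cluster members: (distance in Mpc, observed velocity in
# km/s), where each velocity is a ~7000 km/s cosmological component plus
# a random intracluster peculiar motion.
members = [
    (96.0, 7620),   # near side, also moving away within the cluster
    (100.5, 6410),  # near the core, large peculiar motion toward us
    (104.0, 7180),
    (99.0, 6890),
    (101.5, 6950),
]

d_mean = statistics.mean(d for d, v in members)
v_mean = statistics.mean(v for d, v in members)
print(f"H0 from cluster means: {v_mean / d_mean:.1f} km/s/Mpc")  # ~70

# A single member mixes the depth effect with its own peculiar motion:
d1, v1 = members[0]
print(f"H0 from one galaxy:    {v1 / d1:.1f} km/s/Mpc")  # ~79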

This is all very basic stuff and yet for all my patience and effort to explain this to Zahl, I might as well be doing this: :wall:

Jerry
2007-Aug-21, 06:22 PM
In general I agree with Dgruss, but Zahl makes a good point when he suggests luminosity bias should favor selection of galaxies within a cluster that are closer rather than further. So the reason Dgruss suggests that the SBF distances may be suspect is counter-intuitive.

In any case, since the calculated total systematic errors do not overlap, something is systematically wrong with at least one study. I think it is premature to state, at this time, that there is a convergence of observational evidence that is consistent with lower values of H0. I think it is more correct to say lower values of H0 are more compatible with current cosmological theory.

dgruss23
2007-Aug-21, 08:59 PM
In general I agree with Dgruss, but Zahl makes a good point when he suggests luminosity bias should favor selection of galaxies within a cluster that are closer rather than further. So the reason Dgruss suggests that the SBF distances may be suspect is counter-intuitive.

Jerry, Zahl is correct that luminosity bias can be an issue. However, it is not an issue with the HKP SBF analysis because they only used a single galaxy - the brightest cluster galaxy. The BCG's in these relatively local clusters (<150 Mpc) are well within the detection limits of the surveys.

You have to watch out for luminosity bias when you survey multiple galaxies within a cluster. As you include fainter galaxies from the luminosity function, the chances of preferentially selecting the near-side galaxies increases. If the near-side galaxies in the survey outnumber the far-side galaxies, then the calculated cluster distance will be too small and the resulting H0 will be too large.

This is similar to the Malmquist bias in which for a magnitude limited sample, as one includes galaxies at steadily larger distances, the sample will preferentially include the galaxies from the brighter end of the luminosity function. For secondary distance indicators this will lead to a trend of systematically underestimating distances as distance increases and results in a steadily increasing value of H0 as distance increases.

Just how significant Malmquist bias is also depends critically upon the intrinsic scatter of the secondary distance indicator. Distance indicators with small intrinsic scatter will have smaller effects from Malmquist bias.
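A toy Monte Carlo makes the bias concrete. Every number here is an assumption chosen for illustration (a Gaussian standard candle with 0.4 mag intrinsic scatter, an apparent-magnitude limit of 14.5, a true H0 of 72): the magnitude cut strips faint galaxies from the sample at large distances, the standard-candle distances come out too small there, and the inferred H0 climbs with distance.

import numpy as np

rng = np.random.default_rng(0)

M_MEAN, SIGMA = -21.0, 0.4  # assumed mean absolute magnitude and scatter
M_LIMIT = 14.5              # assumed survey apparent-magnitude limit
H0_TRUE = 72.0

# Galaxies spread uniformly through a sphere of radius 150 Mpc.
d_true = 150 * rng.random(200_000) ** (1 / 3)
M_true = rng.normal(M_MEAN, SIGMA, d_true.size)
m_app = M_true + 5 * np.log10(d_true * 1e5)  # distance modulus, d in Mpc

sel = m_app < M_LIMIT  # the magnitude-limited sample

# Standard-candle distances assume every galaxy sits at M = M_MEAN.
d_est = 10 ** ((m_app[sel] - M_MEAN - 25) / 5)  # back to Mpc
v = H0_TRUE * d_true[sel]                       # ideal Hubble-flow velocity

for lo, hi in [(20, 60), (60, 100), (100, 150)]:
    shell = (d_true[sel] > lo) & (d_true[sel] < hi)
    print(f"{lo}-{hi} Mpc: mean inferred H0 = {np.mean(v[shell] / d_est[shell]):.1f}")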

At any rate, luminosity bias is not an issue with BCG's of the HKP SBF sample.


In any case, since the calculated total systematic errors do not overlap, something is systematically wrong with at least one study. I think it is premature to state, at this time, that there is a convergence of observational evidence that is consistent with lower values of H0. I think it is more correct to say lower values of H0 are more compatible with current cosmological theory.

I would agree that the current cosmological theory favors a lower value of H0. That's why it is so important to periodically examine whether or not the empirically derived value of H0 can be improved.

Zahl
2007-Aug-22, 05:45 PM
Originally Posted by Zahl
No, he is not. If you had read the article you linked to, you would have learned that the fingers are caused by peculiar velocities. In the above quote I wrote after the peculiar velocities have been corrected for. Once this is done, the fingers collapse to points and all gravitationally bound objects within the cluster have the same expansion velocity (given in the Vflow column in Freedman's table 10), contrary to what dgruss23 claims.

TomT:

If this is true, since V = H0 * D, wouldn't it then be true that all gravitationally bound objects within the cluster are the same distance from us? In other words, if H0 is a constant, and the expansion velocities of the objects are all the same, then the distances D = V/H0, would also all be the same.

All gravitationally bound objects within clusters are not at the same distance from us and the global value of H0 is constant, but H0 from a single bound target galaxy in a cluster is not global even if there are no measurement errors. There will be intrinsic scatter in such values following straight from the fact that the expansion of the universe is stopped within clusters. However, a few independent measurements beat down this scatter (random error can be reduced by increasing the sample size) unless there are systematic errors. Besides, such scatter is small because as big as clusters are, they are still pretty small in the grand scheme of things and the universe wouldn't expand much over a cluster size slab of space anyway. A galaxy located somewhere near the center, having an expansion redshift of (say) 7000 km/s at 100 Mpc from us would give an estimate of 7000/100=70 km/s/Mpc for H0. If that galaxy was located on the outskirts of the cluster in the radial direction, at a distance of 5 Mpc (about the characteristic radius of clusters) from the core, the intrinsic error in the H0 determination would be only about 3 km/s/Mpc (7000/105=66.7 km/s/Mpc -> 70-66.7=3.3 km/s/Mpc) compared with the case where the galaxies were two free streaming galaxies in the Hubble flow separated by that same 5 Mpc (7000/100=70 km/s/Mpc & 7350/105=70 km/s/Mpc). This is why it does not matter much where the target galaxy is located in the cluster - a few independent measurements of individual galaxies in different clusters beat down the random error and the error would not amount to much anyway.
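For what it's worth, the arithmetic in that post can be checked in a couple of lines (same assumed numbers: a 7000 km/s cluster at 100 Mpc with members spread 5 Mpc either side along the line of sight):

# Worst-case intrinsic spread in H0 from cluster depth alone,
# using the figures in the post above.
V, D, DEPTH = 7000.0, 100.0, 5.0  # km/s, Mpc, Mpc

for d in (D - DEPTH, D, D + DEPTH):
    print(f"member at {d:5.1f} Mpc -> H0 = {V / d:.1f} km/s/Mpc")
# near side 73.7, center 70.0, far side 66.7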

Zahl
2007-Aug-22, 06:09 PM
So dgruss23, you have twice dodged this question. Your answer?


Originally Posted by Zahl
It seems unbearably difficult for you to comprehend that the velocities given in the Vflow column in Freedman's Table 10 are valid for both galaxies and their host clusters.

Originally Posted by Zahl
In mainstream cosmology clusters are gravitationally bound and the expansion of the universe is stopped within them, resulting in the same expansion velocity (in kilometers per second) for all gravitationally bound objects inside them after peculiar velocities have been corrected for. If the galaxies had different expansion velocities, they would be expanding away from each other and the cluster would fly apart.

So you either accept the above or you are proposing some ATM theory. Which is it?



Originally Posted by Zahl
You apparently think that because the preceding sentence reads "a distance is determined to a galaxy (not cluster)" that the following sentence "Redshift is then found" means that they just take the radial velocity of that galaxy, but the quote doesn't actually say that.

That's my point. Your summary is so general as to be misleading. And perhaps you could explain the incorrect reference to the Cepheid part of the description?



Originally Posted by zahl
When finding h0 with the Surface Brightness Fluctuations method a distance is determined to a galaxy (not cluster) by measuring SBF in that galaxy and finding a Cepheid in that galaxy for calibration. Redshift is then found and h0 calculated. There is no need for the SBF galaxy to be representative of the mean cluster distance.

This is clearly incorrect. The Cepheid galaxies used to calibrate the SBF galaxies are not the same SBF galaxies used to calculate H0. The table 10 galaxies do not have Cepheid distances. Given this error it is not hard to understand why I would take your meaning to be that you were saying redshift is found for the galaxy -- especially when you emphasize the incorrect notion that you think there was no need for the galaxy to be representative of the mean cluster distance.


In fact, that passage does not describe at all how the redshift is determined for H0 calculation, because the point of contention back then was not redshifts but distances and specifically your claim that the SBF galaxies were assumed to be at the mean cluster distance - a notion which the above passage refutes.

You've offered nothing that refutes my "claim". The SBF galaxies are assumed to be central, are stated by Ferrarese et al to be central and it would make no sense to calculate H0 from the mean cluster redshift if the galaxies were not assumed to be central. You still refuse to acknowledge whether or not you understand that clusters have a depth effect.


Huh? The above clearly says "flow corrected". Do you happen to have dyslexia or something?

You obviously don't know what dyslexia means any more than you understand what "flow corrected" is. Flow corrected velocities are adjustments to the heliocentric redshifts to account for the following motions: first a correction for the motion of the Local Group centroid is applied, then infall corrections for three attractors: Virgo, the Great Attractor, and the Shapley Concentration. This flow model has absolutely nothing to do with the intracluster peculiar motions of the individual galaxies within the cluster - that is what I'm referring to. The flow model that you're referring to is a set of corrections for bulk motions of groups and clusters due to other major mass concentrations. See section 8 of the Ferrarese paper for the discussion on the flow model; for more details they refer the reader to Mould et al (2000).

I am not going to debate semantics with you as they have no relevance to your claim that the HKP SBF results are subject to significant systematics that render them irrelevant, a notion for which you have so far offered no evidence. This is the only thing that interests me in this exchange. I just note that there is nothing wrong with phrases like "flow corrected velocities for the galaxies" as evidenced by the fact that Freedman herself used such in the final HKP paper: "The galaxy velocities have been corrected for the flow-field model described above."



Your argumentation makes no sense in the slightest. First you argue for significant systematic errors and then you agree with the given (small) systematic errors. Make up your mind. Do you agree with the latter or the former? Because you can't agree with both and still claim to be logical.

You have such a narrow understanding, Zahl. As I pointed out, it is not always easy or possible to capture all the systematic errors. The equation captures what they think they know about the systematic errors, and if they are right that the galaxies are at the cluster center, then their systematic error is just fine.

So (according to you) they were not able to capture all the systematic errors. Therefore (again according to you) the formula and the systematic errors given in the paper are wrong. What are the correct systematic errors then and how are they derived? You can't expect to be taken seriously if you argue that the real systematic errors are larger than those reported and declare the result irrelevant but not offer the corrected error treatment.



it would be more likely that the galaxies would be on the near side of the clusters, because galaxies on the near side are not as much obstructed from our point of view by other cluster structures and dust as those on the far side, potentially leading to a selection effect that prefers galaxies on the near side.

The part in bold is completely inapplicable to this situation. These are brightest cluster galaxies - obstruction is not an issue!

This has nothing to do with the luminosity of the SBF galaxy. They have to be confident that the measured fluctuation power really comes from the SBF galaxy and not from the intervening structures. If the (well visible) SBF galaxy is behind a foreground galaxy or dust that can't be reliably removed, they have to reject the SBF galaxy and choose another. This creates a potential selection effect favoring the near side SBF candidate galaxies because the far side SBF candidate galaxies are more likely to have their faces partially covered by one of the galaxies in the cluster, contaminating the SBF fluctuation spectrum.



The quantitative evidence comes from the Jarque-Bera normality test (done with R 2.5.0 & fBasics statistics package) run on the sample of six H0 values. It gives the following results:

LM p-value: 0.946
ALM p-value: 0.723
Asymptotic: 0.919

This is in very good agreement with the distribution expected if the sample was indeed random and had no outliers. Therefore we can be confident that the values do not come from a mixture distribution (true and a biased one). This leaves the possibility that the parameter µ is biased by a constant systematic error that equally affects all H0 values in the sample. But this is unlikely as I qualitatively argued above. Moreover, such an error would not be lessened or even detected by increasing the sample size (because increasing the sample size does not help to reduce the systematic error when that systematic error is constant), thus refuting dgruss23's insufficient sample size argument.

Really?

Yes.


All systematic errors are distance independent?

No, but I addressed just such a possibility before the bolded part.


Have you actually compared the SBF distances with other distance estimates? We'll go in order of increasing SBF distance:

NGC 4373 (ESO 322-6) - a member of the Centaurus cluster. The HKP finds 36.3 Mpc for this galaxy. Tonry et al find a SBF distance to the nearby galaxy ESO 322-8 of 36.3 Mpc - in exact agreement. Newman et al (1999) find a Cepheid distance to NGC 4603 in Centaurus of 33.3 Mpc. Tully&Pierce (2000) find an I-band TFR distance of 38.9 Mpc. So the HKP SBF distance is in excellent agreement with the other estimates.

Next out are the neighbor galaxies NGC 5193 and IC 4296 for which the HKP find distances of 51.5 and 55.5 Mpc respectively. These galaxies are members of clusters which would be part of a larger structure of clusters encompassing Abell 3574. Abell 3574 has a Fundamental plane distance of 51.6 Mpc and using the K-band TFR a distance of 57.5 Mpc is found. Again the SBF distance is in good agreement with the other methods.

Next up is NGC 7014, with a SBF distance of 67.3 Mpc. The K-band TFR for 4 neighbors of NGC 7014 gives a distance of 58.3 Mpc. The NGC 7014 distance modulus is larger by +0.31 mag.

NGC 708 in Abell 262 has a SBF distance of 68.2 Mpc. Tully&Pierce (2000) find an I-band TFR distance of 58.3 Mpc to the Pisces filament to which A262 belongs. The NGC 708 SBF distance modulus is larger by +0.34 mag.

Finally there is the Coma cluster galaxy NGC 4881 for which the SBF distance is 102.3 Mpc. This compares with 83.6 Mpc for the Tully&Pierce I-band TFR and 85.8 Mpc for the HKP Fundamental Plane distance. The SBF distance modulus is larger by +0.37 to +0.44 mag.

This suggests a possible systematic difference that increases with distance. For the galaxies with SBF distances less than 60 Mpc the SBF is in agreement with the other distance methods. For the two galaxies at ~68 Mpc, the SBF distance moduli are greater by +0.31 to +0.34 mag. For the Coma cluster the SBF distance modulus is greater by +0.37 to +0.44 mag.

Do I have an explanation for this? No.

I have. What kind of person gives distance estimates and argues for systematic errors, but does not give the errors in those distance estimates? Well, let's be courteous and not characterize such an individual. No reference, distance errors or names for these "4 neighbors of NGC 7014" were given so I have no choice but to dismiss the given 58.3 Mpc figure until the missing info is given.

Tully & Pierce (2000) give 60.3 +/- 2 Mpc for Pisces Filament vs. 68.2 +/- 6.7 Mpc for NGC 708 in A262 from Freedman. Tully & Pierce (2000) give 86.3 +/- 4 Mpc for Coma vs. 102.3 +/- 24.8 Mpc for NGC 4881 in Coma from Freedman. Subtracting these and adding the errors in quadrature gives differences of +16 +/- 25.1 Mpc for NGC 4881/Coma and +7.9 +/- 7 Mpc for NGC 708/A262/Pisces.

As can be seen, neither of these differences is statistically significant. Together they give a chi-square of 1.68 for two degrees of freedom and p=0.43. There is no statistically significant difference between Tully & Pierce TFR distance estimates and those from Freedman's SBF sample even when considering only the largest claimed differences. Including the targets that show even less difference in the analysis, the p-value goes even higher. We can conclude that there is still no evidence of systematic errors, just like Jarque-Bera told us.
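For readers following along, the quoted figures can be reproduced in a few lines, using only the distances and errors given in this exchange:

from math import hypot
from scipy.stats import chi2

# (SBF distance, error) vs. (TFR distance, error), in Mpc, as quoted above.
pairs = [
    (102.3, 24.8, 86.3, 4.0),  # NGC 4881 (Coma) vs. Tully & Pierce Coma
    (68.2, 6.7, 60.3, 2.0),    # NGC 708 (A262) vs. Tully & Pierce Pisces
]

chisq = 0.0
for d1, e1, d2, e2 in pairs:
    diff, err = d1 - d2, hypot(e1, e2)  # errors added in quadrature
    print(f"difference = {diff:+5.1f} +/- {err:.1f} Mpc")
    chisq += (diff / err) ** 2

dof = len(pairs)
print(f"chi-square = {chisq:.2f} for {dof} dof, p = {chi2.sf(chisq, dof):.2f}")
# -> +16.0 +/- 25.1 and +7.9 +/- 7.0; chi-square 1.68, p = 0.43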

dgruss23
2007-Aug-22, 11:02 PM
So dgruss23, you have twice dodged this question. Your answer?

These questions?


Originally Posted by Zahl
It seems unbearably difficult for you to comprehend that the velocities given in the Vflow column in Freedman's Table 10 are valid for both galaxies and their host clusters.

Originally Posted by Zahl
In mainstream cosmology clusters are gravitationally bound and the expansion of the universe is stopped within them, resulting in the same expansion velocity (in kilometers per second) for all gravitationally bound objects inside them after peculiar velocities have been corrected for. If the galaxies had different expansion velocities, they would be expanding away from each other and the cluster would fly apart.

So you either accept the above or you are proposing some ATM theory. Which is it?

I've spent considerable time showing why you're wrong. These questions are rooted in profound misunderstandings on your part as to what the HKP did. You have refused to respond to key points in my explanations throughout this discussion.

parejkoj (http://www.bautforum.com/1051832-post102.html) tried to explain your error to you as well and again you failed to grasp the point.

The whole basis for you asking this question is rooted in your own misunderstanding. You claimed that the HKP used the individual redshifts of the individual SBF galaxies to calculate H0 and that the SBF galaxy did not need to represent the mean cluster distance:



When finding h0 with the Surface Brightness Fluctuations method a distance is determined to a galaxy (not cluster) by measuring SBF in that galaxy and finding a Cepheid in that galaxy for calibration. Redshift is then found and h0 calculated. There is no need for the SBF galaxy to be representative of the mean cluster distance.

There are several facts about the process that you have not understood:

1. The HKP used the mean redshift of the cluster members to calculate H0, not the individual redshift of the galaxy with a SBF distance.
2. Part of the reason that they did this was that they selected the brightest cluster galaxies which might reasonably be assumed to be near the cluster center.
3. Galaxies within clusters have a depth effect (not all galaxies in a cluster are at the same distance from the Milky Way) and therefore H0 should not be calculated from the distance to an individual galaxy, but rather from the mean cluster distance. The reason the HKP was willing to use a single galaxy to calculate H0 was that it was a BCG.

More recently we can add the following facts that you have a misunderstanding about:

4. Peculiar motions of individual cluster members are corrected for by taking a mean cluster redshift.
5. The Flow model used by the HKP corrects for large scale flows outside the cluster's internal dynamics not intracluster peculiar motions.

Those are five very profound misunderstandings that you have either persisted with, dropped, or eventually come around on without acknowledging that you were initially incorrect.

So I'll break apart your "questions" and explain where you are right, where you are mistaken, and where you are not specific enough:


It seems unbearably difficult for you to comprehend that the velocities given in the Vflow column in Freedman's Table 10 are valid for both galaxies and their host clusters.


You are incorrect here. The Vflow column is valid as the mean cluster redshift for the mean cluster distance, not the individual galaxies at their individual distances. I'm not sure why you cannot understand this. It connects to your failure to acknowledge misunderstanding #3 above.


In mainstream cosmology clusters are gravitationally bound and the expansion of the universe is stopped within them,

You are correct on this - gravitational dynamics and peculiar motions should dominate over the Hubble flow in a cluster.



resulting in the same expansion velocity (in kilometers per second) for all gravitationally bound objects inside them after peculiar velocities have been corrected for.

You are not specific enough here. Yes, all galaxies within a cluster should have the same cosmological redshift, but they're not all at the exact same distance due to the depth effect. Since you have been advocating that you can calculate H0 from a single galaxy within a cluster, it is not clear to me that you understand the potential influence of the depth effect on the H0 calculation from an individual galaxy within a cluster.

The Hubble relation is linear. For the Coma cluster the individual galaxy distances calculated from the Tully-Fisher relation range from ~70 Mpc to ~100 Mpc. The HKP adopted 7143 km s-1 for the Coma cluster. If I only pick one galaxy in the cluster to calculate H0, then if it is the 70 Mpc galaxy I get H0=102.0. If my single galaxy happens to be the galaxy at 100 Mpc I get H0=71.4.

Again, that is why we take multiple galaxies to get both the redshift and the distance for the cluster - and that is why it is so important to note that the HKP used the BCG's. They would not have just selected some galaxy at random. They had to pick a galaxy they could be pretty sure was very close to the cluster center.

Depending upon how much of the above you actually understood - which, from what I can tell from your various statements, was not much before we started this discussion - and on what you actually meant by your statement above, that statement could be correct or incorrect.


If the galaxies had different expansion velocities, they would be expanding away from each other and the cluster would fly apart.

This is correct. That's why the peculiar motions within the cluster suggest the presence of DM.



I am not going to debate semantics with you

Zahl, either you have not understood my explanation as to the difference between peculiar motions and the flow model, or you're dodging admitting you made an error here. I don't know which it is, but neither possibility is very good for you.

The difference between correcting the mean cluster redshift for a flow model and correcting for peculiar motions is not "semantics". It is a genuine, important difference. Peculiar motions are "corrected" for by finding the mean cluster redshift. That mean cluster redshift is computed from the raw heliocentric redshifts. The heliocentric redshift is corrected for important influences external to the cluster using the flow model. The galaxies within the cluster still have peculiar motions relative to the cluster mean regardless of what flow model is adopted. Most researchers adopt the CMB reference frame for their corrected redshift - not the flow model of the HKP.



as they have no relevance to your claim that the HKP SBF results are subject to significant systematics that render them irrelevant, a notion for which you have so far offered no evidence. This is the only thing that interests me in this exchange. I just note that there is nothing wrong with phrases like "flow corrected velocities for the galaxies" as evidenced by the fact that Freedman herself used such in the final HKP paper: "The galaxy velocities have been corrected for the flow-field model described above."

There is nothing wrong with the phrase, just the way you've used it.



So (according to you) they were not able to capture all the systematic errors. Therefore (again according to you) the formula and the systematic errors given in the paper are wrong. What are the correct systematic errors then and how are they derived? You can't expect to be taken seriously if you argue that the real systematic errors are larger than those reported and declare the result irrelevant but not offer the corrected error treatment.

What I pointed out (and provided examples of) is that different studies take different approaches and sometimes the systematic errors for the different approaches don't overlap. One of the possibilities is a systematic difference between two different methods. You can look for a systematic difference in the global H0 result. In that case the SBF method result agrees with most other results. You can also look at the specific distances and compare those with the distances derived from other methods to see if any systematic differences exist.

I pointed to some other distance determinations and noted that - for the small sample of only 6 SBF galaxies - the SBF agrees with the other methods for distances <60 Mpc, but when the SBF distance is above 60 Mpc, the SBF distance is larger than the other distance estimates, with a difference that possibly increases with distance.

And as was my point from the very beginning of our disagreement. We cannot be sure whether or not such a systematic effect is real because we're only working with 6 SBF galaxies. You need a larger sample to pin down whether or not this possible systematic effect is a real systematic effect. And if larger samples ultimately show that it is a real effect, then you have to go back and refigure the value of H0 from the SBF method - or possibly the other methods - because you'll have to identify a reason for the systematic difference and correct for it.

But you cannot do that with only 6 galaxies and be certain about what you're doing. You need a larger sample - which has been one of my themes throughout our discussion.


This has nothing to do with the luminosity of the SBF galaxy. They have to be confident that the measured fluctuation power really comes from the SBF galaxy and not from the intervening structures. If the (well visible) SBF galaxy is behind a foreground galaxy or dust that can't be reliably removed, they have to reject the SBF galaxy and choose another.

But that's not what happened - they used 6 brightest cluster galaxies. At no point did they reject the brightest cluster galaxy and pick a fainter galaxy in the same cluster.

And actually if you go back further into this - you'll see they adopted four of their SBF galaxies from the Lauer et al (http://adsabs.harvard.edu/abs/1998ApJ...499..577L) study. Lauer et al didn't actually use those 4 galaxies to find H0, they used those 4 galaxies as zero point calibrators for the Hubble diagram of 114 brightest cluster galaxies out to redshifts of ~15,000 km s-1.



This creates a potential selection effect favoring the near side SBF candidate galaxies because the far side SBF candidate galaxies are more likely to have their faces partially covered by one of the galaxies in the cluster, contaminating the SBF fluctuation spectrum.

Two things - First, that potential selection effect never actually happened. They adopted the data for the 4 Lauer et al galaxies and added two more of their own. Second, you're talking about multiple sight lines. You could - following your proposal - select a galaxy, find it is too obstructed, and then the next galaxy you select is actually suitably unobstructed but deeper in the cluster. Obstruction can be very patchy.



What kind of person gives distance estimates and argues for systematic errors, but does not give the errors in those distance estimates? Well, let's be courteous and not characterize such an individual. No reference, distance errors or names for these "4 neighbors of NGC 7014" were given so I have no choice but to dismiss the given 58.3 Mpc figure until the missing info is given.

I'm sorry, but the K-band data is from a paper currently under review. Some people put their papers on astro-ph before they're accepted. I try to wait until it is the final accepted version.


Tully & Pierce (2000) give 60.3 +/- 2 Mpc for Pisces Filament vs. 68.2 +/- 6.7 Mpc for NGC 708 in A262 from Freedman. Tully & Pierce (2000) give 86.3 +/- 4 Mpc for Coma vs. 102.3 +/- 24.8 Mpc for NGC 4881 in Coma from Freedman. Subtracting these and adding the errors in quadrature gives differences of +16 +/- 25.1 Mpc for NGC 4881/Coma and +7.9 +/- 7 Mpc for NGC 708/A262/Pisces.

As can be seen, neither of these differences is statistically significant. Together they give a chi-square of 1.68 for two degrees of freedom and p=0.43. There is no statistically significant difference between Tully & Pierce TFR distance estimates and those from Freedman's SBF sample even when considering only the largest claimed differences.
Including the targets that show even less difference in the analysis, the p-value goes even higher. We can conclude that there is still no evidence of systematic errors, just like Jarque-Bera told us.

Of course it's not statistically significant. The sample is too small. If a systematic offset persists for a larger sample - which we don't have available because only 6 galaxies were used - then you have something that becomes statistically significant. And if it persists for a larger sample you cannot just hide behind the errors overlapping.

And of course if you want to ignore a possible trend in which the difference increases with increasing distance, you can add in the lower distance galaxies where no difference shows up and really hide a real possibility for a systematic effect.

But I don't suppose you have any interest in a larger sample. Your position has been that none is needed - we can be very confident in an H0 value derived from 6 galaxies. Adding more galaxies won't change the result. Let's just do a chi-square test with 2 galaxies and call it a proof that no systematic errors would exist if we had a larger sample. I guess it's hard to imagine why scientists are always seeking more data points and larger samples!

Zahl
2007-Aug-26, 12:41 PM
Let's summarize your claims in your most recent post:

You claim that "The Vflow column is valid as the mean cluster redshift for the mean cluster distance, not the individual galaxies at their individual distances." You agree with the statements "In mainstream cosmology clusters are gravitationally bound and the expansion of the universe is stopped within them" and "If the galaxies had different expansion velocities, they would be expanding away from each other and the cluster would fly apart."

You claim that the statement "resulting in the same expansion velocity (in kilometers per second) for all gravitationally bound objects inside them [clusters] after peculiar velocities have been corrected for." is not specific enough and start to ramble about distances and H0 calculation, but admit that "Yes, all galaxies within a cluster should have the same cosmological redshift".

Now, the cosmological redshift that you correctly say is the same for all galaxies within a cluster is caused by the expansion of space and is identified by the letter z in the well known formula d*H0 = c*z. Since it is caused by the expansion of space, the cosmological redshift is obtained after peculiar velocities of galaxies and perturbations caused by mass concentrations outside the cluster have been corrected for. What we have then is the cosmological redshift that can be converted to velocity by multiplying it with c.

But wait. This is the same velocity as what is given in the Vflow column in Freedman's Table 10! This is evidenced by the fact that the Vflow velocities have been corrected for peculiar velocities of galaxies and perturbations caused by mass concentrations outside the cluster and are actually used in the final calculation of H0 estimates in table 10, e.g., Vflow = 7441 +/- 300 km/s and D = 102.3 +/- 24.8 Mpc gives H0 = 72.7 +/- 18.7 km/s/Mpc. Thus we can see dgruss23's error - no wonder that he didn't describe how the determinations of cosmological redshifts and Vflow velocities differ because they don't. Both are the same for all gravitationally bound galaxies within a cluster. And it could not be any other way, because otherwise the galaxies would be flying away from each other and the cluster would come apart just as dgruss23 admitted.
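As a side note on the arithmetic, the quoted H0 error follows to first order from adding the relative errors of V and D in quadrature. A sketch (the small gap from the quoted +/-18.7 presumably reflects rounding or a slightly different error recipe in the paper):

from math import sqrt

V, dV = 7441.0, 300.0  # km/s (Vflow for NGC 4881, as quoted above)
D, dD = 102.3, 24.8    # Mpc  (SBF distance, as quoted above)

H0 = V / D
dH0 = H0 * sqrt((dV / V) ** 2 + (dD / D) ** 2)  # first-order propagation
print(f"H0 = {H0:.1f} +/- {dH0:.1f} km/s/Mpc")  # -> 72.7 +/- 17.9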

Now let's look at what dgruss23 had to say about distances: "Yes, all galaxies within a cluster should have the same cosmological redshift, but they're not all at the exact same distance due to the depth effect. Since you have been advocating that you can calculate H0 from a single galaxy within a cluster, it is not clear to me that you understand the potential influence of the depth effect on the H0 calculation from an individual galaxy within a cluster.

The Hubble relation is linear. For the Coma cluster the individual galaxy distances calculated from the Tully-Fisher relation range from ~70 Mpc to ~100 Mpc. The HKP adopted 7143 km s-1 for the Coma cluster. If I only pick one galaxy in the cluster to calculate H0, then if it is the 70 Mpc galaxy I get H0=102.0. If my single galaxy happens to be the galaxy at 100 Mpc I get H0=71.4.

Again, that is why we take multiple galaxies to get both the redshift and the distance for the cluster"

Again you are giving distance estimates without their errors in the usual crackpot fashion. I really don't understand why you repeatedly mislead BAUT readers like this. If you had ever taken a freshman physics laboratory course you would have been taught that just giving a result without its error is useless. It should read in the forum rules that you are not allowed to compare different results without giving their errors to prevent the kind of atrocity dgruss23 is committing above. I won't do dgruss23's work for him (again!), but I just note that when the errors are given, the above estimates are consistent with each other and the HKP value for the Hubble constant. The Tully-Fisher method has very large error bars for single galaxies.


Again, that is why we take multiple galaxies to get both the redshift and the distance for the cluster - and that is why it is so important to note that the HKP used the BCG's

They used Brightest Cluster Galaxies for SBF because these galaxies have narrow photometric and color distributions and are the brightest galaxies in their respective clusters, giving the best possible signal to noise ratio and the farthest reach possible with the SBF method, not because BCGs are near the cluster centers. As I demonstrated in post #111 with step-by-step calculations, it does not matter where the SBF galaxies are located in their respective clusters. I reproduce the post here:

"All gravitationally bound objects within clusters are not at the same distance from us and the global value of H0 is constant, but H0 from a single bound target galaxy in a cluster is not global even if there are no measurement errors. There will be intrinsic scatter in such values following straight from the fact that the expansion of the universe is stopped within clusters. However, a few independent measurements beat down this scatter (random error can be reduced by increasing the sample size) unless there are systematic errors. Besides, such scatter is small because as big as clusters are, they are still pretty small in the grand scheme of things and the universe wouldn't expand much over a cluster size slab of space anyway. A galaxy located somewhere near the center, having an expansion redshift of (say) 7000 km/s at 100 Mpc from us would give an estimate of 7000/100=70 km/s/Mpc for H0. If that galaxy was located on the outskirts of the cluster in the radial direction, at a distance of 5 Mpc (about the characteristic radius of clusters) from the core, the intrinsic error in the H0 determination would be only about 3 km/s/Mpc (7000/105=66.7 km/s/Mpc -> 70-66.7=3.3 km/s/Mpc) compared with the case where the galaxies were two free streaming galaxies in the Hubble flow separated by that same 5 Mpc (7000/100=70 km/s/Mpc & 7350/105=70 km/s/Mpc). This is why it does not matter much where the target galaxy is located in the cluster - a few independent measurements of individual galaxies in different clusters beat down the random error and the error would not amount to much anyway."



Originally Posted by Zahl

So (according to you) they were not able to capture all the systematic errors. Therefore (again according to you) the formula and the systematic errors given in the paper are wrong. What are the correct systematic errors then and how are they derived? You can't expect to be taken seriously if you argue that the real systematic errors are larger than those reported and declare the result irrelevant but not offer the corrected error treatment.

Originally Posted by dgruss23

[word salad snipped]


We have no choice but to accept the error treatment given by Ferrarese as you are unable to provide for a replacement and too incompetent to even try.


I pointed to some other distance determinations and noted that - for the small sample of only 6 SBF galaxies - the SBF agrees with the other methods for distances <60 Mpc, but when the SBF distance is above 60 Mpc, the SBF distance is larger than the other distance estimates, with a difference that possibly increases with distance.

Let's make one point absolutely clear - the HKP SBF determinations agree with the (given by you) TFR determinations for all distances. You just naively eye-balled the figures without comprehending that the error bars become larger with increasing distance, leaving no evidence of any difference between the two data sets as I demonstrated in my previous post.



Originally Posted by Zahl

This has nothing to do with the luminosity of the SBF galaxy. They have to be confident that the measured fluctuation power really comes from the SBF galaxy and not from the intervening structures. If the (well visible) SBF galaxy is behind a foreground galaxy or dust that can't be reliably removed, they have to reject the SBF galaxy and choose another.

Originally Posted by dgruss23

But that's not what happened - they used 6 brightest cluster galaxies. At no point did they reject the brightest cluster galaxy and pick a fainter galaxy in the same cluster.

You don't know if it happened or not as it is not known how many Brightest Cluster Galaxies were on top of the list, but had to be rejected due to contamination. If this is what happened, they would ditch that cluster because it does not have other targets that could give them a signal to noise ratio as high as a clean BCG can. They have to go for another cluster. I am not saying that this is what actually happened (it probably didn't), but at least I have described a mechanism that can plausibly result in the near side galaxies being preferentially targeted for SBF. You have still not given any reason why far side galaxies would be preferentially targeted for SBF. But as my above calculations demonstrate, it would not matter anyway.



Originally Posted by Zahl

Tully & Pierce (2000) give 60.3 +/- 2 Mpc for Pisces Filament vs. 68.2 +/- 6.7 Mpc for NGC 708 in A262 from Freedman. Tully & Pierce (2000) give 86.3 +/- 4 Mpc for Coma vs. 102.3 +/- 24.8 Mpc for NGC 4881 in Coma from Freedman. Subtracting these and adding the errors in quadrature gives differences of +16 +/- 25.1 Mpc for NGC 4881/Coma and +7.9 +/- 7 Mpc for NGC 708/A262/Pisces.

As can be seen, neither of these differences is statistically significant. Together they give a chi-square of 1.68 for two degrees of freedom and p=0.43. There is no statistically significant difference between Tully & Pierce TFR distance estimates and those from Freedman's SBF sample even when considering only the largest claimed differences. Including the targets that show even less difference in the analysis, the p-value goes even higher. We can conclude that there is still no evidence of systematic errors, just like Jarque-Bera told us.

Originally Posted by dgruss23

Of course it's not statistically significant. The sample is too small. If a systematic offset persists for a larger sample - which we don't have available because only 6 galaxies were used - then you have something that becomes statistically significant. And if it persists for a larger sample you cannot just hide behind the errors overlapping.

Once the proper error treatment is given, there is no "systematic offset" between the two samples, period. It is not difficult to think of ways things could be different. After testing the sample with two quantitative methods we have found that there is no evidence of systematic errors in the SBF sample and the sample size is large enough to keep the random error reasonably small. Therefore we must conclude that the HKP SBF result for H0 is a good estimate and not "irrelevant".

Tim Thompson
2007-Aug-26, 04:58 PM
I put in my 2-cents worth back on page 1, and don't see any particularly good reason to change my mind. The long discussion thus far is indicative of a strange phenomenon I have encountered, and been bemused by quite a bit over the years: People can be surprisingly emotional about cosmology, which seems to me to be about the least deserving topic of emotional impact that I can think of.

The title question reads, Is the value of the Hubble constant locked down?. Personally, I would say "no", simply because "locked down" has a ring of finality to it in my ears, which seems undeserved. Thereafter, the thread is devoted to the possibility that the Hubble constant might be in the mid 80's. Considering all the argument about the value published in Freedman, et al., 2001 (http://adsabs.harvard.edu/abs/2001ApJ...553...47F), I would like to make an observation. Here is a list of H0 values cribbed from their own abstract, where the uncertainties are first random, then systematic (all in km/sec/Mpc):

70 +/-5 +/-6 (surface brightness fluctuations)
71 +/-2 +/-6 (Type Ia supernovae)
71 +/-3 +/-7 (Tully-Fisher relation)
72 +/-9 +/-7 (Type II supernovae)
82 +/-6 +/-9 (fundamental plane)

The final reported value of 72+/-8 is statistically derived from this list of values. In this discussion so far, the final reported value has been treated, I think, with rather more a sense of certainty than it deserves. Just looking at the list, those values span a range from 59 to 97 km/sec/Mpc. Based on this alone, it would be absurd to insist that it was impossible for H0 to reside in the region of the mid 80's. But it would not be absurd to suggest that it would be improbable, and my own judgement that it is improbable seems justified by these, and other data.
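To see roughly how such a combination behaves, one can take a naive inverse-variance average of the five values, with each method's random and systematic errors added in quadrature. This is only an illustrative sketch, not the HKP's actual procedure, which treated the shared Cepheid zero-point systematics more carefully:

# Naive inverse-variance combination of the five HKP method values above.
# NOT the HKP's actual weighting scheme - just an illustration.
values = {
    "SBF":   (70, 5, 6),
    "SN Ia": (71, 2, 6),
    "TF":    (71, 3, 7),
    "SN II": (72, 9, 7),
    "FP":    (82, 6, 9),
}

num = den = 0.0
for h0, rand, syst in values.values():
    w = 1.0 / (rand ** 2 + syst ** 2)  # weight = 1 / sigma^2
    num += w * h0
    den += w

print(f"weighted mean H0 = {num / den:.1f} +/- {1 / den ** 0.5:.1f}")
# -> about 72 +/- 3.7; the published +/-8 is larger mainly because the
# shared (largely Cepheid zero-point) systematics do not average down.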

Furthermore, the list above is prima facie evidence that the value of the Hubble constant is far from "locked down", even if we think we can assign probabilities. Also, the Hubble Key Project is hardly the final word on the matter, although one might think it were, judging from its place in this discussion. Consider Sandage, et al., 2006 (http://adsabs.harvard.edu/abs/2006ApJ...653..843S). This is the final report for their 15 year program using HST observations of Cepheid variables to calibrate the luminosity of type Ia supernovae. They report H0 = 62.3+/-1.3 (random) +/-5.0 (systematic), which certainly overlaps the bottom end of the list above, and overlaps comfortably with the lower part of the final value reported above by the Hubble Key Project. And, as Huchra's list of H0 values (http://cfa-www.harvard.edu/~huchra/hubble/) shows, a value in the mid 80's remains well outside the range of reported values since then (although my earlier statement that this list contained all published values may not be correct).

As I see it, there has been no point made in this discussion to dissuade me from the obvious conclusion that a value in the mid 80's is improbable, but not at all impossible. I see no point in a thread devoted to the idea that we need to read & critically evaluate every single paper published on the Hubble constant, and ourselves decide whether or not the authors have done the right thing.

Zahl
2007-Aug-26, 05:56 PM
As I see it, there has been no point made in this discussion to dissuade me from the obvious conclusion that a value in the mid 80's is improbable, but not at all impossible.

That conclusion was unanimously agreed on by the second page. For the last few pages the discussion has been on the Hubble Key Project SBF result (contaminated by systematic errors or not?) and my objections to dgruss23's habit of comparing distance measurements and claiming there is a "systematic offset" without giving error bars and not doing any error analysis.

TomT
2007-Aug-26, 08:05 PM
I am trying to learn from this discussion, and have a couple questions if that is OK.




Now, the cosmological redshift that you correctly say is the same for all galaxies within a cluster is caused by the expansion of space and is identified by the letter z in the well known formula d*H0 = c*z. Since it is caused by the expansion of space, the cosmological redshift is obtained after peculiar velocities of galaxies and perturbations caused by mass concentrations outside the cluster have been corrected for. What we have then is the cosmological redshift that can be converted to velocity by multiplying it with c.

How does one calculate the galaxy peculiar velocities, and how many galaxies within the cluster should be used to obtain a representative value (average for the cluster) of the cosmological redshift?


But wait. This is the same velocity as what is given in the Vflow column in Freedman's Table 10! This is evidenced by the fact that the Vflow velocities have been corrected for peculiar velocities of galaxies and perturbations caused by mass concentrations outside the cluster and are actually used in the final calculation of H0 estimates in table 10, e.g., Vflow = 7441 +/- 300 km/s and D = 102.3 +/- 24.8 Mpc gives H0 = 72.7 +/- 18.7 km/s/Mpc. Thus we can see dgruss23's error - no wonder that he didn't describe how the determinations of cosmological redshifts and Vflow velocities differ because they don't. Both are the same for all gravitationally bound galaxies within a cluster. And it could not be any other way, because otherwise the galaxies would be flying away from each other and the cluster would come apart just as dgruss23 admitted.

I don't quite follow this. Are you saying the Vflows, calculated by the method you say is correct, give a solution for H0 = 72.7 +/- 18.7? In other words H0 lies between 54.0 and 91.4?

An aside comment: I agree with Tim Thompson's statement that it is surprising that people get emotional over cosmology.

TomT

dgruss23
2007-Aug-27, 12:01 AM
Let's summarize your claims in your most recent post:

You claim that "The Vflow column is valid as the mean cluster redshift for the mean cluster distance, not the individual galaxies at their individual distances." You agree with the statements "In mainstream cosmology clusters are gravitationally bound and the expansion of the universe is stopped within them" and "If the galaxies had different expansion velocities, they would be expanding away from each other and the cluster would fly apart."

You claim that the statement "resulting in the same expansion velocity (in kilometers per second) for all gravitationally bound objects inside them [clusters] after peculiar velocities have been corrected for." is not specific enough and start to ramble about distances and H0 calculation, but admit that "Yes, all galaxies within a cluster should have the same cosmological redshift".

Now, the cosmological redshift that you correctly say is the same for all galaxies within a cluster is caused by the expansion of space and is identified by the letter z in the well known formula d*H0 = c*z. Since it is caused by the expansion of space, the cosmological redshift is obtained after peculiar velocities of galaxies and perturbations caused by mass concentrations outside the cluster have been corrected for. What we have then is the cosmological redshift that can be converted to velocity by multiplying it with c.

But wait. This is the same velocity as what is given in the Vflow column in Freedman's Table 10! This is evidenced by the fact that the Vflow velocities have been corrected for peculiar velocities of galaxies and perturbations caused by mass concentrations outside the cluster and are actually used in the final calculation of H0 estimates in table 10, e.g., Vflow = 7441 +/- 300 km/s and D = 102.3 +/- 24.8 Mpc gives H0 = 72.7 +/- 18.7 km/s/Mpc. Thus we can see dgruss23's error - no wonder that he didn't describe how the determinations of cosmological redshifts and Vflow velocities differ because they don't. Both are the same for all gravitationally bound galaxies within a cluster. And it could not be any other way, because otherwise the galaxies would be flying away from each other and the cluster would come apart just as dgruss23 admitted.

Zahl, I never said that the cosmological redshifts and the Vflow redshifts were different. YOU - yes YOU Zahl, incorrectly stated that the Vflow model corrects for the peculiar motions within the cluster. It does not. You can correct the mean redshift of the cluster using the Vflow model. You can also correct individual galaxies within the cluster using the Vflow model. The cosmological redshift for a cluster is found in two steps. First, the mean redshift of the cluster members is found. This averaging of the individual galaxies' redshifts corrects for the effect of peculiar motions. Second, the Vflow model was applied to correct the mean cluster redshift for motions external to the cluster.

You misunderstood this from the beginning and you continue to refuse to admit your errors. You do not understand the difference between corrections to a heliocentric redshift and correction for peculiar motions within a cluster. And you want people to take seriously anything you say.



Now let's look at what dgruss23 had to say about distances: "Yes, all galaxies within a cluster should have the same cosmological redshift, but they're not all at the exact same distance due to the depth effect. Since you have been advocating that you can calculate H0 from a single galaxy within a cluster, it is not clear to me that you understand the potential influence of the depth effect on the H0 calculation from an individual galaxy within a cluster.

The Hubble relation is linear. For the Coma cluster the individual galaxy distances calculated from the Tully-Fisher relation range from ~70 Mpc to ~100 Mpc. The HKP adopted 7143 km s-1 for the Coma cluster. If I only pick one galaxy in the cluster to calculate H0, then if it is the 70 Mpc galaxy I get H0=102.0. If my single galaxy happens to be the galaxy at 100 Mpc I get H0=71.4.

Again, that is why we take multiple galaxies to get both the redshift and the distance for the cluster"

Yep, that's a very accurate explanation. Too bad you don't get it.


Again you are giving distance estimates without their errors in the usual crackpot fashion. I really don't understand why you repeatedly mislead BAUT readers like this. If you had ever taken a freshman physics laboratory course you would have been taught that just giving a result without its error is useless. It should read in the forum rules that you are not allowed to compare different results without giving their errors to prevent the kind of atrocity dgruss23 is committing above. I won't do dgruss23's work for him (again!), but I just note that when the errors are given, the above estimates are consistent with each other and the HKP value for the Hubble constant. The Tully-Fisher method has very large error bars for single galaxies.

That's it Zahl. I am done reading anything you post. You've been wrong repeatedly and refused to admit it - despite being clearly shown to be wrong. And you've been insulting, rude, obnoxious.... You're on my ignore list - a list exactly one person long. Blather on all you wish. Your repeated inaccuracy and failure to grasp basic concepts and acknowledge them when you're corrected make it an irrelevant exercise to even try to discuss anything with you.

dgruss23
2007-Aug-27, 12:21 AM
I put in my 2-cents worth back on page 1, and don't see any particularly good reason to change my mind. The long discussion thus far is indicative of a strange phenomenon I have encountered, and been bemused by quite a bit over the years: People can be surprisingly emotional about cosmology, which seems to me to be about the least deserving topic of emotional impact that I can think of.

I agree - Zahl's reaction to my suggestion that the SBF and Type II SN samples adopted by the HKP were too small to be compelling has been emotional and knee-jerk. The word I used, to which Zahl has reacted so childishly, was "irrelevant".

Any emotion on my part stems from being repeatedly insulted and forced to repeatedly point out errors on Zahl's part that Zahl refused to acknowledge - not from the cosmology issue itself.



The title question reads, "Is the value of the Hubble constant locked down?" Personally, I would say "no", simply because "locked down" has a ring of finality to it in my ears, which seems undeserved.

Agreed, but the reason I started the thread is that in these discussions people sometimes treat the value as if it is "locked down" - as if the HKP final report provides a written-in-stone answer. StupendousMan pointed out that researchers themselves often do not see it that way.


Thereafter, the thread is devoted to the possibility that the Hubble constant might be in the mid 80's. Considering all the argument about the value published in Freedman, et al., 2001 (http://adsabs.harvard.edu/abs/2001ApJ...553...47F), I would like to make an observation. Here is a list of H0 values cribbed from their own abstract, where the uncertainties are first random, then systematic (all in km/sec/Mpc):

70 +/-5 +/-6 (surface brightness fluctuations)
71 +/-2 +/-6 (Type Ia supernovae)
71 +/-3 +/-7 (Tully-Fisher relation)
72 +/-9 +/-7 (Type II supernovae)
82 +/-6 +/-9 (fundamental plane)

The final reported value of 72+/-8 is statistically derived from this list of values. In this discussion so far, the final reported value has been treated, I think, with rather more a sense of certainty than it deserves. Just looking at the list, those values span a range from 59 to 97 km/sec/Mpc. Based on this alone, it would be absurd to insist that it was impossible for H0 to reside in the region of the mid 80's.

And that was my point for starting the thread.


But it would not be absurd to suggest that it would be improbable, and my own judgement that it is improbable seems justified by these, and other data.

I understand why people would feel that H0 in the mid-80's is improbable, but this is why it is worth looking at the details of the samples utilized. My biggest concern with the SBF and Type II SN samples is the very small sample sizes and the small number of calibrators - which I pointed out in the OP.

Things got so hung up on the SBF that potential problems with the HKP I-band TFR distances were not explored.


Furthermore, the list above is prima facie evidence that the value of the Hubble constant is far from "locked down", even if we think we can assign probabilities. Also, the Hubble Key Project is hardly the final word on the matter, although one might think it were, judging from its place in this discussion. Consider Sandage, et al., 2006 (http://adsabs.harvard.edu/abs/2006ApJ...653..843S). This is the final report for their 15 year program using HST observations of Cepheid variables to calibrate the luminosity of type Ia supernovae. They report H0 = 62.3+/-1.3 (random) +/-5.0 (systematic), which certainly overlaps the bottom end of the list above, and overlaps comfortably with the lower part of the final value reported above by the Hubble Key Project.

They also adopted a steeper slope for the Cepheid P-L relation than the HKP. The van Leeuwen paper I linked to earlier confirms the HKP Cepheid slope over the Sandage team's slope - which means that the Sandage result is suspect.


And, as Huchra's list of H0 values (http://cfa-www.harvard.edu/~huchra/hubble/) shows, a value in the mid 80's remains well outside the range of reported values since then (although my earlier statement that this list contained all published values may not be correct).

As I see it, there has been no point made in this discussion to dissuade me from the obvious conclusion that a value in the mid 80's is improbable, but not at all impossible. I see no point in a thread devoted to the idea that we need to read & critically evaluate every single paper published on the Hubble constant, and ourselves decide whether or not the authors have done the right thing.

I've certainly not advocated that we look at every single Hubble Constant paper - but I see no reason why those that are interested should not critically evaluate papers that they consider most relevant. The focus has remained on the HKP papers.

dgruss23
2007-Aug-27, 12:24 AM
How does one calculate the galaxy peculiar velocities, and how many galaxies within the cluster should be used to obtain a representative value (an average for the cluster) of the cosmological redshift?

For galaxies in a cluster, you can calculate the peculiar velocity relative to the cluster mean where

Vpec = Vobserved - Vmean

Since the mean redshift should be close to the cosmological redshift at the cluster's distance, this should give you a pretty good estimate of the peculiar motion of the galaxy within the cluster - and it will be independent of the cluster's distance in this case too.
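For concreteness, here is a minimal Python sketch of that estimate (the observed velocities below are invented for illustration):

# peculiar velocity relative to the cluster mean: Vpec = Vobserved - Vmean
v_gal = [6820.0, 7105.0, 7090.0, 7480.0, 6950.0, 7385.0]   # observed cz, km/s (assumed values)
v_mean = sum(v_gal) / len(v_gal)                           # mean cluster velocity
v_pec = [v - v_mean for v in v_gal]                        # peculiar velocities, km/s
print(round(v_mean), [round(v) for v in v_pec])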

TomT
2007-Aug-27, 12:28 AM
Now let's look at what dgruss23 had to say about distances: "Yes, all galaxies within a cluster should have the same cosmological redshift, but they're not all at the exact same distance due to the depth effect. Since you have been advocating that you can calculate H0 from a single galaxy within a cluster, it is not clear to me that you understand the potential influence of the depth effect on the H0 calculation from an individual galaxy within a cluster.

The Hubble relation is linear. For the Coma cluster the individual galaxy distances calculated from the Tully-Fisher relation range from ~70 Mpc to ~100 Mpc. The HKP adopted 7143 km s-1 for the Coma cluster. If I only pick one galaxy in the cluster to calculate H0, then if it is the 70 Mpc galaxy I get H0=102.0. If my single galaxy happens to be the galaxy at 100 Mpc I get H0=71.4.

Again, that is why we take multiple galaxies to get both the redshift and the distance for the cluster"

Again you are giving distance estimates without their errors in the usual crackpot fashion.

Zahl, I don't get your statement in the last sentence. Given the example by dgruss23, the variation in H0 is 71.4 to 102.0, or 86.7 +/- 15.3. If you account for the errors in each calculation of the TF distances, there is a +/- associated with each of these which would add to the +/- 15.3 error in H0. So in other words you seem to be arguing that the error in H0 is larger than that given by dgruss23.
TomT

TomT
2007-Aug-27, 12:32 AM
For galaxies in a cluster, you can calculate the peculiar velocity relative to the cluster mean where

Vpec = Vobserved - Vmean

Since the mean redshift should be close to the cosmological redshift at the cluster's distance, this should give you a pretty good estimate of the peculiar motion of the galaxy within the cluster - and it will be independent of the cluster's distance in this case too.

Makes sense to me, but that would mean that one would need redshifts and independent distance calculations for a lot of galaxies in the cluster. It seems that Zahl is arguing that multiple galaxy redshifts and distance calculations are not needed. Am I missing something?
TomT

ToSeek
2007-Aug-27, 12:55 AM
We have no choice but to accept the error treatment given by Ferrarese as you are unable to provide a replacement and too incompetent to even try.

Zahl, if you continue to throw in personal attacks along with your arguments, your posting privileges here will be suspended or terminated. This is an official warning.

ToSeek
BAUT Forum Moderator

matt.o
2007-Aug-27, 01:38 AM
For galaxies in a cluster, you can calculate the peculiar velocity relative to the cluster mean where

Vpec = Vobserved - Vmean

Since the mean redshift should be close to the cosmological redshift at the cluster's distance, this should give you a pretty good estimate of the peculiar motion of the galaxy within the cluster - and it will be independent of the cluster's distance in this case too.

This is correct to first order, however the more conventional approach to determining the peculiar velocity for a galaxy within a cluster comes from the following equation which shows the redshift contributions to the observed redshift are not simply additive:

1+zobs=(1+zcosm)(1+zpec)(1+zhelio)

Where zobs is the redshift of the galaxy of interest, zhelio is the correction to a heliocentric redshift. If it is assumed that zcosm (the cosmological component to the observed redshift) is the mean cluster redshift, <zclus>, (usually a more accurate estimator than the gaussian mean is used), then zpec is:

1+zpec=(1+zobs)/[(1+<zclus>)(1+zhelio)]

From here, depending on the magnitude of the peculiar velocity, you can get the peculiar velocity simply by multiplying by c (for a first order approximation), or you can employ the full special relativistic formula for converting redshift to velocity. If you multiply out the first equation, you will see dgruss23's equation is a first order approximation to the above.
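For concreteness, a minimal Python sketch of that correction (all input redshifts below are invented for illustration, not taken from any paper):

z_obs = 0.0240      # observed redshift of the galaxy (assumed)
z_clus = 0.0231     # mean cluster redshift, standing in for zcosm (assumed)
z_helio = 0.0003    # heliocentric correction term (assumed)

z_pec = (1 + z_obs) / ((1 + z_clus) * (1 + z_helio)) - 1
v_pec = 3.0e5 * z_pec   # km/s, first-order conversion v = c*z
print(round(v_pec))     # peculiar velocity estimate in km/s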

Anyway, that is just an aside which I thought I would point out. Carry on!

TomT
2007-Aug-27, 02:20 AM
This is correct to first order, however the more conventional approach to determining the peculiar velocity for a galaxy within a cluster comes from the following equation which shows the redshift contributions to the observed redshift are not simply additive:

1+zobs=(1+zcosm)(1+zpec)(1+zhelio)

Where zobs is the redshift of the galaxy of interest, zhelio is the correction to a heliocentric redshift. If it is assumed that zcosm (the cosmological component to the observed redshift) is the mean cluster redshift, <zclus>, (usually a more accurate estimator than the gaussian mean is used), then zpec is:

1+zpec=(1+zobs)/[(1+<zclus>)(1+zhelio)]



I understand this, I think, but the discussion is centered on determining H0 from cluster data. So since zobs is the observed redshift value for a given galaxy in the cluster and zhelio is a correction for the motion of the location where the observation is taken from (am I right on this point?), that leaves zclus and zpec. Where do we go from here to calculate H0?
TomT

matt.o
2007-Aug-27, 03:53 AM
I understand this, I think, but the discussion is centered on determining H0 from cluster data.

Yes, my post was a side point for clarification.


So since zobs is the observed redshift value for a given galaxy in the cluster and zhelio is a correction for the motion of the location where the observation is taken from (am I right on this point?), that leaves zclus and zpec. Where do we go from here to calculate H0?
TomT

First of all, you use zobs for a sample of galaxies which lie within the cluster. The distribution of zobs will be roughly gaussian depending on the dynamical state of the cluster. The mean/median etc. value for the sample of galaxies then gives a fairly good approximation of the cosmological contribution to the observed redshift of each galaxy. You then need to determine the distance to a sample of galaxies within the cluster using TFR or SBF or some other method to derive a mean distance to the cluster. In combination with the mean redshift, you can get H0.
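A minimal Python sketch of that recipe (the member velocities and TFR distances below are invented for illustration):

# mean cluster redshift velocity + mean cluster distance -> H0
v_obs = [7050.0, 7210.0, 6890.0, 7320.0, 7145.0]   # cz of cluster members, km/s (assumed)
d_tfr = [92.0, 105.0, 88.0, 110.0, 97.0]           # TFR distances, Mpc (assumed)

v_mean = sum(v_obs) / len(v_obs)   # approximates the cosmological velocity
d_mean = sum(d_tfr) / len(d_tfr)   # mean cluster distance
print(round(v_mean / d_mean, 1))   # H0 in km/s/Mpc (about 72 for these numbers)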

Ari Jokimaki
2007-Aug-27, 06:43 AM
Two redshift based methods have been suggested for determining cosmological redshift of a galaxy cluster. I'd like to explore these a bit further.

1) The method of selecting one galaxy at the center of the cluster and assuming that the galaxy's redshift represents the cosmological redshift of the cluster. In my opinion, this method seems to have at least two flaws:

- There's no way of saying how much peculiar velocity is included in the redshift of that galaxy. The galaxy could be travelling inside the cluster having a velocity towards or away from us. How much that velocity could be is another question, but for example NED says two galaxies are probably not associated if they have radial velocity difference of 1000 km/s. I have understood that even more velocity difference is accepted in cluster conditions. It is possible that radial velocities for galaxies in central regions of clusters might deviate little less than for galaxies in edges of clusters, but I think that if we only check one galaxy in one cluster, there's no way of saying that it's redshift is exactly the cosmological redshift of the cluster. I'd say that assuming for example +- 500 km/s for central galaxy's redshift due to peculiar velocities would not be far fetched, but there's probably statistical studies done on this question. Perhaps if we take hundreds of central galaxies of clusters, then they might be on average be used as cosmological redshifts of the clusters, but I agree with dgruss23 that 6 galaxies doesn't seem to be even nearly enough.

- The assumption that galaxy is at the center of the cluster is not necessarily very good one. When we look at the galaxy clusters, we see a 2D presentation of them. If we see a galaxy being apparently at the center of the cluster, it actually might not be at the center. It might be at the far or at the near edge of the cluster. I don't think we have a way of saying where in the cluster a single galaxy exactly lies, even if it would appear to be at the center. How likely is it that a single galaxy is actually at the center? If we assume that "the center" is within half of the cluster's radius, then only one third of the galaxies that seem to be at the center actually are at the center (one third is at the near edge and one third is at the far edge), so to me it seems that we can't correct this flaw simply by taking large sample. There might be a way to improve the situation though. Perhaps we know that some galaxy types are more likely to be found at the center of the clusters, so determining the galaxy type first might make the situation so much better that we could use this method with large samples. But I don't think that 6 is large enough.

2) The method of taking the mean of redshifts of the cluster galaxies, and then assuming it to represent the cluster's cosmological redshift. This seems to be quite a good method, but its accuracy depends on the number of galaxies in the cluster, and the number of clusters in our sample. I created 10 test clusters simply by assigning random radial velocities between 6500 and 7500 km/s (assuming a cluster radial velocity of 7000 km/s with peculiar velocities of +/-500 km/s) to 50 galaxies in each of the 10 clusters. At worst I got an error of 71 km/s for a single cluster, and the average for the 10 clusters deviated from 7000 km/s by only 2.4 km/s. So this method seems to be a good one. But how many galaxies per cluster are usually available in actual studies? Doing the same random check with 10 galaxies in each cluster resulted in a worst single-cluster deviation of 143 km/s, and even the average of the 10 clusters deviated by 50 km/s (which is actually not so bad).
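A quick Python version of the test just described (uniform +/-500 km/s peculiar velocities around 7000 km/s, as above):

import random

def cluster_mean(n_gal):
    # one cluster: n_gal galaxies with uniform random radial velocities
    return sum(random.uniform(6500.0, 7500.0) for _ in range(n_gal)) / n_gal

means = [cluster_mean(50) for _ in range(10)]   # 10 clusters, 50 galaxies each
worst = max(abs(m - 7000.0) for m in means)     # worst single-cluster error, km/s
overall = abs(sum(means) / 10 - 7000.0)         # error of the 10-cluster average, km/s
print(round(worst, 1), round(overall, 1))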

Well, my pondering above probably contains a lot of misunderstandings and false/too simplistic assumptions, but perhaps some knowledgeable members here are kind enough to correct me. :)

TomT
2007-Aug-27, 02:26 PM
Yes, my post was a side point for clarification.



First of all, you use zobs for a sample of galaxies which lie within the cluster. The distribution of zobs will be roughly gaussian depending on the dynamical state of the cluster. The mean/median etc. value for the sample of galaxies then gives a fairly good approximation of the cosmological contribution to the observed redshift of each galaxy. You then need to determine the distance to a sample of galaxies within the cluster using TFR or SBF or some other method to derive a mean distance to the cluster. In combination with the mean redshift, you can get H0.

Thanks,
This is how I understood one way for doing the calculation, and is how dgruss23 is saying it should be done. However, there is one difference in doing it this way, you don't compute a peculiar velocity, but instead just take multiple readings so hopefully the average peculiar velocity is near zero.

Also, if I understand correctly, this is not how the HKP study did it.

Maybe Zahl is arguing that if you take multiple clusters with only one galaxy per cluster, you could get a sort of averaging that way. In other words one cluster could have a galaxy in the near side and another could have a galaxy on the far side, etc, and it would average out that way. I would think you would need a lot of clusters to get any accuracy doing this. Seems the first way is a lot better.
TomT

dgruss23
2007-Aug-27, 04:39 PM
First of all, you use zobs for a sample of galaxies which lie within the cluster. The distribution of zobs will be roughly gaussian depending on the dynamical state of the cluster. The mean/median etc. value for the sample of galaxies then gives a fairly good approximation of the cosmological contribution to the observed redshift of each galaxy. You then need to determine the distance to a sample of galaxies within the cluster using TFR or SBF or some other method to derive a mean distance to the cluster. In combination with the mean redshift, you can get H0.

Thanks for your input matt.o. That is exactly what I've been describing.

A question about your previous post. Galaxy redshifts are typically corrected for motions within the Local Group, Virgo infall, and/or to the CMB reference frame. The uncorrected observed redshifts are usually called "heliocentric" redshifts in the literature.

So you don't mean the uncorrected observed redshift when you refer to zhelio in the equation - right? You would be referring to a correction made to account for the Earth's orbital motion?

The reason I ask is that the uncorrected heliocentric redshifts published in papers are primarily composed of the cosmological component - at least at larger distances. So to multiply the heliocentric redshift provided in the published papers (1+zhelio) by the cosmological (1+zcosm) is essentially multiplying the cosmological redshift by itself.

dgruss23
2007-Aug-27, 04:57 PM
Thanks,
This is how I understood one way for doing the calculation, and is how dgruss23 is saying it should be done. However, there is one difference in doing it this way, you don't compute a peculiar velocity, but instead just take multiple readings so hopefully the average peculiar velocity is near zero.

Also, if I understand correctly, this is not how the HKP study did it.


That is how the HKP did it for the FP and TFR samples. For the SBF survey they used a single galaxy - the brightest cluster galaxy. As I explained to Zahl numerous times, using the BCG is in theory not unreasonable because BCGs can reasonably be expected to be close to the center - but I think you need a larger sample - not only for determination of H0, but for comparison with other distance methods to the same clusters.

You can't reliably determine whether or not a systematic difference exists between the SBF and other methods if you are only comparing a handful of distance estimates. The Tully-Fisher relation for example has dozens of calibrator galaxies and dozens of galaxy clusters with hundreds of galaxies in the samples for different studies.

People like to claim that the TFR is less reliable because of larger intrinsic scatter. But this is not as much of an issue anymore because research has identified some of those sources of error - slower rotators (<120 km s-1 rotational velocity) have significantly larger scatter than faster rotators. Restrictions can be made regarding inclination, Hubble type, rotation curve or hydrogen profile structure and other factors. These types of pruning can significantly reduce the observed TFR scatter.


Maybe Zahl is arguing that if you take multiple clusters with only one galaxy per cluster, you could get a sort of averaging that way. In other words one cluster could have a galaxy in the near side and another could have a galaxy on the far side, etc, and it would average out that way. I would think you would need a lot of clusters to get any accuracy doing this. Seems the first way is a lot better.
TomT

That has been my point about the HKP SBF sample - only 6 galaxies. I'm not particularly comfortable with H0 determined from such a small sample of galaxies.

And the Type II SN sample is even worse - 4 galaxies to determine H0 and 3 cepheid calibrators to determine the zero point.

As was my point in the OP, having a handful of galaxies for use with these methods is not very compelling compared to the much larger samples available for the TFR.

Jerry
2007-Aug-27, 07:43 PM
Furthermore, the list above is prima facie evidence that the value of the Hubble constant is far from "locked down", even if we think we can assign probabilities. Also, the Hubble Key Project is hardly the final word on the matter, although one might think it were, judging from its place in this discussion. Consider Sandage, et al., 2006 (http://adsabs.harvard.edu/abs/2006ApJ...653..843S). This is the final report for their 15 year program using HST observations of Cepheid variables to calibrate the luminosity of type Ia supernovae. They report H0 = 62.3+/-1.3 (random) +/-5.0 (systematic), which certainly overlaps the bottom end of the list above, and overlaps comfortably with the lower part of the final value reported above by the Hubble Key Project.
Interesting.

In this later paper,

http://arxiv.org/abs/astro-ph/0608677

Sandage used a different baseline to calculate an independent estimate:


If we assign a generous range of systematic error of ~ 0.3 mag, the distance D = 20.9 Mpc (m - M = 31.60) has a range from 24.0 Mpc to 18.2 Mpc (m - M between 31.9 and 31.3), and a Hubble constant of Ho = 56 between the limits of 49 and 65 when used with a cosmic expansion velocity of 1175 km s-1 determined by the method of distance ratios of remote clusters to Virgo.

In 2001, Sandage was arguing in favor of an Ho value of 60.

http://arxiv.org/PS_cache/astro-ph/pdf/0112/0112489v1.pdf

And in 2000, He argued for an Ho of ~58:

http://arxiv.org/PS_cache/astro-ph/pdf/0010/0010422v1.pdf
http://arxiv.org/PS_cache/astro-ph/pdf/0004/0004063v1.pdf

In 1998, Sandage published an Ho ~ 56
http://arxiv.org/PS_cache/astro-ph/pdf/9904/9904360v1.pdf

…and in 1996, Sandage’s Ho was ~55

http://arxiv.org/PS_cache/astro-ph/pdf/9611/9611170v1.pdf




The status of the determination of the Hubble constant is reviewed, setting
out the evidence for the long distance scale with H0 = 55 ± 5… There is no valid evidence for H0 > 70.

But if current trends continue and he lives long enough, the Sandage estimate in the year 2026 will be Ho~82+/-0.07:)

Seriously, Sandage's estimates have always been on the low end, and Ho has proven to be a tough number to pin down. Something is wrong.

jimmarsen
2007-Aug-27, 11:25 PM
Does this complicate the issue?


Anisotropy in the Hubble constant as observed in the HST Extragalactic Distance Scale Key Project results (http://arxiv.org/abs/astro-ph/0703556v1)

Jerry
2007-Aug-28, 08:09 PM
It certainly leaves the zero point - that is, the depth at which Hubble flow becomes significant - more ambiguous.

Meanwhile, more problems at the bottom of the distance ladder:

http://arxiv.org/PS_cache/arxiv/pdf/0708/0708.3382v1.pdf


The original Hipparcos parallaxes led de Zeeuw et al. to conclude that Cr 121 and the surrounding association of OB stars form a relatively compact and coherent moving group at a distance of 550 – 600 pc. Our corrected parallaxes reveal a different spatial distribution of young stellar populace in this area. Both the cluster Cr 121 and the extended OB association are considerably more distant (750 – 1000 pc), and the latter has a large depth probably extending beyond 1 kpc. Therefore, not only are the recalculated parallaxes in complete agreement with the photometric uvbyβ parallaxes, but the structure of the field they reveal is no longer in discrepancy with that found by the photometric method.

It is amazing that the Hipparcos measurement of the distance to something this close could be off by ~80%. Could it? What a mess!

StupendousMan
2007-Aug-29, 01:32 AM
It is amazing that the Hipparcos measurement of the distance to something this close could be off by ~80%. Could it? What a mess!

No, it's not amazing at all.

The typical precision of the parallax measurement for a single star made by Hipparcos is somewhere around 0.001 arcsec. As pointed out by Lutz and Kelker and others long ago, that means that the effective range of DISTANCE measurements made by Hipparcos is only 100-200 parsecs; by "effective range" I mean "the range at which the uncertainty in the distance becomes more than 20 percent or so." Moreover, the uncertainty in DISTANCE does not follow a normal distribution. For details, you might read

http://stupendous.rit.edu/richmond/answers/parallax.txt

which, as you will note, is over 10 years old.
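To put rough numbers on that limit, here is a minimal Python sketch of the first-order parallax error (the sample distances are chosen purely for illustration):

sigma_p = 0.001    # typical Hipparcos parallax uncertainty, arcsec

for d_pc in (100, 200, 600):      # distances in parsecs (illustrative)
    p = 1.0 / d_pc                # parallax in arcsec
    print(d_pc, "pc:", round(100 * sigma_p / p), "% distance error")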

The association Jerry mentions is far beyond this limit. There was and is no reason to believe that measurements made by Hipparcos would be very accurate. The authors of the original study of the distance to Cr 121,

Open clusters with Hipparcos. I. Mean astrometric parameters (http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?bibcode=1999A%26A...345..471R&db_key=AST&page_ind=11&plate_select=NO&data_type=GIF&type=SCREEN_GIF&classic=YES&high=44c573e6c313465)

were well aware of this fact. They state in their paper that the Hipparcos-based distances to this (and other, distant clusters in their sample) were likely to have errors of around 40%; they recommended that these estimates be used only for statistical purposes, and that others use the photometry-based distances to this cluster instead.

I don't know how much longer I'll bother to respond to Jerry's frequent claims of "Oh my goodness! Astronomers are wrong again! We'll have to re-evaluate all of cosmology!" In some cases, the claims are easy to dismiss if one reads the primary literature carefully.

folkhemmet
2007-Aug-29, 04:36 AM
StupendousMan said: "I don't know how much longer I'll bother to respond to Jerry's frequent claims of "Oh my goodness! Astronomers are wrong again! We'll have to re-evaluate all of cosmology!" In some cases, the claims are easy to dismiss if one reads the primary literature carefully."

I am with you on this one, StupendousMan; these rants are tiresome. Inherent to the majority of Jerry's posts is a mocking, "I am superior" tone bordering on hostility toward astronomers as well as the very practice of astronomy. It's dismaying how Jerry and his boys claim to have a monopoly on objectivity by consistently acting as if they are the only ones who are capable of applying the scientific method correctly! For example, Jerry refuses to accept the view that if the current cosmological model is gravely wrong, then professional cosmologists will eventually use the scientific method to determine its invalidity.

Yes, the Hubble constant is a hard number to pin down, but one who says that considerable progress has not been made in narrowing down the range of possible values for Ho is a poor historian suffering from the pitfalls of nay-saying. There used to be factor-of-10 disagreements over the value of Ho, and now people are arguing over whether it's closer to 60 or closer to 80-something. Based on all of the available data and a string of careful analyses one can be very sure that the Hubble constant is somewhere between 60 and 80. Of course there are going to be disagreements over the exact value of any number in science, but one should not use this as an excuse to ignore/bash the hard work, careful analyses, and craftsmanship that have contributed to our knowledge of the Universe. Beyond a reasonable doubt we are making significant strides in our quest to know our origins.

Jerry
2007-Aug-29, 06:44 PM
http://arxiv.org/PS_cache/arxiv/pdf/...708.3382v1.pdf


Statistically significant discrepancies between the Hipparcos trigonometric and traditional photometric, spectroscopic and interferometric results have been reported in the literature for selected small-scale fields, most notably for the Pleiades open cluster...

Our result implies that the problem of inaccurate mean parallaxes in Hipparcos affects more regions, and of larger angular area, than just a few small patches occupied by dense open clusters.

Not my statements and conclusions: theirs. Statistically significant means that even when error bars are included, there is a discrepancy that is outside of the known systematic limits. They go on to say they can correct this:


This is not an irreversible situation, because the method of astrometric solution of the available Hipparcos data used in this paper proves once again successful in correcting this error, despite its limitations.

But the corrections increase the estimate of the distance to nearby clusters by a factor of almost two. This is very significant if the Hipparcos distance to the Pleiades is influenced by this correction, because the Pleiades is part of the distance ladder used to calibrate Cepheid magnitudes, and ultimately the value of Ho.

From your reference:

The estimation of the formal error of the mean parallax based upon distant clusters seems statistically realistic. There is then no reason to suspect the presence of a problem in closer clusters, because the error in the parallax is independent from the parallax itself. Concerning the Pleiades, this suggests that the formal parallax error has been correctly estimated ...PSSKH cautioned the users of Hipparcos data for the stars with high [greek]. This section shows that, on the contrary, no bias on the parallax can be attributed to [greek]; neither on large scale nor on small scale.

I'm not even close to an astrometrist, but I can read plain English: there are genuine differences of opinion concerning both the bottom steps of the distance-scaling ladder and the value of Hubble's constant. Even cursory reading shows some methodologies favor a value near 62; others place Ho closer to 78; and although this difference is less than it was twenty years ago, there is genuine scientific disagreement: the error bars barely overlap, if at all.

In cases like this there are several conclusions that can be drawn.

1) One or more of the groups is allowing their personal prejudices to bias their interpretations of the data.

2) Somebody is making an undetected systematic error.

3) The differences fall into the gray area of science: Everyone is making their best good-faith estimates; but the weight or bias that they assign unavoidable parametric estimates is inaccurate.

Most of you (viz. folkhemmet) would likely select #3 with maybe a little of #1. I think the correct answer is likely #4: there are unknown systematics that affect different measurements in different ways. The irony is that this position places more faith in the efforts of the scientists and engineers than in the basic tools they are using... tools they developed.

Zahl
2007-Aug-30, 05:40 PM
Originally Posted by dgruss23

Zahl, I never said that the cosmological redshifts and the Vflow redshifts were different.

You wrote: "The Vflow column is valid as the mean cluster redshift for the mean cluster distance, not the individual galaxies at their individual distances." and in post #100 "the Vflow velocities are only valid for both the galaxies and the clusters if the galaxy is at the mean cluster distance". You also wrote: "Yes, all galaxies within a cluster should have the same cosmological redshift".

These statements contradict each other if the Vflow redshifts and the cosmological redshifts in fact are the same thing. If all galaxies within a cluster have the same cosmological redshift (fact), and the cosmological redshifts and the Vflow redshifts are the same thing (fact), then all galaxies within a cluster have the same Vflow redshift (fact), not just those that are at the mean cluster distance as you claimed above.

In post #100 you also claimed that galaxies have different cosmic redshifts depending on their location within the cluster: "For the sake of argument let's adopt the popular H0=70 km s-1 Mpc-1. Now NGC 4881 has a SBF distance of 102.3 Mpc. The I-band TFR distance is 85.6 Mpc and the FP is 85.8 Mpc. So here is the problem. If the NGC 4881 SBF distance is correct, and the value of the Hubble Constant is 70, then the cosmic redshift for NGC 4881 is 7161 km s-1. But is NGC 4881 at the center (and hence the actual mean distance) of the cluster? According to the TFR and FP results it may not be. Now based upon the TFR and FP results, the cosmic velocity of the coma cluster (if H0=70) is 6000 km s-1 if the TFR and FP distances are correct. If the SBF distance to NGC 4881 and the TFR and FP distances to the Coma cluster are all correct, then NGC 4881 is not central to the Coma cluster and there is a 1160 km s-1 difference between the cosmic redshift of NGC 4881 and the cosmic redshift of the Coma cluster."

This contradicts your most recent admission that "Yes, all galaxies within a cluster should have the same cosmological redshift".

But I am glad that everyone finally agrees that all galaxies within a cluster have the same Vflow (i.e. Hubble Flow) redshift.


YOU - yes YOU Zahl, incorrectly stated that the Vflow model corrects for the peculiar motions within the cluster.

Now this desperate strawman is quite funny. I have never stated in this thread how the peculiar velocities of the SBF galaxies are corrected for, because until recently the clusters vs. galaxies part of the debate was about distances, not redshifts, and because correcting for the peculiar velocities of the SBF galaxies does not affect the Key Project SBF results anyway.

Zahl
2007-Aug-30, 05:43 PM
But wait. This is the same velocity as what is given in the Vflow column in Freedman's Table 10! This is evidenced by the fact that the Vflow velocities have been corrected for peculiar velocities of galaxies and perturbations caused by mass concentrations outside the cluster and are actually used in the final calculation of H0 estimates in table 10, e.g., Vflow = 7441 +/- 300 km/s and D = 102.3 +/- 24.8 Mpc gives H0 = 72.7 +/- 18.7 km/s/Mpc. Thus we can see dgruss23's error - no wonder that he didn't describe how the determinations of cosmological redshifts and Vflow velocities differ because they don't. Both are the same for all gravitationally bound galaxies within a cluster. And it could not be any other way, because otherwise the galaxies would be flying away from each other and the cluster would come apart just as dgruss23 admitted.

Originally posted by TomT

I don't quite follow this. Are you saying the Vflows, calculated by the method you say is correct, give a solution for H0 = 72.7 +/- 18.7? In other words H0 lies between 64.0 and 91.4?

That is the H0 estimate based on one of the SBF galaxies (NGC 4881), but there are five others and they have smaller errors. The final Key Project SBF estimate for the value of the Hubble constant considers these galaxies together and has even smaller errors. The errors in distance and H0 are large for NGC 4881 because it was not observed in the V band, resulting in a large uncertainty in the color dependence of the fluctuation magnitude.


An aside comment: I agree with Tim Thompson's statement that it is surprising that people get emotional over cosmology.

Not cosmology, but it really gets on my nerves when a self-proclaimed "researcher" claims that there is a "systematic offset" between two data sets without doing any error analysis and when that error analysis is done it shows that the data sets are in fact consistent with each other.

Zahl
2007-Aug-30, 05:45 PM
Originally Posted by Zahl

Now let's look at what dgruss23 had to say about distances: "Yes, all galaxies within a cluster should have the same cosmological redshift, but they're not all at the exact same distance due to the depth effect. Since you have been advocating that you can calculate H0 from a single galaxy within a cluster, it is not clear to me that you understand the potential influence of the depth effect on the H0 calculation from an individual galaxy within a cluster.

The Hubble relation is linear. For the Coma cluster the individual galaxy distances calculated from the Tully-Fisher relation range from ~70 Mpc to ~100 Mpc. The HKP adopted 7143 km s-1 for the Coma cluster. If I only pick one galaxy in the cluster to calculate H0, then if it is the 70 Mpc galaxy I get H0=102.0. If my single galaxy happens to be the galaxy at 100 Mpc I get H0=71.4.

Again, that is why we take multiple galaxies to get both the redshift and the distance for the cluster"

Again you are giving distance estimates without their errors in the usual crackpot fashion.

Originally Poster by TomT

I don't get your statement in the last sentence. Given the example by dgruss23, the variation in H0 is 71.4 to 102.0, or 86.7 +/- 15.3.

That is not even close to how it is done.


If you account for the errors in each calculation of the TF distances, there is a +/- associated with each of these which would add to the +/- 14.3 error in H0. So in other words you seem to be arguing that the error in H0 is larger than that given by dgruss23.

Dgruss did not give any errors, just two numbers, and your attempt to derive errors from them does not change this fact.

Zahl
2007-Aug-30, 05:48 PM
Originally Posted by matt.o

First of all, you use zobs for a sample of galaxies which lie within the cluster. The distribution of zobs will be roughly gaussian depending on the dynamical state of the cluster. The mean/median etc. value for the sample of galaxies then gives a fairly good approximation of the cosmological contribution to the observed redshift of each galaxy. You then need to determine the distance to a sample of galaxies within the cluster using TFR or SBF or some other method to derive a mean distance to the cluster. In combination with the mean redshift, you can get H0.

A distance from the TFR to a single galaxy, and H0 calculated from it, have such large errors that the results are not very useful. So they observe many galaxies per cluster with this method to reduce the errors. However, a single SBF distance to a galaxy has small errors. Because SBF observations are so expensive, it is much better to have SBF distances to individual galaxies in several different clusters - avoiding systematics from a single cluster environment - than to choose many galaxies from the same cluster for SBF.

Zahl
2007-Aug-30, 06:00 PM
Originally Posted by Ari Jokimaki

Two redshift based methods have been suggested for determining cosmological redshift of a galaxy cluster. I'd like to explore these a bit further.

1) The method of selecting one galaxy at the center of the cluster and assuming that the galaxy's redshift represents the cosmological redshift of the cluster. In my opinion, this method seems to have at least two flaws:

- There's no way of saying how much peculiar velocity is included in the redshift of that galaxy. The galaxy could be travelling inside the cluster having a velocity towards or away from us.

The Key Project SBF team naturally corrected for peculiar velocities, but it made very little difference:

N708 4855 km/s, 4897 km/s (Ferrarese) +0.87%
N4881 6720 km/s, 6965 km/s (Ferrarese) +3.65%
N7014 4764 km/s, 4918 km/s (Ferrarese) +3.23%
IC 4296 3762 km/s, 3686 km/s (Ferrarese) -2.03%
N5193 3735 km/s, 3806 km/s (Ferrarese) +1.90%
N4373 3373 km/s, 3395 km/s (Ferrarese) +0.65%

The first velocity is the heliocentric radial velocity of the galaxy from the CfA 2000 catalog (ftp://fang-ftp.cfa.harvard.edu/pub/catalogs/velocity.dat) that was used to calculate the velocities in the Freedman and Ferrarese HKP papers, the second is the mean cluster velocity from the same source as given by Ferrarese. If you calculate H0 from them, the differences will be well within the errors given in the papers.

Galaxies in clusters have a Gaussian-like distribution on a velocity histogram, meaning that not all velocities are equally likely. Typical clusters at typical SBF distances have a velocity dispersion of 5-15% of the mean cluster velocity. Coma has a mean velocity of 6965 km/s according to CfA and a velocity dispersion of about 1000 km/s. For A262 the values are 4897 km/s and 588 km/s, and for A3742 4918 km/s and 267 km/s. One could use a Gaussian random number generator to draw 2 "galaxies" from each of these clusters and see how much the difference is when the 6 galaxies are compared with their respective cluster means. The chances are that the difference does not amount to much. Still, I agree that correcting for peculiar velocities is a good idea.
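That experiment is easy to run; a minimal Python sketch using the three cluster means and dispersions quoted above:

import random

clusters = [("Coma", 6965.0, 1000.0), ("A262", 4897.0, 588.0), ("A3742", 4918.0, 267.0)]
for name, mean, disp in clusters:
    draws = [random.gauss(mean, disp) for _ in range(2)]   # 2 "galaxies" per cluster
    offsets = [100 * (d - mean) / mean for d in draws]     # % offset from cluster mean
    print(name, [round(o, 1) for o in offsets])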


- The assumption that galaxy is at the center of the cluster is not necessarily very good one.

It does not matter where the galaxies are located as I have calculated several times in this thread. The galaxies are far away (~60 Mpc) and the radii of the clusters are less than 10% of this. Moreover, the central regions are more populated than the outskirts so that a random galaxy is more likely to reside in the central region. Six galaxies is then more than enough to keep the random error minimal.

Jerry
2007-Aug-30, 10:18 PM
These statements contradict each other if the Vflow redshifts and the cosmological redshifts in fact are the same thing. If all galaxies within a cluster have the same cosmological redshift (fact), and the cosmological redshifts and the Vflow redshifts are the same thing (fact), then all galaxies within a cluster have the same Vflow redshift (fact), not just those that are at the mean cluster distance as you claimed above.

Is the Vflow necessarily the same as the cosmological redshift? I don't see why it has to be. Or are you specifically calling out the case where they are the same?

A cluster could have a relative velocity - a peculiar motion towards our own cluster - while at the same time 'moving away from us due to expansion' at a rate that is less than what the normalized redshift would be in all directions at the same distance.

Isn't this one of the reasons that the zero-point for cosmic redshift flow remains so difficult to pin down?

RussT
2007-Aug-31, 01:05 AM
That is the H0 estimate based on one of the SBF galaxies (NGC 4881), but there are five others and they have smaller errors. The final Key Project SBF estimate for the value of the Hubble constant considers these galaxies together and has even smaller errors. The errors in distance and H0 are large for NGC 4881 because it was not observed in the V band, resulting in a large uncertainty in the color dependence of the fluctuation magnitude.

http://adsabs.harvard.edu/abs/1995AJ....110.2537B

dgruss23
2007-Aug-31, 01:45 AM
http://adsabs.harvard.edu/abs/1995AJ....110.2537B

Good example RussT. The GCLF distance to NGC 4881 was 108 Mpc (-11, +infinity). The SBF distance to NGC 4881 is 102.3 Mpc (+/-25). The Fundamental plane distance was 85.8 +/- 6 Mpc (81 galaxies) while the I-band TFR distance was 85.6 Mpc for the HKP or 83.6 Mpc for Tully&Pierce (2000) after correcting their TFR zero point to the HKP final cepheid distances.

As I've pointed out - NGC 4881 is a single galaxy and it may simply be on the backside of the cluster. If that is the case, then the H0 value derived for the Coma cluster using the SBF distance is wrong.

Contrary to what some have claimed, the difference is much more than a couple Mpc in this instance.

Ari Jokimaki
2007-Aug-31, 07:52 AM
One could use a Gaussian random number generator to draw 2 "galaxies" from each of these clusters and see how much the difference is when the 6 galaxies are compared with their respective cluster means. The chances are that the difference does not amount to much. Still I agree that correcting for peculiar velocities is a good idea.
Chances are that yes, but chances are also that the difference amounts to quite much. If you select only one galaxy without knowing the cluster mean (which was the first method I talked about), you not only have a reasonable chance of selecting a poor representative, you also have a better chance of selecting a foreground or background galaxy. With that method a sample of 6 seems inadequate.


It does not matter where the galaxies are located as I have calculated several times in this thread. The galaxies are far away (~60 Mpc) and the radii of the clusters are less than 10% of this.
I'm not sure if it matters or not, but I don't consider distance from the cluster center to be the important factor. What I'm thinking is that you might have a better chance of the galaxy's peculiar velocity being quite large if it is at the edge of the cluster. I'm not sure about that, but I have a vague recollection that it might be the case. Perhaps someone here knows the observational situation: do galaxies at the edges of clusters generally have larger velocity differences with respect to the cluster mean than galaxies at the center?


Moreover, the central regions are more populated than the outskirts so that a random galaxy is more likely to reside in the central region. Six galaxies is then more than enough to keep the random error minimal.
Well, with a possible 10% from cluster radius and 10% (for example) from peculiar velocities, I rather doubt that 6 would be enough, but that's just my 2 cents.

Zahl
2007-Sep-01, 09:04 PM
Is the Vflow necessarily the same as the cosmological redshift? I don't see why it has to be.

It follows from their (HKP) definition. They have corrected for all perturbations (including the bulk cluster motions) as well as they could back then to extract the pure Hubble Flow velocities.

Zahl
2007-Sep-01, 09:11 PM
http://adsabs.harvard.edu/abs/1995AJ....110.2537B

If it is true that "the minimum Coma distance is 108 Mpc" as they say in the paper it would mean that the HKP FP & TFR and Tully & Pierce TFR distances for Coma are wrong. This is also supported by Kavelaars et al. (http://adsabs.harvard.edu/abs/2000ApJ...533..125K) who found a distance of 102±6 Mpc for Coma using the GCLF, Liu & Graham (http://adsabs.harvard.edu/abs/2001ApJ...557L..31L) who found 100±10 Mpc for the central supergiant cD galaxy (NGC 4874) in the core of the Coma cluster, and Masters et al. (http://adsabs.harvard.edu/abs/2006ApJ...653..861M) who found that the TFR paper by Tully & Pierce (2000) "still makes no attempt to account for selection biases relying on the erroneous idea that cluster samples are complete". Fixing this error, Masters et al. provided a measure of H0=74±2±6 km/s/Mpc. They didn't give individual target distances, but using their value of 74±2±6 km/s/Mpc for H0 and HKP's Vflow of 7441±300 km/s, it can be calculated that Coma lies at 101±12 Mpc although the errors are not exactly Gaussian.
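For what it's worth, one plausible way to reproduce that 101±12 Mpc figure in Python (adding the ±2 and ±6 on H0 linearly, then propagating in quadrature - my reading, not necessarily how Masters et al. combined them):

v, sig_v = 7441.0, 300.0        # HKP Vflow for Coma, km/s
H0, sig_H = 74.0, 2.0 + 6.0     # km/s/Mpc; random + systematic added linearly (assumed)

D = v / H0
sig_D = D * ((sig_v / v) ** 2 + (sig_H / H0) ** 2) ** 0.5
print(round(D, 1), round(sig_D, 1))   # roughly 100.6 and 11.6 Mpc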

RussT
2007-Sep-01, 11:59 PM
If it is true that "the minimum Coma distance is 108 Mpc" as they say in the paper it would mean that the HKP FP & TFR and Tully & Pierce TFR distances for Coma are wrong. This is also supported by Kavelaars et al. (http://adsabs.harvard.edu/abs/2000ApJ...533..125K) who found a distance of 102±6 Mpc for Coma using the GCLF, Liu & Graham (http://adsabs.harvard.edu/abs/2001ApJ...557L..31L) who found 100±10 Mpc for the central supergiant cD galaxy (NGC 4874) in the core of the Coma cluster, and Masters et al. (http://adsabs.harvard.edu/abs/2006ApJ...653..861M) who found that the TFR paper by Tully & Pierce (2000) "still makes no attempt to account for selection biases relying on the erroneous idea that cluster samples are complete". Fixing this error, Masters et al. provided a measure of H0=74±2±6 km/s/Mpc. They didn't give individual target distances, but using their value of 74±2±6 km/s/Mpc for H0 and HKP's Vflow of 7441±300 km/s, it can be calculated that Coma lies at 101±12 Mpc although the errors are not exactly Gaussian.

dgruss23 will be able to comment on these specifics much better than I can, and I am almost certain he will.

I noticed this statement above...



The errors in distance and H0 are large for NGC 4881 because it was not observed in the V band, resulting in a large uncertainty in the color dependence of the fluctuation magnitude.

It may just be me, but this looks like an attempt at minimizing the problems of choosing a galaxy such as NGC 4881, which is a bright elliptical, by treating the V band as the 'only' problem.

But, as dgruss23 has shown over and over, just choosing the 1 BCG for each of the 6 clusters, while not exactly 'irrelevant', is extremely problematic, for more reasons than just the V band in NGC 4881!

While this evaluation below is older, it is still nearly as problematic today...

http://donegal.uchicago.edu/science/blazar/MRK501.html

Zahl
2007-Sep-02, 09:43 AM
The Key Project SBF team naturally corrected for peculiar velocities, but it made very little difference:

N708 4855 km/s, 4897 km/s (Ferrarese) +0.87%
N4881 6720 km/s, 6965 km/s (Ferrarese) +3.65%
N7014 4764 km/s, 4918 km/s (Ferrarese) +3.23%
IC 4296 3762 km/s, 3686 km/s (Ferrarese) -2.03%
N5193 3735 km/s, 3806 km/s (Ferrarese) +1.90%
N4373 3373 km/s, 3395 km/s (Ferrarese) +0.65%

The first velocity is the heliocentric radial velocity of the galaxy from the CfA 2000 catalog (ftp://fang-ftp.cfa.harvard.edu/pub/c...s/velocity.dat) that was used to calculate the velocities in the Freedman and Ferrarese HKP papers, the second is the mean cluster velocity from the same source as given by Ferrarese. If you calculate H0 from them, the differences will be well within the errors given in the papers.

Galaxies in clusters have a Gaussian like distribution on a velocity histogram, meaning that not all velocities are equally likely. Typical clusters at typical SBF distances have a velocity dispersion of 5-15% of the mean cluster velocity. Coma has a mean velocity of 6965 km/s according to CfA and a velocity dispersion of about 1000 km/s. For A262 the values are 4897 km/s and 588 km/s and for A3742 4918 km/s and 267 km/s. One could use a Gaussian random number generator to draw 2 "galaxies" from each of these clusters and see how much the difference is when the 6 galaxies are compared with their respective cluster means. The chances are that the difference does not amount to much. Still I agree that correcting for peculiar velocities is a good idea.

Originally Posted by Ari Jokimaki

Chances are that yes, but chances are also that difference amounts to quite much. If you select only one galaxy without knowing the cluster mean (which was the first method I talked about), you not only have a reasonable chance of selecting poor representative

Wrong. The probability that the true cluster mean velocity is more than 10% greater than the mean radial velocity of six randomly chosen galaxies is just 0.7% if the six clusters have a velocity dispersion of 10% of the mean cluster velocity. An offset that large would also imply that H0 is really that much higher.

z = (X-µ)/(σ/Sqrt(N)) = (x-1.1x)/(0.1x/Sqrt(6)) = -2.45 sigma one sided -> 0.7%.

You can test this with a Gaussian random number generator like this one:

http://www.graphpad.com/quickcalcs/randomN1.cfm
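Or check the tail probability directly; a short Python verification of the 0.7% figure:

import math

z = 0.10 / (0.10 / math.sqrt(6))            # = sqrt(6), about 2.449 sigma
p = 0.5 * math.erfc(z / math.sqrt(2))       # one-sided Gaussian tail probability
print(round(z, 2), round(100 * p, 2), "%")  # about 2.45 sigma and 0.7%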


Well, with possible 10% from cluster radius and 10% (for example) from peculiar velocities, I rather doubt that 6 would be enough, but that's just my 2 cents.

Well, why not show it quantitatively then? Do you just think that your 2 cents trumps the HKP's error analysis that says N=6 results in an error of 4 km/s/Mpc in H0?

Zahl
2007-Sep-02, 10:02 AM
Originally Posted by Zahl
The errors in distance and H0 are large for NGC 4881 because it was not observed in the V band, resulting in a large uncertainty in the color dependence of the fluctuation magnitude.

It may just be me, but this looks like an attempt at minimizing the problems of choosing a galaxy such as NGC 4881, which is a bright Elliptical, as only having a problem in the V band, as the 'only' problem.

What problems? The reason for large error bars for NGC 4881 is the one I described above and is explained in detail in the papers.


But, as dgruss23 has shown over and over, just choosing the 1 BCG, for each of the 6 clusters, while not exactly 'irrelavant', is extremely problematic, for more reasons than just the V band in NGC 4881!

Dgruss has so far offered nothing quantitative to support his claims, just word salad. He has given no quantitative or even qualitative evidence for systematic errors while I have shown quantitatively that there is no evidence of systematic errors. I have also pointed out that dgruss has egregiously and repeatedly neglected error analysis, that peculiar velocity correction for the SBF galaxies made very little difference, that Vflow velocities are the same for all galaxies within a cluster and that the SBF distances are consistent with other datasets.

dgruss23
2007-Sep-02, 03:25 PM
dgruss23 will be able to comment on these specifics much better than I can, as I am almost certain he will.

Actually RussT, without your quoting Zahl, I wouldn't know what he's saying. As I stated before, I'm done even reading his comments as he hasn't shown the ability to politely discuss. I have no idea what Zahl has said other than what you quoted above since I told Zahl I was done being insulted by him.

However, since you're interested, I'll share a few comments on the issue of the Coma cluster distance.

The distance estimates that have been discussed include:

HKP SBF distance to NGC 4881: 102.3 Mpc (+/-24.8)
Lui&Graham SBF distance to NGC 4874: 102 Mpc (+/-6)
HKP FP distance using 81 galaxies: 85.8 Mpc (+/-5.9)
Tully&Pierce I-band TFR distance using 28 spiral galaxies: 83.5 Mpc (+/-14.2)

Note that the Tully&Pierce distance is different from that reported in the paper (86.3 Mpc) because TP00 published their analysis before the final metallicity-corrected Cepheid distances of the HKP final report. To bring the TP00 TFR distances to the same distance scale as the other distance estimates, I recalculated the TP00 zero points for the 24 calibrators using the HKP final metallicity-corrected Cepheid distances. This reduced the zero point from 21.57 to 21.50 mag (+/-0.23).

So RussT you'll have to decide what you think is the best approach to evaluating this. I see from the quote you provided that Zahl claims the FP and TFR distances to Coma are wrong. That may be so. However, one has to realize that if the FP and TFR distances are wrong, then that implies a sizeable systematic error in the FP and I-band TFR zero points (~0.40 mag).

The TP00 I-band TFR used 24 Cepheid calibrators to fix the zero point whereas the SBF calibrator sample was 6 Cepheid calibrators. The TP00 Coma cluster sample included 28 galaxies. We have 2 SBF to Coma distances presented. Which is more likely - a large zero point error in the distance method that has 6 zero point calibrators or the distance method that has 24 zero point calibrators?

In my opinion, it is reckless to conclude from the handful of SBF galaxies and zero point calibrators that it must be the TFR distances that are wrong. While that could be the case, you need a larger sample of SBF distances to adequately explore this possibility - as I've been saying since the OP of this thread.

And it's not a simple matter to just say the I-band TFR zero point is 0.40 mag too small. Remember that 24 Cepheid calibrators - the same pool of galaxies from which the 6 SBF zero point calibrators were drawn - were used to fix the zero point. So if you just increase the zero point of the I-band TFR by 0.40 mag, the Cepheid calibrators will then be systematically underluminous by 0.40 mag at a given rotational velocity relative to the spiral galaxies in the Coma cluster. Given that the observed scatter of the I-band TFR zero point is +/-0.23 mag, this is no small problem.

Another possibility that would have to be investigated is the slope of the TFR. The larger the adopted slope, the smaller the calculated zero point. However, slope errors are a partially self-correcting problem for this very reason. Let's say that the TFR has some true global underlying slope in the I-band that galaxies would tightly follow if we could eliminate all data errors.

You can find I-band TFR distance moduli using the equation from TP00 with the updated zero point:

m-M = 21.50 + Itc + 8.11(log WRi - 2.5)

where 21.50 is the zero point, 8.11 is the slope, log WRi is the logarithm of the parameter WRi which is twice the rotational velocity of the galaxy corrected for redshift, inclination and turbulence, and Itc is the total I-band magnitude corrected for internal and galactic absorption and redshift.
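
In Python, the relation works out like this (a minimal sketch; the example galaxy values are invented purely for illustration):

def tfr_distance_mpc(itc, log_wri, zero_point=21.50, slope=8.11):
    """TP00 I-band TFR: m - M = zero_point + Itc + slope*(log WRi - 2.5)."""
    mu = zero_point + itc + slope * (log_wri - 2.5)  # distance modulus m - M
    return 10 ** ((mu - 25.0) / 5.0)                 # mu = 5*log10(d/Mpc) + 25

# Hypothetical spiral with Itc = 12.0 mag and log WRi = 2.65:
print(tfr_distance_mpc(12.0, 2.65))  # ~87.8 Mpc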

But if you arbitrarily increase the slope, you will reduce the zero point. Depending upon the rotational velocity distribution of the sample (specifically, if most of the galaxies have log WRi > 2.5), an incorrect slope can be somewhat self-correcting: too steep a slope will tend to overestimate distances for the faster rotators (and the faster the rotator, the greater the effect), but a steeper slope also gives a smaller zero point, which reduces the size of the error from too steep a slope.

Another possibility would be a systematic underestimate of the rotational velocities for Coma cluster galaxies. This could occur if inclinations were systematically estimated too close to edge on or if rotational velocities derived from hydrogen linewidths were systematically undersampling the true linewidth.

And there are other avenues to explore for possible large errors in the TFR distances.

You'll have to decide what to take from all this. In the quote you provided Zahl has declared that the I-band TFR distances are wrong, but offers nothing specific that proves that is the case.

My point has simply been that while the I-band TFR distances could be wrong, the SBF sample is very small and has a smaller number of calibrators for the zero point. My view is that more SBF distances are needed before it can be concluded that the SBF distances (a) represent the actual Coma cluster distance and (b) are not affected by a systematic error. Note that in an earlier post I pointed out that at distances less than 60 Mpc, the TFR and SBF distances agree. It is beyond 60 Mpc that the offset appears and suggests a possible systematic effect- but the SBF sample is so limited in size, it is impossible to meaningfully determine if such an offset is in fact real and if so, which method of determining distance is responsible for the systematic offset.

I'll be interested to hear your thoughts on all this RussT.

Ari Jokimaki
2007-Sep-03, 06:35 AM
Wrong. The probability that the true cluster mean velocity is more than 10% greater than the mean radial velocity of six randomly chosen galaxies is just 0.7% if the six clusters have a velocity dispersion of 10% of the mean cluster velocity. This is then also the probability that H0 is really that much higher.

z = (X-µ)/(σ/Sqrt(N)) = (x-1.1x)/(0.1x/Sqrt(6)) = -2.45 sigma one sided -> 0.7%.

You can test this with a Gaussian random number generator like this one:

http://www.graphpad.com/quickcalcs/randomN1.cfm
That rests on the assumption that the velocity distribution is Gaussian. That might be a good guess, but when we do error analysis you shouldn't trust your assumptions, you should go for the worst-case scenario. If you have a maximum peculiar velocity of 500 km/s (for example), and if you have 6 clusters with a cosmological redshift of 3000 km/s, then the maximum error due to peculiar velocity for each cluster is about 16.7%, and the maximum error for all six clusters is also 16.7% (for 6 clusters having a cosmological redshift of 6000 km/s the error is 8.3%). For error analysis, in my opinion, there's no reason to assume that any single galaxy does not have the maximum peculiar velocity.
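
For concreteness, here is a small Monte Carlo sketch in Python of the two viewpoints (it assumes the Gaussian model from the quoted calculation; the 3000 km/s and 500 km/s figures are the ones used above):

import random

random.seed(1)
true_v, sigma, n, trials = 3000.0, 300.0, 6, 100_000  # 10% dispersion per galaxy
low = sum(
    sum(random.gauss(true_v, sigma) for _ in range(n)) / n < 0.9 * true_v
    for _ in range(trials)
)
print(low / trials)    # ~0.007: the quoted 0.7% chance the 6-galaxy mean is >10% low
print(500.0 / 3000.0)  # ~0.167: the worst case, every galaxy off by +500 km/s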


Well, why not show it quantitatively then? Do you just think that your 2 cents trumps the HKP's error analysis that says N=6 results in an error of 4 km/s/Mpc in H0?
I haven't been talking about HKP. I started by comparing two methods (that were talked about in this thread) of determining the cosmological redshift of a galaxy cluster. You then commented on my arguments about the first method, which was selecting one galaxy in the cluster and assuming that the redshift of that galaxy represents the cluster's cosmological redshift. We're still on that road, and I'm still talking about that method, not about HKP. But generally I don't see any reason to assume that HKP's (or anyone else's) error analysis would be correct by default. Also, while I do think my 2 cents are not usually worth much in these kinds of things, there's no reason to assume that HKP must be correct over me. That would be the logical fallacy called "appeal to authority", I think.

RussT
2007-Sep-03, 08:28 AM
Actually RussT, without your quoting Zahl, I wouldn't know what he's saying. As I stated before, I'm done even reading his comments as he hasn't shown the ability to politely discuss. I have no idea what Zahl has said other than what you quoted above since I told Zahl I was done being insulted by him.

I had actually forgotten that you had said this. Sorry for making the statement that you would probably respond :(



But, as dgruss23 has shown over and over, just choosing the 1 BCG for each of the 6 clusters, while not exactly 'irrelevant', is extremely problematic, for more reasons than just the V band in NGC 4881!

While this evaluation below is older, it is still nearly as problematic today...

http://donegal.uchicago.edu/science/blazar/MRK501.html

Posting this was just adding to what I consider a very logical position: using only 1 BCG as any kind of valid distance parameter to any cluster is fraught with problems that always need to be robustly checked against numerous other parameters, which makes the 1 BCG all but moot as a cluster distance guide.

The "Brighter" the galaxy the more uncertain are ALL the parameters in determining many things ;)

And, Markarian 501 is only 300 million light-years away; it is not clear across the universe.

Zahl
2007-Sep-03, 07:02 PM
[dgruss23's post of 2007-Sep-02 on the Coma cluster distance, quoted in full above]

The above post omits two important things from the quote it replies to:

Originally Posted by Zahl
If it is true that "the minimum Coma distance is 108 Mpc" as they say in the paper it would mean that the HKP FP & TFR and Tully & Pierce TFR distances for Coma are wrong. This is also supported by Kavelaars et al. (http://adsabs.harvard.edu/abs/2000ApJ...533..125K) who found a distance of 102±6 Mpc for Coma using the GCLF, Liu & Graham (http://adsabs.harvard.edu/abs/2001ApJ...557L..31L) who found 100±10 Mpc for the central supergiant cD galaxy (NGC 4874) in the core of the Coma cluster, and Masters et al. (http://adsabs.harvard.edu/abs/2006ApJ...653..861M) who found that the TFR paper by Tully & Pierce (2000) "still makes no attempt to account for selection biases relying on the erroneous idea that cluster samples are complete". Fixing this error, Masters et al. provided a measure of H0=74±2±6 km/s/Mpc. They didn't give individual target distances, but using their value of 74±2±6 km/s/Mpc for H0 and HKP's Vflow of 7441±300 km/s, it can be calculated that Coma lies at 101±12 Mpc although the errors are not exactly Gaussian.

Not a word on Kavelaars et al. (http://adsabs.harvard.edu/abs/2000ApJ...533..125K) who found a distance of 102±6 Mpc for Coma using the GCLF method.

Not a word on the error in the Tully & Pierce paper identified and fixed by Masters et al.

We thus have the distance to Coma measured with four different methods and they all give ~100 Mpc and an error identified in the T&P measurement:

Kavelaars et al. 102±6 Mpc (GCLF)

Liu & Graham 100±10 Mpc for the central supergiant cD galaxy NGC 4874 (SBF)

Capaccioli et al. 102±23 Mpc (Novae & Supernovae)
http://adsabs.harvard.edu/abs/1990ApJ...350..110C

Masters et al. 101±12 Mpc (Tully-Fisher)

Baum et al. 105±6 Mpc to IC 4051 (GCLF)
http://adsabs.harvard.edu/abs/1997AJ....113.1483B

Okon et al. 111±11 Mpc to NGC 4926 (GCLF)

I think this is enough evidence to show that Coma lies at ~100 Mpc.

Zahl
2007-Sep-03, 08:02 PM
Originally Posted by Zahl
Wrong. The probability that the true cluster mean velocity is more than 10% greater than the mean radial velocity of six randomly chosen galaxies is just 0.7% if the six clusters have a velocity dispersion of 10% of the mean cluster velocity. This is then also the probability that H0 is really that much higher.

z = (X-µ)/(σ/Sqrt(N)) = (x-1.1x)/(0.1x/Sqrt(6)) = -2.45 sigma one sided -> 0.7%.

You can test this with a Gaussian random number generator like this one:

http://www.graphpad.com/quickcalcs/randomN1.cfm

That rests on the assumption that the velocity distribution is Gaussian. That might be a good guess, but when we do error analysis you shouldn't trust your assumptions, you should go for the worst-case scenario.

If you want to reliably describe errors in the general case, you have to use parameter values that apply to the general case, not the worst case. If you want to know how far off you can be in principle, regardless of how unlikely that might be, then you use parameter values that apply to the worst-case scenario.


If you have a maximum peculiar velocity of 500 km/s (for example), and if you have 6 clusters with a cosmological redshift of 3000 km/s, then the maximum error due to peculiar velocity for each cluster is about 16.7%, and the maximum error for all six clusters is also 16.7%

First, the probability that all six samples come from the low side of the distribution is about 1.6% (0.5^6). A cluster might be somewhat non-Gaussian, but so what? I doubt you can show that a randomly chosen ensemble of six clusters will be skewed so much as to materially change that probability. Second, the probability that all six targets come from the maximum peculiar velocity tail is ridiculously low, much lower than 1:10^6.


For error analysis, in my opinion, there's no reason to assume that any single galaxy does not have the maximum peculiar velocity.

You are seriously confused if you think that my "z = (X-µ)/(σ/Sqrt(N)) = (x-1.1x)/(0.1x/Sqrt(6)) = -2.45 sigma one sided -> 0.7%" calculation "assumes that any single galaxy does not have the maximum peculiar velocity". In fact you are assuming in your calculation that every single galaxy will have the maximum peculiar velocity. This is a completely baseless assumption save for the extreme one in a billion freak event.



Originally Posted by Zahl
Well, why not show it quantitatively then? Do you just think that your 2 cents trumps the HKP's error analysis that says N=6 results in an error of 4 km/s/Mpc in H0?

I haven't been talking about HKP. I started by comparing two methods (that were talked about in this thread) of determining the cosmological redshift of a galaxy cluster. You then commented on my arguments about the first method, which was selecting one galaxy in the cluster and assuming that the redshift of that galaxy represents the cluster's cosmological redshift. We're still on that road, and I'm still talking about that method, not about HKP. But generally I don't see any reason to assume that HKP's (or anyone else's) error analysis would be correct by default. Also, while I do think my 2 cents are not usually worth much in these kinds of things, there's no reason to assume that HKP must be correct over me. That would be the logical fallacy called "appeal to authority", I think.

What you have offered as errors is in conflict with the HKP determination. Of course HKP's error analysis is not "correct by default", but it is not "wrong by hunch" either. I know that the quality of argumentation can be low on forums like this, but it still amazes me how low it can be sometimes.

Ari Jokimaki
2007-Sep-04, 04:43 AM
If you want to reliably describe errors in the general case, you have to use parameter values that apply to the general case, not the worst case.
I don't see any reason to be optimistic when doing error analysis. But if this is commonly done as you say, then it's no wonder there's talk about "precision cosmology". I don't see any point in reporting errors as they are on a "good day".


In fact you are assuming in your calculation that every single galaxy will have the maximum peculiar velocity.
Exactly. The worst case scenario.


This is a completely baseless assumption save for the extreme one in a billion freak event.
But it still gives the error limits of the situation.


What you have offered as errors is in conflict with the HKP determination. Of course HKP's error analysis is not "correct by default", but it is not "wrong by hunch" either. I know that the quality of argumentation can be low on forums like this, but it still amazes me how low it can be sometimes.
I still haven't made any claims on HKP's error analysis. I haven't said, nor implied that HKP is wrong. Why do you insist I am doing that? Are you just looking for someone to pick a fight with? I won't bother with your last remark except to point out that once again a discussion containing you had to end with insults. Have a nice day.

Jim
2007-Sep-04, 03:30 PM
If you want to reliably describe errors in the general case, you have to... (much snips)

Zahl, this was a very good response, until you came to that last sentence. There is blunt and there is insulting; please watch the line between the two a bit more closely.

Nereid
2007-Sep-04, 06:45 PM
[snip]

No, but how big the difference is depends upon how you compare them.

The HKP adopted a LMC distance modulus of 18.50 +/- 0.10. van Leeuwen et al find a LMC distance modulus of 18.39 +/- 0.05. An et al found a LMC distance modulus of 18.34 +/- 0.06.

So you can see that the 1 sigma errors of the recent van Leeuwen and An studies overlap. The difference between the HKP value and van Leeuwen et al is 1.1 sigma, between HKP and An et al is 1.6 sigma.

Taking the results of the new studies, the adopted HKP LMC distance modulus is 2.2 sigma larger than the van Leeuwen distance modulus and 2.7 sigma larger than the An distance modulus.
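
The arithmetic behind those sigma figures, as a quick Python sketch (each difference is divided by a single study's quoted error, which is what the numbers imply):

hkp, vl, an = (18.50, 0.10), (18.39, 0.05), (18.34, 0.06)
print(round((hkp[0] - vl[0]) / hkp[1], 1))  # 1.1 sigma, using HKP's error
print(round((hkp[0] - an[0]) / hkp[1], 1))  # 1.6 sigma, using HKP's error
print(round((hkp[0] - vl[0]) / vl[1], 1))   # 2.2 sigma, using van Leeuwen's error
print(round((hkp[0] - an[0]) / an[1], 1))   # ~2.7 sigma, using An's error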

In any case, both new studies make all extragalactic Cepheid distances closer. Since the Cepheid galaxies are the zero point calibrators for the secondary distance indicators, this makes all the secondary distance indicator distances closer and H0 larger.

Section 8.1 in Freedman et al. contains a discussion of the estimated LMC distance; Figure 5 is quite interesting, if somewhat daunting.

I guess the question that comes to mind, wrt these two, is why they should be given particular weight, as estimates of the LMC distance? Other than that they were published after the HKP paper, and so presumably address at least some of the reasons for the differences in estimated systematic errors published in previous papers.

More generally, what approaches to combining many independent estimates are robust?
The point of this thread is to discuss whether or not H0 could be in the 80's. Do you see the above as something that establishes H0 could not be in the 80's?

Your point about the overlap of the FP uncertainty is very valid, but keep in mind - as I explained to Zahl - the fundamental plane result is not why I have suggested H0 could still be in the 80's.

I'll return to this later.

TomT
2007-Sep-04, 08:28 PM
Exactly. The worst case scenario.

But it still gives the error limits of the situation.

Hi Ari,

You may be having the same difficulty I have in getting used to quoted accuracy in astronomy measurements and data. I think one has to assume that when an error bar is quoted, it is understood that it is a 1 sigma value. So a Hubble value of 72 +/- 8, for example, means the value is 64 to 80 with a certainty of about 68.3%. Someone please correct me if I am wrong on this.
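
A one-line check of that coverage figure, for anyone who wants it:

import math
print(math.erf(1 / math.sqrt(2)))  # ~0.683: a 1 sigma interval covers about 68.3%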

Being an engineer, I am used to a design with a 3 sigma error analysis, and then adding in a factor of safety of 3 to 5 depending on the situation. So the worst case scenario is very important, as is knowing the broad error limits.

With this said, I think it is still remarkable that astronomers can come up with the numbers, equipment, and techniques they do. And since the accuracy is also dependent on the validity of the theory the numbers are being applied to, I can't see the reason for the emotion that sometimes arises over questions and critiques about the numbers.

Nereid
2007-Sep-04, 10:02 PM
Hi Ari,

You may be having the same difficulty I have in getting used to quoted accuracy in astronomy measurements and data. I think one has to assume that when an error bar is quoted, it is understood that it is a 1 sigma value. So a Hubble value of 72 +/- 8, for example, means the value is 64 to 80 with a certainty of about 68.3%. Someone please correct me if I am wrong on this.

[snip]

It was covered earlier, at some length.

"72 +/- 8" is one level of summary; Freedman et al. went to considerable lengths to provide a more detailed, more nuanced result ... including separating estimates of uncertainty due to random error from those due to systematic error, and doing three different kinds of analyses of the errors (frequentist, Bayesian, and Monte Carlo).

I would recommend that you read the whole paper, if you have a chance ... wrt the scope of this thread, I think there's little more one can really add to what's already in that paper, other than (perhaps) dotting the odd i or crossing a stray t. Oh, and Tim Thompson made pretty much the same point, several pages ago now ...

Ari Jokimaki
2007-Sep-05, 05:03 AM
Hi Ari,

You may be having the same difficulty I have in getting used to quoted accuracy in astronomy measurements and data. I think one has to assume that when an error bar is quoted, it is understood that it is a 1 sigma value. So a Hubble value of 72 +/- 8, for example, means the value is 64 to 80 with a certainty of about 68.3%. Someone please correct me if I am wrong on this.

Being an engineer, I am used to a design with a 3 sigma error analysis, and then adding in a factor of safety of 3 to 5 depending on the situation. So the worst case scenario is very important, as is knowing the broad error limits.

With this said, I think it is still remarkable that astronomers can come up with the numbers, equipment, and techniques they do. And since the accuracy is also dependent on the validity of the theory the numbers are being applied to, I can't see the reason for the emotion that sometimes arises over questions and critiques about the numbers.
I agree with everything you said, Tom.

One thing I wonder is whether there are any benefits to that 1 sigma analysis over the worst-case scenario, other than making your results look more accurate than they are?

Zahl
2007-Sep-06, 11:09 AM
Originally Posted by dgruss23 View Post
[dgruss23's comparison of the HKP, van Leeuwen, and An LMC distance moduli, and Nereid's reply, quoted in full above]

To get a more balanced view of recent LMC distance determinations than dgruss' cherry picking, here's a post-HKP review from Alves (2004), who found a distance modulus of 18.50 ± 0.02 as a weighted average of 14 studies.

http://adsabs.harvard.edu/abs/2004NewAR..48..659A

I found these post Alves determinations from ADS:

18.39 ± 0.05 van Leeuwen, cepheids 2007
18.48 ± 0.03 Ngeow, cepheids 2007
18.40 ± 0.04 Grocholski, red clump stars 2007
18.34 ± 0.06 An, cepheids 2007
18.48 ± 0.02 McNamara, delta Scuti stars 2007
18.40 ± 0.05 Benedict, cepheids 2007

18.41 ± 0.10 2006ApJ...652.1133M, cepheids & maser distance to NGC 4258
18.50 ± 0.05 2006MmSAI..77..261S, cepheids
18.46 ± 0.03 2006MmSAI..77..214D, RR Lyrae
18.54 ± 0.02 2006ApJ...642..834K, cepheids
18.40 ± 0.04 2006PhDT........14G, populous clusters

18.56 ± 0.04 2005ApJ...627..224G, cepheids
18.39 ± 0.05 2005A&A...434.1077M, Eclipsing Binary
18.51 ± 0.02 2005A&A...434.1077M, Eclipsing Binary
18.32 ± 0.08 2005tdug.conf..707R, RR Lyrae

The errors given are the random errors only. There have been four 18.3x results and four 18.5x results in 2005-2007, others have been 18.4x. The weighted average of these 15 determinations is 18.48 ± 0.01 mag. This is in good agreement with the weighted average of 14 determinations (18.50 ± 0.02) that Alves found in 2004. Adopting 18.48 ± 0.01 mag as the LMC distance modulus would then slightly increase HKP's value for H0 to ~73 and cut down their errors, but I don't bother to calculate how much exactly.
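
For anyone who wants to check the arithmetic, here is a short Python sketch (the values are copied from the list above; the weights are the standard inverse variances, 1/σi²):

mods = [(18.39, 0.05), (18.48, 0.03), (18.40, 0.04), (18.34, 0.06),
        (18.48, 0.02), (18.40, 0.05), (18.41, 0.10), (18.50, 0.05),
        (18.46, 0.03), (18.54, 0.02), (18.40, 0.04), (18.56, 0.04),
        (18.39, 0.05), (18.51, 0.02), (18.32, 0.08)]
w = [1.0 / s ** 2 for _, s in mods]                   # inverse-variance weights
mean = sum(wi * m for wi, (m, _) in zip(w, mods)) / sum(w)
err = (1.0 / sum(w)) ** 0.5                           # formal error of the weighted mean
print(round(mean, 2), round(err, 2))                  # 18.48 0.01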

antoniseb
2007-Sep-06, 11:32 AM
It is possible that the LMC's depth from our point of view is on the order of ten or twenty thousand light years, and so this error or discrepancy we are seeing might well be a real reflection of how spread out the Cepheids are in that system.

Zahl
2007-Sep-06, 11:37 AM
Originally Posted by Zahl

If you want to reliably describe errors in the general case, you have to use parameter values that apply to the general case, not the worst case.

I don't see any reason to be optimistic when doing error analysis. But if this is commonly done as you say, then it's no wonder there's talk about "precision cosmology". I don't see any point in reporting errors as they are on a "good day".

Those calculations don't describe "good day" optimistic errors. On a "good day" the errors would be zero as the target velocities would exactly correspond to their true cosmological velocities, but this is about as unlikely as your one in a billion "worst case". Instead of these "could be" errors, what we are interested in are the actual errors in the general case, and they are given by statistical error analysis as I have described.



What you have offered as errors is in conflict with the HKP determination. Of course HKP's error analysis is not "correct by default", but it is not "wrong by hunch" either. I know that the quality of argumentation can be low on forums like this, but it still amazes me how low it can be sometimes.

I still haven't made any claims on HKP's error analysis. I haven't said, nor implied that HKP is wrong. Why do you insist I am doing that?

You have. You have specifically argued against statistical error analysis, suggesting instead that we should just take the maximum observed deviation, use it for all samples in a dataset and call the resulting maximum possible error the most appropriate regardless of how stunningly unlikely it might be. But I don't think the scientific community will ditch statistics any time soon when doing error analysis.

Nereid
2007-Sep-06, 12:45 PM
Hi Ari,

You may be having the same difficulty I have in getting used to quoted accuracy in astronomy measurements and data. I think one has to assume that when an error bar is quoted, it is understood that it is a 1 sigma value. So a Hubble value of 72 +/- 8, for example, means the value is 64 to 80 with a certainty of about 68.3%. Someone please correct me if I am wrong on this.

Being an engineer, I am used to a design with a 3 sigma error analysis, and then adding in a factor of safety of 3 to 5 depending on the situation. So the worst case scenario is very important, as is knowing the broad error limits.

With this said, I think it is still remarkable that astronomers can come up with the numbers, equipment, and techniques they do. And since the accuracy is also dependent on the validity of the theory the numbers are being applied to, I can't see the reason for the emotion that sometimes arises over questions and critiques about the numbers.

I agree with everything you said, Tom.

One thing I wonder is whether there are any benefits to that 1 sigma analysis over the worst-case scenario, other than making your results look more accurate than they are?

This too has been addressed earlier in this thread, in several posts.

However, I think it's worth repeating one aspect: the benefits (or otherwise) of one measure of estimated uncertainty over another are in the eyes of the beholder ... just as in engineering I suspect.

Here's one view, or example: if your view of astronomy/cosmology as a science is founded on (naive) falsificationism, then you will look at the published results in terms of ways to find a single, credible one that falsifies a theory. BAUT member Jerry seems to take this approach (or some close variant of it). However, if you are more of a Lakatosian (http://galilean-library.org/academy/viewtopic.php?t=451), then you'll likely be looking for ways to fruitfully extend a research programme, and will check things like consistency and how to design tests that will probe new (but accessible) regions of the parameter space of the core theories.

BTW, how do the benefits of 1 sigma differ from those of worst case, across a random sample of engineering disciplines?

Zahl
2007-Sep-06, 04:50 PM
It is possible that the LMC's depth from our point of view is on the order of ten or twenty thousand light years, and so this error or discrepancy we are seeing might well be a real reflection of how spread out the Cepheids are in that system.

Those are distances to the LMC barycenter.

TomT
2007-Sep-06, 05:01 PM
BTW, how do the benefits of 1 sigma differ from those of worst case, across a random sample of engineering disciplines?

I don't quite understand what you mean by a "across a random sample of engineering disciplines". For a specific example, I have an acquaintance who does strength of materials calculations for airplane designs at McDonnell Boeing Company. In estimating the failure point of a structural section they use a worst-worst estimation and then add a factor of safety. In other words they calculate the worst case they can think of, and then design for an even worse case. This is much safer than a factor of safety applied to a 1 sigma case. You wouldn't want it otherwise if you were boarding an airplane.
Another example would be, how reliable would you want the Hubble Telescope to be? A 68.3% probability of working, a 99.7%, or even higher?

Nereid
2007-Sep-06, 05:35 PM
I don't quite understand what you mean by a "across a random sample of engineering disciplines". For a specific example, I have an acquaintance who does strength of materials calculations for airplane designs at McDonnell Boeing Company. In estimating the failure point of a structural section they use a worst-worst estimation and then add a factor of safety. In other words they calculate the worst case they can think of, and then design for an even worse case. This is much safer than a factor of safety applied to a 1 sigma case. You wouldn't want it otherwise if you were boarding an airplane.
Another example would be, how reliable would you want the Hubble Telescope to be? A 68.3% probability of working, a 99.7%, or even higher?

Right ... but suppose you're designing, oh I don't know, cheap advertising props, either to give away for free, or to support a once-only presentation.

Wouldn't the bean counters in the company come down on you like a tonne of bricks if you, the CEO (Chief Engineering Officer), insisted that the production folk manufacture to specs based on worse than worst-case estimates?

More pertinently, in the telecoms industry, in the old circuit-switched world, networks were designed not to meet worse-than-worst-case circumstances, but to have (for example) a 2% call failure rate in the hypothetical 'busy hour'. And these days those telecom folk are generally considered to have 'over-engineered' the networks.

More generally, in designing to meet certain MTBF requirements, for things with much lower safety thresholds than large civilian aircraft, do the relevant engineers make use of worse-than-worst-case estimates, much less set them in stone?

Nereid
2007-Sep-06, 05:39 PM
I'd like to take credit for this, but I can't ...


[Zahl's post of 2007-Sep-06, quoting the dgruss23/Nereid exchange and reviewing post-Alves LMC distance determinations, quoted in full above]

As it may be that dgruss23 won't see this unless someone else quotes it, I'm quoting it.

I will, however, re-state my last question:

What approaches to combining many independent estimates are robust?

Jerry
2007-Sep-06, 08:58 PM
Right ... but suppose you're designing, oh I don't know, cheap advertising props, either to give away for free, or to support a once-only presentation.

Wouldn't the bean counters in the company come down on you like a tonne of bricks if you, the CEO (Chief Engineering Officer), insisted that the production folk manufacture to specs based on worse than worst-case estimates?

Actually, for most civil engineering and electrical wiring applications, these are the design criteria. In the 60's, freeway overpasses were, in general, engineered to a worst-worst case 20 year life span (I guess they were expecting a nuclear holocaust or Jesus or something). Many of them stood, or are still standing, 30-40 years later. In SLC, they replaced all of the 20 year structures with 100 year designs just before the Olympics...the old bridges were falling apart; especially one that was too low, and was constantly being whacked by high profile vehicles.

Your house wiring is generally designed to carry 2.5 times the maximum load allowed by the circuit breakers...(If this limit has changed, please let me know before I string out my Christmas lights).

Zahl
2007-Sep-06, 09:09 PM
BTW, how do the benefits of 1 sigma differ from those of worst case, across a random sample of engineering disciplines?

Engineers will almost never use worst case estimation when designing something, preferring to subscribe to the law of diminishing returns instead. Engineers designing hard disk drives can be quite satisfied if 1% of all drives fail in five years of use. Similarly, if you are designing earthquake-resistant houses, your goal will not be to design for the worst case, but something like "light damage from a magnitude 7 direct hit". It is not cost effective or even physically possible to build something that can withstand the worst case scenario - like a direct hit from a 9.0+ megathrust monster. In other words, you settle for something like a 99.99% probability that the house does not collapse in an earthquake during its lifetime.

The worst case scenario in aviation is a catastrophic crash. No aircraft can be designed to withstand that. Just taking the worst case from the population and using that in error analysis as Ari Jokimaki suggested is simply crazy. Depending on factors such as "no risk to life at failure" or "high risk to life at failure", engineers can aim at anything from 99% to 99.999%+.

Why don't cosmologists/astrophysicists do this also? Why not observe a huge number of stars/galaxies/whatever and proudly quote 3 sigma errors? Simply because it is a much better use of limited resources to do many independent projects, with different observables, techniques, crews etc., each with a relatively modest number of targets than to do one megaproject with a huge number of stars/galaxies/whatever, that all suffer from the same systematic errors. It is much better that there are as many independent projects as possible with independent systematic errors. Then you can easily do robust literature reviews like Alves did.

Zahl
2007-Sep-06, 09:12 PM
What approaches to combining many independent estimates are robust?

If the estimates are independent and the authors did not screw up when calculating their errors, then simply taking a weighted average will be robust.

TomT
2007-Sep-07, 02:52 AM
Right ... but suppose you're designing, oh I don't know, cheap advertising props, either to give away for free, or to support a once-only presentation.

Wouldn't the bean counters in the company come down on you like a tonne of bricks if you, the CEO (Chief Engineering Officer), insisted that the production folk manufacture to specs based on worse than worst-case estimates?

Sure. Where do you rank accuracy in astronomical numbers relative to cheap advertising props vs civilian aircraft?


More generally, in designing to meet certain MTBF requirements, for things with much lower safety thresholds than large civilian aircraft, do the relevant engineers make use of worse-than-worst-case estimates, much less set them in stone?

A significant factor for most companies is product reliability, even in fairly mundane products. Getting on the wrong side of customers can kill you in a competitive market. Hence the envied reputation of General Electric Company for their 6 Sigma reliability program.

My purpose in bringing this up was only a comment that if you are from a different field than astronomy, you have to get used to the fierce defense of numerical results that have only 1 sigma accuracy. But it is also understood that much effort and ingenuity is required to get numbers even that good at this stage of our knowledge.

Ari Jokimaki
2007-Sep-07, 04:51 AM
You have. You have specifically argued against statistical error analysis, suggesting instead that we should just take the maximum observed deviation,...
Rubbish. You claimed I was talking about HKP even before that. If I say I haven't been talking about HKP, then I haven't been talking about HKP. If I say I haven't implied anything about HKP, then I haven't implied it. If you think I have, and won't take my word for it, I don't know what else I can do anymore. I'll just give a brief summary of the situation, and after that I won't respond to you (and like dgruss23, I will use the forum's ignore feature).

The summary: I started discussing two methods of determining cosmological redshifts of galaxy clusters that had been discussed in this thread. First method was to pick a single galaxy from six clusters and assume that each galaxy is a fair representative of the cluster's cosmological redshift. Second method was to use the mean redshift of six clusters. This discussion with you started on the first method, of which I at one point said that I doubt six galaxies will be enough with all error sources. To this you replied: "Do you just think that your 2 cents trumps the HKP's error analysis that says N=6 results in an error of 4 km/s/Mpc in H0?" I then for the first time pointed out that I hadn't been talking about HKP. Only after that did the worst-case scenario vs. 1 sigma discussion break out elsewhere in the thread. This HKP stuff you keep insisting on derives from that original claim of yours that I have been talking about HKP and has nothing to do with the recent worst-case scenario discussion.

I suggest that in the future you check things again to make sure before you leap to conclusions.

Zahl
2007-Sep-07, 05:29 PM
[the exchange with Ari Jokimaki, quoted in full above, ending with his summary post of 2007-Sep-07]

You have been so much out of your depth in this thread that you don't even understand what your own arguments imply. Your arguments have not been in conflict with standard statistical error analysis (and thus HKP) just in your most recent posts, but also in your very first one in this thread: "I agree with dgruss23 that 6 galaxies doesn't seem to be even nearly enough". I then explained to you why 6 galaxies give small errors, but to this you again replied: "I rather doubt that 6 would be enough, but that's just my 2 cents." At this point I should have concluded that you are not going to get it and simply asked the moderators to take action on you when you started promoting ATM error analysis in a non-ATM thread. You got one point right in your first post though, writing that "Well, my pondering above probably contains lot of misunderstandings and false/too simplistic assumptions, but perhaps some knowledgeable members here are kind enough to correct me."

Nereid
2007-Sep-07, 05:55 PM
Sure. Where do you rank accuracy in astronomical numbers relative to cheap advertising props vs civilian aircraft?

I don't; apples, oranges, and archaea ... astronomy beyond the solar system is entirely based on (passive) detection of photons*, what terrestrial engineers do couldn't be more different.
A significant factor for most companies is product reliability, even in fairly mundane products. Getting on the wrong side of customers can kill you in a competitive market. Hence the envied reputation of General Electric Company for their 6 Sigma reliability program.

My purpose in bringing this up was only a comment that if you are from a different field than astronomy, you have to get used to the fierce defense of numerical results that have only 1 sigma accuracy. But it is also understood that much effort and ingenuity is required to get numbers even that good at this stage of our knowledge.

How H.E.S.S. (and similar CATs) wring high quality astronomy from a few air showers among millions is a stunning achievement, to take just one example; one thing that I continue to find amazing is just how well it all hangs together - a handful or three of photons and you can estimate the mass distribution function of stars in galaxies a thousand Mpc away!

*OK, a few neutrinos too, plus an isotropic rain of cosmic rays. When GWR is detected - LIGO, LISA, and all that - a dramatic new window will open (even better would be if one or more of the DM telescopes starts making good observations!)

TomT
2007-Sep-08, 03:33 PM
[Zahl's list of 15 post-Alves LMC determinations, quoted from above]

The errors given are the random errors only. There have been four 18.3x results and four 18.5x results in 2005-2007, others have been 18.4x. The weighted average of these 15 determinations is 18.48 ± 0.01 mag.

This is a question for clarification only. For these 15 determinations you calculate a "weighted" average to get 18.48 +/- 0.01. How did you calculate a "weighted" average, i.e. what is the weighting factor?

Zahl
2007-Sep-08, 05:28 PM
1/σi² where σi are the errors as reported by the authors.

TomT
2007-Sep-09, 02:14 AM
1/σi² where σi are the errors as reported by the authors.

Thanks, I follow your calculation which is a standard method.

TomT
2007-Sep-09, 03:31 AM
[Zahl's post of 2007-Sep-06 reviewing post-Alves LMC distance determinations, quoted in full above]

I am trying to follow the logic in the above example of stars in a galaxy vs the earlier discussion of number of galaxies in a cluster needed to obtain accurate distance results used in the calculation of H0 values.
In the LMC discussion it appears that dgruss23 argues that using one value from an individual star distance determination method to get the LMC distance gives significantly different results than using another star in the LMC. This is then referred to as "cherry picking" and it is pointed out that two studies using 14 and 15 stars for a total of 29 gives a statistically accurate weighted average which is what should be used.
However earlier it was argued by dgruss23 that using only one galaxy from a cluster to determine the cluster distance is not enough for good accuracy, and this was countered that one is enough in that case.
Isn't there a contradiction here? Why wouldn't a weighted average of 29 or more galaxies be needed to get an accurate value of the cluster distance?

Zahl
2007-Sep-09, 11:19 AM
Your argument is so vague that it is difficult to decipher what you are trying to say. I don't know what you mean by "two studies using 14 and 15 stars", but if this refers to Alves' 2004 review of 14 2002-2004 distance determinations and my review of 15 2005-2007 distance determinations, then "two studies using 14 and 15 stars" is completely wrong as it would imply that these papers used only 1 star each to determine the LMC distance modulus. You also appear to be arguing that because many stars are needed to get "accurate" results when determining a distance to the LMC, then one SBF galaxy distance can't possibly be enough for "good accuracy". But such a goofy apples to oranges comparison involving entirely different methods, observables and physics doesn't make any sense. Also what exactly you mean by "accurate" in each case is anybody's guess. It would be appropriate that you would argue quantitatively against arguments that were presented quantitatively instead of using vague word salad.

As for dgruss' cherry picking, he used two of the lowest LMC distance determinations in recent years to argue against HKP's 18.50 ± 0.10 and ignored the others that were higher. This constitutes cherry picking.

TomT
2007-Sep-09, 03:58 PM
[Zahl's reply above, quoted in full]

A number of points:
(1) As I understand it, this is a Q/A forum where people can ask questions and expect an answer given in a civil manner and tone. Its purpose includes education of the less experienced and if their questions contain errors in understanding or wording, these can be pointed out also in a civil manner.
(2) In my question above, I used the word "star" instead of the correct word "determination" in referring to the LMC distance calculation. So with that corrected, my first point was that dgruss23 pointed out that the distance result obtained by using only two determinations, the van Leeuwen study and the An study, gave a significantly different distance modulus for the LMC than that from the HKP. You then went on to show that a weighted average for the LMC distance modulus from 15 determinations gave a value close to that from the HKP. My conclusion from this is that many observations are needed to get an accurate result, in this case a weighted average value with sigma = +/- 0.01.
(3) My understanding of the earlier discussion is that dgruss23 argues that using a distance determination from only one galaxy in a cluster is not enough to get an accurate measure of the cluster distance. I thought that you argued that using one per cluster is enough. So it occurred to me to ask why one determination is enough in the galaxy cluster case, but one is not enough in the LMC calculation.

Note: this is a question, not some kind of challenge. Please refer to my point (1) above before responding.

Zahl
2007-Sep-09, 09:40 PM
1) van Leeuwen's 18.39 ± 0.05 and An's 18.34 ± 0.06 do not differ significantly from HKP's 18.50 ± 0.10. This has already been pointed out in this thread and An et al. point it out themselves several times in their paper, even in the abstract.

2) HKP got their 18.50 ± 0.10 from over 30 different determinations (7 different methods), not from a single determination. This is described in Freedman's paper and quoted by Nereid in this thread. No one or two determinations can trump this record.

3a) Question: Why is one determination enough in the galaxy cluster case, but one is not enough in the LMC calculation? Answer: They involve entirely different methods, observables and physics and comparing them doesn't make any sense.

3b) I'm not sure what "not enough" is supposed to mean in the above question. The quoted error is smaller in any modern LMC distance determination than in a single SBF distance determination.

4) I have already shown quantitatively why it makes little difference where the SBF galaxy is in the cluster.

TomT
2007-Sep-10, 03:34 AM
I have already shown quantitatively why it makes little difference where the SBF galaxy is in the cluster.

I think you have stated that the distance to a galaxy in a cluster is determined by some method such as SBF. Then the redshifts of many galaxies in the cluster are determined and averaged to get the mean redshift, and thus velocity, for the cluster. From these H0 is determined. So my question boils down to: why don't you also calculate the distance to each galaxy for which you have a redshift, and then find the mean distance to use for the cluster distance? Wouldn't you get a more accurate value using the mean distance to calculate H0?

Nereid
2007-Sep-10, 01:36 PM
[snip]
In particular, Table 12 (I'm not going to try to reproduce it here) and Figure 3.

My impression is that the biggest single aspect missing from the otherwise good overview in the OP is an analysis of what Freedman et al. call uncertainties and errors.

Starting with "Error (random)", I think it is pertinent to ask how much the OP's summary in the form of "only {x} used ..." is blind to the frequentist, Bayesian, and Monte Carlo analyses which are reported in the Freedman et al. paper. Specifically, in the absence of any alternative analyses of the random error, is it reasonable to ignore such comments (in the OP)?

Moving on to the outlier in the HKP paper (the FP): a snippet from Figure 3 may serve as an appropriate sound bite - "The systematic uncertainties for each method are indicated by the horizontal bars near the peak of each Gaussian" - the horizontal [FP] bar overlaps the horizontal bars of each of the four other methods.

The point of this thread is to discuss whether or not H0 could be in the 80's. Do you see the above as something that establishes H0 could not be in the 80's?

Your point about the overlap of the FP uncertainty is very valid, but keep in mind - as I explained to Zahl - the fundamental plane result is not why I have suggested H0 could still be in the 80's.

Tim Thompson (post #4 (http://www.bautforum.com/1030661-post4.html) "It certainly appears, based on these plotted data, that 84 km/sec/Mpc is an unreasonably high value, and significantly unlikely.") and StupendousMan (post #59 (http://www.bautforum.com/1044089-post59.html) "It is possible that H0 could be in the 80s. Unlikely, in my opinion, with the current weight of evidence against it, but possible.") have both addressed the question here (and in the OP) adequately, I think.

More interesting, to me anyway, is a somewhat different question - how to adequately convey the "±" parts of the many research papers' results?

I raised this already, in post #70 (http://www.bautforum.com/1044780-post70.html), and again in post #84 (http://www.bautforum.com/1047005-post84.html).

This is not quite the same as the couple of pages of posts earlier, on the details of one or two particular methods; rather, it is about:

-> how to convey both the estimates of random and systematic error in meta-analyses, in a way that is neither misleadingly precise nor an understatement of their strength

-> how to assess the strengths and weaknesses of meta-analyses.

In post #172 (http://www.bautforum.com/1065503-post172.html), Zahl answered a narrower question of mine:

What approaches to combining many independent estimates are robust?

If the estimates are independent and the authors did not screw up when calculating their errors, then simply taking a weighted average will be robust.

The estimates of LMC distance provide, IMHO, a good example to look at this more deeply.

That there are plenty of independent estimates* is clear.

What may not be so clear is:

- to what extent are the reported errors calculated consistently?

Or, how necessary is it to look, in some detail, at how each paper's author(s) calculated the stated errors? Many will, I suspect, assume that the random errors reported are calculated consistently (across papers). However, is that the case for reported systematic errors?

- to what extent do independent estimates using the same method share common systematic errors?

Cepheids, for example, or RR Lyrae: what common assumptions are made in all papers using Cepheids? to what extent do those papers differ, in the input parameters, especially the estimated errors on those inputs?

- to what extent is it possible to compare (estimates of) systematic errors in independent research using different methods?

I suspect it's not really ...

- are there techniques one could use to explore how differences between the various independent estimates impact a meta-analysis' conclusions?

This is one thing that I think the Freedman et al. HKP paper did do. It's also an aspect that hasn't really been looked at, so far, in this thread.

Note that these questions are not unique to research into H0.

*Let's assume that none of the authors got their sums wrong when estimating their reported errors.

Zahl
2007-Sep-10, 08:39 PM
I think you have stated that the distance to a galaxy in a cluster is determined by some method such as SBF. Then the redshifts from many galaxies in the cluster are determined and an average is calculated to get the mean redshift, and thus velocity, for the cluster. From these H0 is determined. So my question boils down to, why dont you also calculate the distance to each galaxy for which you have a redshift, and then find the mean distance to use for the cluster distance. Wouldn't you get a more accurate value using the mean distance to calculate H0?

No. See my calculation earlier in this thread.

dgruss23
2007-Sep-10, 08:49 PM
I'd like to take credit for this, but I can't ...

As it may be that dgruss23 won't see this unless someone else quotes it, I'm quoting it.

I will, however, re-state my last question:

What approaches to combining many independent estimates are robust?

Are you sure you want to take credit for this?:


To get a more balanced view of recent LMC distance determinations than dgruss' cherry picking, here's a post HKP review from Alves (2004) who found a distance modulus of 18.50 ± 0.02 as a weighted average of 14 studies.

http://adsabs.harvard.edu/abs/2004NewAR..48..659A

I found these post Alves determinations from ADS:

18.39 ± 0.05 van Leeuwen, cepheids 2007
18.48 ± 0.03 Ngeow, cepheids 2007
18.40 ± 0.04 Grocholski, red clump stars 2007
18.34 ± 0.06 An, cepheids 2007
18.48 ± 0.02 McNamara, d Scuti stars 2007
18.40 ± 0.05 Benedict, cepheids 2007

18.41 ± 0.10 2006ApJ...652.1133M, cepheids & maser distance to NGC 4258
18.50 ± 0.05 2006MmSAI..77..261S, cepheids
18.46 ± 0.03 2006MmSAI..77..214D, RR Lyrae
18.54 ± 0.02 2006ApJ...642..834K, cepheids
18.40 ± 0.04 2006PhDT........14G, populous clusters

18.56 ± 0.04 2005ApJ...627..224G, cepheids
18.39 ± 0.05 2005A&A...434.1077M, Eclipsing Binary
18.51 ± 0.02 2005A&A...434.1077M, Eclipsing Binary
18.32 ± 0.08 2005tdug.conf..707R, RR Lyrae

The errors given are the random errors only. There have been four 18.3x results and four 18.5x results in 2005-2007, others have been 18.4x. The weighted average of these 15 determinations is 18.48 ± 0.01 mag. This is in good agreement with the weighted average of 14 determinations (18.50 ± 0.02) that Alves found in 2004. Adopting 18.48 ± 0.01 mag as the LMC distance modulus would then slightly increase HKP's value for H0 to ~73 and cut down their errors, but I don't bother to calculate how much exactly.
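
The arithmetic of that review is the standard inverse-variance weighted mean; a minimal Python sketch using the 15 values quoted above reproduces the quoted 18.48 ± 0.01. (The dispute that follows is about which inputs belong in the average, not the formula.)

import math

# (distance modulus, quoted random error) for the 15 determinations listed above
data = [(18.39, 0.05), (18.48, 0.03), (18.40, 0.04), (18.34, 0.06),
        (18.48, 0.02), (18.40, 0.05), (18.41, 0.10), (18.50, 0.05),
        (18.46, 0.03), (18.54, 0.02), (18.40, 0.04), (18.56, 0.04),
        (18.39, 0.05), (18.51, 0.02), (18.32, 0.08)]

weights = [1.0 / err ** 2 for _, err in data]        # inverse-variance weights
mean = sum(w * mu for w, (mu, _) in zip(weights, data)) / sum(weights)
sigma = 1.0 / math.sqrt(sum(weights))                # formal error of the weighted mean

print("%.2f +/- %.2f" % (mean, sigma))               # -> 18.48 +/- 0.01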

Nereid, Zahl's made a serious mistake when calculating his "weighted average" and it illustrates a number of points I've patiently tried to make on this thread. And of course I see Zahl can't help but accuse me of "cherry picking" - which my citation of the van Leeuwen et al and An et al studies was not - but I need to explain what is wrong with what Zahl has done in order to make that clear.

Zahl's mistake is very simple - some of the cepheid studies he cited applied metallicity corrections and some did not. However, the studies that applied a metallicity correction also provided the LMC distance modulus without the metallicity correction. If I've remembered how to format a table correctly in this software, the table below illustrates this with the cepheid-based studies Zahl cited (those for which the actual source paper could be tracked down, given that he did not provide authors or recognizable journals for several of the studies).




Study                      no metallicity corr.   with metallicity corr.
------------------------------------------------------------------------
Benedict et al (2007)      18.50 +/- 0.03         18.40 +/- 0.05 *
Ngeow & Kanbur (2007)      18.48 +/- 0.03 *       N/A
An et al (2007)            18.48                  18.34 +/- 0.06 *
van Leeuwen et al (2007)   18.52 +/- 0.03         18.39 +/- 0.05 *
McNamara et al (2007)      18.48 +/- 0.15 *       N/A
Keller & Wood (2006)       18.54 +/- 0.02 *       N/A
Macri et al (2006)         N/A                    18.41 +/- 0.10 *
Gieren et al (2005)        18.56 +/- 0.04 *       N/A



Macri et al (2006) demonstrated the need for the metallicity corrections. So the appropriate method of using cepheids for the distance scale must include metallicity corrections. Note that there are two effects from the metallicity corrections: a lower distance modulus for the LMC by about -0.12 mag and a slight increase in the reported random uncertainty in the LMC distance modulus.
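
To put that ~ -0.12 mag in perspective: distance follows from a modulus as d = 10^((µ - 25)/5) Mpc, so the correction shrinks every Cepheid-calibrated distance by about 5 percent, and H0 = v/d rises by the same factor. A minimal sketch (the H0 = 72 baseline and 18.50 starting modulus are illustrative, not a recomputation of the HKP chain):

def dist_mpc(mu):
    """Distance in Mpc from a distance modulus mu = 5*log10(d / 10 pc)."""
    return 10 ** ((mu - 25.0) / 5.0)

ratio = dist_mpc(18.50 - 0.12) / dist_mpc(18.50)   # = 10**(-0.12/5) ~ 0.946
print(ratio)           # calibrated distances shrink by ~5.4%
print(72.0 / ratio)    # an H0 of 72 would rise to ~76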

In the table, the distance moduli used by Zahl (highlighted in red in the original post) are marked with an asterisk. Notice that 4 of the studies he cited utilized metallicity corrections and 4 studies did not. Also notice that there is a very small range in distance modulus within each column, but a clear systematic difference between the columns.

On procedural grounds, the values in the first column are wrong because they do not account for metallicity, as Macri et al (2006) demonstrated must be done. But Zahl's weighted average introduces another problem. Since the reported uncertainty is smaller for the cepheid distances without metallicity corrections, they are given a higher weight in Zahl's weighted average, and they skew the value toward the incorrect, metallicity-uncorrected values. Statistically weighting the studies by reported uncertainty in this instance drives the result away from the result derived from the best procedures!

The proper procedure would be for Zahl to calculate separate weighted averages with and without the metallicity corrections. There is also an important note to make about the McNamara et al (2007) study: they actually used delta Scuti stars, not cepheids. Delta Scutis are also known as "dwarf Cepheids" and have much shorter periods (hours rather than days). McNamara et al demonstrated that they appear to fit the same P-L relation as the cepheids, but made no metallicity corrections. Whether or not one chooses to include this study in the table does nothing to change my point.
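
A minimal sketch of those separate averages, using only the table entries that carry quoted errors and treating the quoted random errors as independent:

import math

def wmean(vals):
    """Inverse-variance weighted mean and its formal error."""
    w = [1.0 / e ** 2 for _, e in vals]
    mu = sum(wi * v for wi, (v, _) in zip(w, vals)) / sum(w)
    return mu, 1.0 / math.sqrt(sum(w))

# no metallicity correction (tabulated entries with quoted errors)
no_corr = [(18.50, 0.03), (18.48, 0.03), (18.52, 0.03),
           (18.48, 0.15), (18.54, 0.02), (18.56, 0.04)]
# with metallicity correction
with_corr = [(18.40, 0.05), (18.34, 0.06), (18.39, 0.05), (18.41, 0.10)]

print(wmean(no_corr))    # ~ (18.52, 0.01)
print(wmean(with_corr))  # ~ (18.38, 0.03)
# Mixed into one average, the smaller uncorrected errors dominate and pull
# the result toward ~18.5 - the skew described above.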

There is also the study by Moskalik & Dziembowski (2005), which was based upon two cepheids in the LMC. One of those cepheids gives 18.34 and the other gives 18.53. In Zahl's "weighted average" these individual stars count as much as any other study that utilized dozens to hundreds of cepheids. I leave it to the readers to decide what relevance that study should have.

Finally, there is the study of Grocholski et al (2007), which is independent of cepheids. They find an LMC distance modulus of 18.40 +/- 0.04. Keller & Wood (2006) pointed to a study by Hilditch et al (2005) that found an SMC distance modulus of 18.91 +/- 0.03, which, combined with the SMC-LMC differential distance modulus of +0.50 mag (Udalski et al 1999), implies an LMC distance modulus of 18.91 - 0.50 = 18.41. Benedict et al (2007) cite the cepheid-independent eclipsing binary distance modulus of Fitzpatrick et al (2003): 18.42 +/- 0.04.

So when you take the best cepheid procedures including metallicity corrections you get a distance modulus of ~18.40 - which is consistent with recent cepheid independent distances.

And you can see now why I was not cherry picking. van Leeuwen et al used improved Hipparcos parallax results and metallicity corrections. An revised the galactic cepheid scale using open clusters and applied metallicity corrections. I saw no reason to go back and cite studies that do not apply metallicity corrections and outdated procedures.

Finally, I would note that this illustrates another point I've repeatedly made in this thread regarding simply averaging the H0 results of a bunch of studies and claiming that it tells us where H0 is. Studies use different procedures and assumptions, and some of those procedures are better than others. Blindly averaging all those H0 results gets you a result of limited value, just as Zahl's blind average of a bunch of results abstract-searched from ADS or arXiv is erroneous. You have to actually dig into the papers, look at the procedures, and find the common approaches and the differing approaches.

Zahl
2007-Sep-10, 09:39 PM
Or, how necessary is it to look, in some detail, at how each paper's author(s) calculated the stated errors? Many will, I suspect, assume that the random errors reported are calculated consistently (across papers). However, is that the case for reported systematic errors?

Unfortunately some authors do not describe in sufficient detail how they got their errors and it is unclear if they contain systematics or not. I've looked into their derivations and in a couple of cases it appears that the quoted errors are highly dubious. E.g., McNamara et al. give 18.48 ± 0.02 where the error is the "standard deviation of the weighted average of the three above solutions" (18.46 ± 0.19, 18.48 ± 0.15 and 18.50 ± 0.22), but IMO this is a wrong way to give combined errors. It should be ± 0.10 or even higher if the "three solutions" are not independent. As a result I think the formal ± 0.01 for the combined error from 15 papers is too aggressive.
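
For what it's worth, combining the "three solutions" as independent inverse-variance-weighted measurements does give roughly the ±0.10 suggested above; a minimal sketch:

import math

sols = [(18.46, 0.19), (18.48, 0.15), (18.50, 0.22)]   # the "three solutions"
w = [1.0 / e ** 2 for _, e in sols]
mean = sum(wi * v for wi, (v, _) in zip(w, sols)) / sum(w)
err = 1.0 / math.sqrt(sum(w))            # combined error, assuming independence
print("%.2f +/- %.2f" % (mean, err))     # -> 18.48 +/- 0.10, not +/- 0.02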


to what extent do independent estimates using the same method share common systematic errors?

Good question, but very difficult to answer. E.g., van Leeuwen uses HST & re-reduced Hipparcos parallaxes for galactic Cepheids while Benedict uses HST FGS parallaxes only. So they don't have the exact same systematics even though both work with Cepheids, but the systematics can't be fully independent either.


- are there techniques one could use to explore how differences between the various independent estimates impact a meta-analysis' conclusions?

This is one thing that I think the Freedman et al. HKP paper did do. It's also an aspect that hasn't really been looked at, so far, in this thread.

I'm not quite sure what techniques you are referring to, but IMO HKP's calculation of the RMS dispersion and standard error from the means of the 7 different methods was not quite correct because it gave equal weight to a method that has been studied extremely well over many decades (Cepheids) and another for which only a few determinations were available (Miras). Moreover, this technique can't be used for the present compilation because TRGB, Miras and SN 1987A distances have not been published in recent years.

Nereid
2007-Sep-10, 11:28 PM
Are you sure you want to take credit for this?:

Nereid, Zahl's made a serious mistake when calculating his "weighted average" and it illustrates a number of points I've patiently tried to make on this thread. [snip]

Here's an extract of Alves (2004):
The distance indicators reviewed are the red clump, the tip of the red giant branch, Cepheid, RR Lyrae, and Mira variable stars, cluster main-sequence fitting, supernova 1987A, and eclipsing binaries.

Of the 'post-Alves' determinations, nine include either cepheid or delta scuti (per Zahl), six do not (and one has both).

Alves (2004) reports only three Cepheid-based distance estimates (out of 15).

Two things then: how should one consider An (2007) and van Leeuwen (2007)? Both are 'cepheid', but are just two of several.

So, to re-state:
What may not be so clear is:

- to what extent are the reported errors calculated consistently?

Or, how necessary is it to look, in some detail, at how each paper's author(s) calculated the stated errors? Many will, I suspect, assume that the random errors reported are calculated consistently (across papers). However, is that the case for reported systematic errors?

- to what extent do independent estimates using the same method share common systematic errors?

Cepheids, for example, or RR Lyrae: what common assumptions are made in all papers using Cepheids? to what extent do those papers differ, in the input parameters, especially the estimated errors on those inputs?

- to what extent is it possible to compare (estimates of) systematic errors in independent research using different methods?

I suspect it's not really ...

- are there techniques one could use to explore how differences between the various independent estimates impact a meta-analysis' conclusions?

This is one thing that I think the Freedman et al. HKP paper did do. It's also an aspect that hasn't really been looked at, so far, in this thread.

I'd like us to take a closer look at Freedman et al. (the 2001 final HKP paper); they considered this question - or at least some aspects of it - in some detail.

Zahl
2007-Sep-10, 11:43 PM
Zahl's mistake is very simple - some of the cepheid studies he cited applied metallicity corrections and some did not apply metallicity corrections.

It is not the business of a reviewer to change the results reported in the literature. The authors report the results that they think are correct, using corrections as they see appropriate and I report what I found. One could also ask why dgruss does not give his own weighted average. Is it because the result changes only a few hundredths of a magnitude if all Cepheid papers are metallicity corrected as in Macri? I will explore this in detail when I have decided what to do with those dubious errors I discussed earlier and will also add post HKP results from SN 1987A, TRGB, Miras and others if there are any.


(the studies for which the actual source paper could be tracked down given that he did not provide authors or recognizeable journals for several of the studies).

The codes I gave are the standard Bibliographic Codes ADS uses.


McNamara et al 18.48 +/-0.15

My review did not include such a result.


There is also the tudy by Moskalik & Dziembowski (2005) that was based upon two cepheids in the LMC. One of those cepheids gives 18.34 and the other gives 18.53. In Zahl's "weighted average" these individual stars count as much as any other study that utilized dozens to hundreds of cepheids. I leave it to the readers to decide what relevance that study should have.

This is a different and potentially very accurate method that utilizes rare triple mode Cepheids.

dgruss23
2007-Sep-11, 01:14 AM
It is not the business of a reviewer to change the results reported in the literature. The authors report the results that they think are correct, using corrections as they see appropriate and I report what I found.

You're avoiding admitting your error. Macri et al demonstrated the need for the metallicity corrections. Studies consistently find a distance modulus of ~18.50 without the metallicity correction and ~18.40 with it. I showed that with the table in my last post. Your weighted average is erroneous. You can find a weighted average for the distance determinations without the metallicity correction, or you can find one for those with the metallicity correction, but it is nonsense to mix both in a single weighted average, for the reasons I explained.

The metallicity correction is the better procedure, but uncertainty in the metallicity correction increases the typical distance modulus uncertainty from ~+/-0.03 mag to ~+/-0.05 mag. When you weight the averages using these uncertainties, you favor the non-metallicity-corrected distances, which in fact suffer from a systematic error due to metallicity effects. The end result is an incorrect LMC distance modulus.



One could also ask why dgruss does not give his own weighted average. Is it because the result changes only a few hundredths of a magnitude if all Cepheid papers are metallicity corrected as in Macri?

You're right - but not as you intend it. I see no reason to calculate a weighted mean for the cepheid distances that are not metallicity corrected, nor one for the metallicity-corrected distances. They are separate categories, and within each column one study has about the same uncertainty as the next - with a very small observed scatter anyway. You've erroneously attempted to find an LMC distance modulus by adopting the small reported errors of the distance estimates without the metallicity corrections. This gives you the absurd result in which the most recent findings - i.e. the need for metallicity corrections - are invalidated because of the smaller reported errors of the older (and incorrect) procedure (no metallicity correction).


I will explore this in detail when I have decided what to do with those dubious errors I discussed earlier and will also add post HKP results from SN 1987A, TRGB, Miras and others if there are any.

I thought it was the reviewer's job just to report, and not to change the results reported in the literature? Or so you said above.



The codes I gave are the standard Bibliographic Codes ADS uses.

Yes, providing author names would help. This one was unusual - I'd never seen it before, as it is a very obscure journal:

18.50 ± 0.05 2006MmSAI..77..261S, cepheids. I finally managed to find the abstract. (http://adsabs.harvard.edu/abs/2006MmSAI..77..261S)

Anybody care to predict whether or not this paper uses metallicity corrections? .... Ok, it doesn't, but that's not a surprise since they get a distance modulus of 18.50.



My review did not include such a result.

The error you gave was different. The +/- 0.15 was for a single star that they reported on in that study. When merged with previous results, they report an error of +/-0.02. Should we calculate a weighted mean with the earlier results and this paper as separate studies? Or should we replace the earlier results with this one?


This is a different and potentially very accurate method that utilizes rare triple mode Cepheids.

One of the two stars gives 18.34 and the other gives 18.53. Which of the two stars should we go with? And since they are cepheids are the metallicity corrections needed?

dgruss23
2007-Sep-11, 01:17 AM
Here's an extract of Alves (2004): [snip]

Of the 'post-Alves' determinations, nine include either cepheid or delta scuti (per Zahl), six do not (and one has both).

Alves (2004) reports only three Cepheid-based distance estimates (out of 15).

Two things then: how should one consider An (2007) and van Leeuwen (2007)? Both are 'cepheid', but are just two of several.

So, to re-state: [snip]

I'd like us to take a closer look at Freedman et al. (the 2001 final HKP paper); they considered this question - or at least some aspects of it - in some detail.

If you have something specific to say about the Freedman et al paper, please feel free to share those thoughts.

Nereid
2007-Sep-12, 03:04 AM
First, Figure 5 is a nice visual summary of the published estimates of the LMC distance modulus.

In choosing what to include, for their analysis, Freedman et al. selected "[o]nly the single most recent revision from a given author and method".

Then comes the summarised conclusion: "At present, there is no single method with demonstrably lower systematic errors, and we find no strong reason to prefer one end of the distribution over the other."

The authors use three methods to combine all the inputs - cumulative probability distributions, Bayesian probability distributions, and "estimate the overall average and the standard error of the mean, based on a mean distance for different methods, and giving each technique unit weight."

So, generalising:
* multiple methods
* no single method with demonstrably lower systematic errors
* frequentist and Bayesian analyses agree
* take a mean of 'method means', giving each method equal weight
* compare this mean with the cumulative probability distribution statistics.
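
A minimal sketch of the third technique - the unit-weight mean of 'method means' and its standard error - with illustrative placeholder values, NOT the actual HKP inputs:

import math

# Illustrative method means only (made-up numbers for the sketch).
method_means = [18.50,   # e.g. Cepheids
                18.47,   # e.g. red clump
                18.45,   # e.g. RR Lyrae
                18.42,   # e.g. eclipsing binaries
                18.54]   # e.g. Miras

mean = sum(method_means) / len(method_means)     # each method gets unit weight
sd = math.sqrt(sum((v - mean) ** 2 for v in method_means)
               / (len(method_means) - 1))
sem = sd / math.sqrt(len(method_means))          # standard error of the mean
print("%.2f +/- %.2f" % (mean, sem))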

Section 8.1.2 briefly discusses how the zero-point uncertainty might be resolved, in the (then) near future. The authors are not optimistic.

dgruss23
2007-Sep-15, 11:29 AM
First, Figure 5 is a nice visual summary of the published estimates of the LMC distance modulus.

It should be clarified that "the published estimates" are from the period 1998-1999, as compiled by Gibson. The actual relevance of each study in that compilation to the current best estimate of the LMC distance modulus would need to be evaluated.

Freedman et al also state:


It is clear from the wide range of moduli compared to the quoted internal errors in Figure 5 that systematic errors affecting individual methods are still dominating the determinations of LMC distances.

I would note that Freedman et al confirm one of my points - you can estimate the internal systematic errors and the correct value could still fall outside those error bars. Obviously if this is the case there are still unknown systematics in the method. I raised this as a possibility with the Coma cluster SBF distance and was treated as a crackpot for it.


In choosing what to include, for their analysis, Freedman et al. selected "[o]nly the single most recent revision from a given author and method".

I think they have adopted the right approach. Sometimes researchers refine an earlier analysis. It might lead to a slight change in the best estimate, but it would be absurd to average the latest analysis with the earlier one. The newer analysis is the authors' best current estimate.


Then comes the summarised conclusion: "At present, there is no single method with demonstrably lower systematic errors, and we find no strong reason to prefer one end of the distribution over the other."

Now we have a more interesting issue where cepheid distances are concerned. Metallicity corrections are being employed as advocated by Macri et al (Macri being one of the HKP team members). I specifically argued that Zahl's weighted mean was erroneous because he included both distances with metallicity corrections and distances without them in his weighted average. What makes this worse is that the addition of metallicity corrections leads to a slight increase in the uncertainty - so using Zahl's approach, the less appropriate procedure (no metallicity correction) is actually given more weight.

What Zahl should do is calculate weighted means using no metallicity correction results and then using metallicity correction results.

What was also noted is that the most recent estimates with other Cepheid independent methods agree with the LMC distance when using Cepheid metallicity corrections.



The authors use three methods to combine all the inputs - cumulative probability distributions, Bayesian probability distributions, and "estimate the overall average and the standard error of the mean, based on a mean distance for different methods, and giving each technique unit weight."

These methods are fine, but in order to avoid GIGO one has to be concerned with the input distances that are being selected. Zahl's weighted mean was correctly calculated for the results he selected, but he failed to properly account for the cepheid metallicity issue and thus GIGO is front and center in his weighted average.

Note that I'm not suggesting the individual analyses are garbage. What I am saying is that if one does not recognize the difference between the procedure of correcting for metallicity and not correcting for metallicity, then a weighted average that includes both approaches starts with a flawed selection of input data and is a GIGO result.


Section 8.1.2 briefly discusses how the zero-point uncertainty might be resolved, in the (then) near future. The authors are not optimistic.

I don't know that I would say that. They point to future missions that will improve the distance scale. The only part that is not optimistic is that they suggest a definitive resolution will not occur "any time soon". It is now over 6 years later. What did they mean by any time soon?

At any rate, I think the issue they were addressing is that right now we'd say the LMC distance they adopted is perhaps at the +/- 10% level. A significant improvement would be to get that to +/-5% uncertainty or better. That is where they would not be so optimistic.

At any rate, until those new missions fly, it is still possible to re-evaluate and revise the distance within the uncertainty the current data allow. That is what the An et al and van Leeuwen et al studies that have been discussed do. The newer studies suggest that unless we go sans metallicity corrections for Cepheids, the LMC distance modulus is ~18.40, not 18.50.

Jerry
2007-Sep-15, 11:05 PM
IAOTO there is a degree of foot-dragging in accepting the new cepheid distances to the MSC that include metallicity corrections; and the reason may be that any movement of H0 towards a greater value moves H0 away from the value that is in best agreement with the age of the universe as determined by CMB theorists. There is a danger of overconfidence when a value is locked in by more than one constraint, and the assumptions made in creating this parametric crossroads are quite broad. H0 should be adjusted to reflect the greater degree of certainty in the new Cepheid distances.

Zahl
2007-Sep-16, 02:02 PM
You're avoiding admitting your error. Macri et al demonstrated the need for the metallicity corrections. Studies consistently find a distance modulus of ~18.50 without the metallicity correction and 18.40 with the metallicity correction. I showed that with my table in my last post. Your weighted average is erroneous. You can find a weighted average for distance determinations without the metallicity correction or you can find the weighted average with the metallicity correction, but it is nonsense to weight an average with both for reasons I explained.

I looked into why there were no metallicity corrections in the three papers (Ngeow & Kanbur, Keller & Wood, Gieren et al., the fourth was a different method) and it turns out that the cepheids used by these authors were in the LMC itself. As metallicity correction is done relative to the LMC metallicity, there are no metallicity corrections in these papers by definition. The weighted average is valid as it stands with the caveats I gave.



I will explore this in detail when I have decided what to do with those dubious errors I discussed earlier and will also add post HKP results from SN 1987A, TRGB, Miras and others if there are any.

I thought it was the reviewers job just to report and not change the results reported in the literature? Or so you said above.

The physics used in the papers cannot be touched, but the errors must be derived consistently. Alves had to fix the errors in two papers in his review.



The codes I gave are the standard Bibliographic Codes ADS uses.

Yes, providing author names would help. This one - was unusual - never seen it before as it is a very obscure journal:

18.50 ± 0.05 2006MmSAI..77..261S, cepheids. I finally managed to find the abstract. (http://adsabs.harvard.edu/abs/2006MmSAI..77..261S)

One can simply search the ADS for 2006MmSAI..77..261S to fetch that paper.

Edit: I found yet another post-Alves LMC distance determination:

18.54 ± 0.02 Marconi & Clementini 2005, RR Lyrae

http://adsabs.harvard.edu/abs/2005AJ....129.2257M

Zahl
2007-Sep-16, 02:11 PM
And as for the Coma distance, dgruss, you were criticized for giving distance estimates without their errors in the usual crackpot fashion, not for the reason you give above.

dgruss23
2007-Sep-16, 03:24 PM
I looked into why there were no metallicity corrections in the three papers (Ngeow & Kanbur, Keller & Wood, Gieren et al., the fourth was a different method) and it turns out that the cepheids used by these authors were in the LMC itself. As metallicity correction is done relative to the LMC metallicity, there are no metallicity corrections in these papers by definition. The weighted average is valid as it stands with the caveats I gave.

From van Leeuwen et al (2007)


Combining the LMC PL(W) relation (equation 5) with our derived value of y(-2.58) gives directly the true modulus of the LMC uncorrected for metallicity effects. We thus find a modulus of 18.52 +/-0.03. Adopting the results of Andrievsky et al (2002) and Sakai et al (2004), as discussed by S2006 the LMC Cepheids are metal deficient by delta[O/H] = 0.26 on the "Te" abundance scale. As already noted Macri et al (2006) found a metallicity effect, applicable to our PL(W) results of -0.49(+/-0.15) mag/dex. Applying this leads to a metallicity corrected LMC modulus of 18.39 +/-0.05.

From Benedict et al (2007):


Note that none of the LMC distance moduli derived above (Table 15) have metallicity corrections applied. Macri et al (2006) demonstrate that a metallicity correction is necessary when comparing metal-rich Cepheids with metal-poor Cepheids in NGC 4258.

(snip)

Returning to the issue of the true distance modulus to the LMC, our lowest error estimate is derived from the OGLE photometry (Section 6.3.2.4, OGL: m-M = 18.50 +/- 0.04). Combined with the metallicity correction (-0.10+/-0.03 magnitude) we obtain an LMC modulus of 18.40 +/-0.05.

Note that one of the studies from Table 15 is the Gieren et al study!

From An et al (2007):


As shown in Figure 20, our best fit solution yields (m-M)0 = 18.34 +/-0.06 +/-0.16 (P-L zero point).
... (snip) ...
As in the NGC 4258 case the zero point error of the Galactic P-L relations dominates the combined error in the LMC distance modulus. Without metallicity corrections, we would derive (m-M)0 = 18.48.

As I've already shown with the table in the earlier post - the same studies that report the LMC distance modulus with metallicity corrections also report (except for Macri et al) the distance modulus without the metallicity corrections. The results they get without the metallicity corrections are in line with the results the studies such as Gieren et al get without metallicity corrections.

This is an undeniable aspect of current research into the LMC distance using Cepheids. And it is cleanly shown in the table. As I also pointed out, the uncertainty increases slightly (~+/-0.03 to +/- 0.05 mag) when you add the metallicity corrections.

The mistake you have made with your weighted average is to lump the metallicity corrected and the non-metallicity corrected distances into the same weighted mean. This is further compounded by the fact that the distances without metallicity corrections have smaller reported errors - but contain a systematic effect from the failure to correct for metallicity.

This mistake is no different from calculating a weighted mean value for the Hubble Constant that incorporated studies that used different values for the LMC distance modulus to fix the zero point of the distance scale. The situation is relatively clean now. I believe most studies adopt the Freedman et al distance modulus to the LMC. Thus studies can be directly compared. However, if a study wishes, they can point to the van Leeuwen et al results (for example) and point out that if their revision to the LMC distance is adopted, then all distance moduli are reduced by 0.12 mag, with a resulting increase in H0. But you would not want to include such a revised value of H0 in a weighted mean with the studies that adopted the Freedman et al distance scale.

And for the exact same reason you should not calculate a weighted mean by including LMC distances that are sometimes metallicity corrected and sometimes not metallicity corrected.



The physics used in the papers cannot be touched, but the errors must be derived consistently. Alves had to fix the errors in two papers in his review.

Just as Cepheid distances should be reported consistently - either with or without metallicity corrections, not a mixing of both.


One can simply search the ADS for 2006MmSAI..77..261S to fetch that paper.

I did fetch the paper - normally I search by author name or journal when I recognize the journal. This journal is obscure and so an author name would've made it easier to find the paper. That's all.

dgruss23
2007-Sep-16, 03:40 PM
And as for the Coma distance, dgruss, you were criticized for giving distance estimates without their errors in the usual crackpot fashion, not for the reason you give above.

No - I wasn't criticized. I was called/compared with crackpots. Criticism would be if you said something like this:

"dgruss, you didn't provide the uncertainty on that Coma cluster distance, could you please do so and provide uncertainty on any numbers you're giving in the future?"

That is an example of criticizing politely and leads to a much better level of discourse.

As for the specific instance, this was the quote you were responding to:


"Yes, all galaxies within a cluster should have the same cosmological redshift, but they're not all at the exact same distance due to the depth effect. Since you have been advocating that you can calculate H0 from a single galaxy within a cluster, it is not clear to me that you understand the potential influence of the depth effect on the H0 calculation from an individual galaxy within a cluster.

The Hubble relation is linear. For the Coma cluster the individual galaxy distances calculated from the Tully-Fisher relation range from ~70 Mpc to ~100 Mpc. The HKP adopted 7143 km s-1 for the Coma cluster. If I only pick one galaxy in the cluster to calculate H0, then if it is the 70 Mpc galaxy I get H0=102.0. If my single galaxy happens to be the galaxy at 100 Mpc I get H0=71.4.

Again, that is why we take multiple galaxies to get both the redshift and the distance for the cluster"

And this was your response:


Again you are giving distance estimates without their errors in the usual crackpot fashion. I really don't understand why you repeatedly mislead BAUT readers like this. If you had ever taken a freshman physics laboratory course you would have been taught that just giving a result without its error is useless.

Of course part of the absurdity of this reaction on your part is that the numbers I provided were clearly not a "result"; they were a generalization about the Coma cluster. TFR distances range from ~70 to ~100 Mpc. No actual distances were being provided, so there were no uncertainties to report.

And as for why I didn't give the uncertainty in the distances and measurements prior to that point - it is rarely the case on this board that people ask for the uncertainty in the numbers. If you wanted me to provide the errors for all distances, measurements ... all you had to do was ask. It was not necessary for you to throw in yet another of your infamous insults.
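
A minimal sketch of the arithmetic in the quoted Coma passage (numbers as quoted: the HKP's adopted 7143 km/s and the ~70-100 Mpc TFR range):

v_coma = 7143.0              # km/s, the HKP's adopted velocity for Coma
for d in (70.0, 100.0):      # Mpc, the quoted range of single-galaxy TFR distances
    print(d, v_coma / d)     # H0 = v/d -> 102.0 and 71.4 km/s/Mpc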

Zahl
2007-Sep-16, 06:20 PM
dgruss, you lack even the most basic knowledge of astrophysics research, such as how metallicity corrections are done. Benedict, van Leeuwen and An in their respective papers work with classical Milky Way and NGC 4258 cepheids and as such their papers require metallicity corrections. Benedict et al. for example got their metallicity correction of 0.10±0.03 mag dex-1 by "taking the weighted mean of the Kennicutt and Macri [metallicity correction] values and using the difference in metallicity of LMC and Galactic Cepheids" (0.36 dex) as explained in section 6.4.1 of their paper. Ngeow, Keller and Gieren work with LMC cepheids and there will be no metallicity corrections to their results by definition. The true distance modulus is defined (see Equation [5] in section 3.3 of the final Freedman et al. HKP paper) as

µ0 = µV - R(µV - µI) + δµz

where δµz is the metallicity correction and is defined as δµz=γVI([O/H]-[O/H]LMC). Macri et al. found that γVI = -0.29±0.09±0.05 mag dex-1. However, if [O/H]=[O/H]LMC as in Ngeow, Keller and Gieren papers, then the metallicity correction term δµz is zero. Got that?
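
A worked instance of that correction term, using the van Leeuwen et al numbers quoted earlier in the thread (the PL(W) metallicity slope of -0.49 mag/dex from Macri et al, not the VI-band γVI = -0.29 above, and an LMC metal deficiency of 0.26 dex):

mu_uncorrected = 18.52   # van Leeuwen et al LMC modulus before the correction
gamma_W = -0.49          # mag/dex, Macri et al slope as applied to PL(W)
delta_OH = 0.26          # dex by which the LMC Cepheids are metal-deficient

delta_mu = gamma_W * delta_OH       # metallicity correction term, ~ -0.13
print(mu_uncorrected + delta_mu)    # -> ~18.39, the corrected modulus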

TomT
2007-Sep-16, 06:34 PM
Originally Posted by dgruss23
"Yes, all galaxies within a cluster should have the same cosmological redshift, but they're not all at the exact same distance due to the depth effect. Since you have been advocating that you can calculate H0 from a single galaxy within a cluster, it is not clear to me that you understand the potential influence of the depth effect on the H0 calculation from an individual galaxy within a cluster.

The Hubble relation is linear. For the Coma cluster the individual galaxy distances calculated from the Tully-Fisher relation range from ~70 Mpc to ~100 Mpc. The HKP adopted 7143 km s-1 for the Coma cluster. If I only pick one galaxy in the cluster to calculate H0, then if it is the 70 Mpc galaxy I get H0=102.0. If my single galaxy happens to be the galaxy at 100 Mpc I get H0=71.4.

Again, that is why we take multiple galaxies to get both the redshift and the distance for the cluster"

Hi dgruss23,

Do you have the error bars associated with the 2 Coma Cluster galaxies listed above? It may be helpful to know this to get a better estimate of the error resulting from using only one galaxy for the cluster distance.
TomT

Zahl
2007-Sep-16, 06:40 PM
No - I wasn't criticized. I was called/compared with crackpots. Criticism would be if you said something like this:

"dgruss, you didn't provide the uncertainty on that Coma cluster distance, could you please do so and provide uncertainty on any numbers you're giving in the future?"

Giving distance estimates - or any measured quantities, for that matter - without their errors is one of the gravest mistakes one can make in science, and such numbers should never be trusted. Comparing such estimates and suggesting there is a difference is crackpottery. Plain and simple.

dgruss23
2007-Sep-16, 08:00 PM
dgruss, you lack even the most basic knowledge of astrophysics research, such as how metallicity corrections are done.

These sorts of claims against your opponents are better followed by correct statements than more misunderstanding. But this seems to be your approach. When shown to be wrong you immediately claim I'm ignorant, a crackpot, or whatever other irrelevant nonsense you can come up with and then proceed to ignore the salient points that so clearly indicate your error.



Benedict, van Leeuwen and An in their respective papers work with classical Milky Way and NGC 4258 cepheids and as such their papers require metallicity corrections. Benedict et al. for example got their metallicity correction of 0.10±0.03 mag dex-1 by "taking the weighted mean of the Kennicutt and Macri [metallicity correction] values and using the difference in metallicity of LMC and Galactic Cepheids" (0.36 dex) as explained in section 6.4.1 of their paper. Ngeow, Keller and Gieren work with LMC cepheids and there will be no metallicity corrections to their results by definition.

Well then apparently Benedict et al (2007) lack "the most basic knowledge of astrophysics research" as well because they specifically point out:


Note that none of the LMC distance moduli derived above (Table 15) have metallicity corrections applied. Macri et al demonstrate that a metallicity correction is necessary by comparing metal-rich Cepheids with metal poor cepheids in NGC 4258.

The Gieren et al study is one of those studies in Table 15. Why would they mention this - and then go on to derive a metallicity corrected distance to the LMC - if the metallicity corrections were of no relevance?

It is also worth noting that the Gieren et al analysis is not completely independent of the galactic cepheids. If you read section 4 of their paper you will see that they identify a systematic difference in the distances of long and short period cepheids in the LMC. In order to correct this they go back to the galactic cepheids:


We therefore investigate the recalibration of the relation between the p-factor and the period sufficient to reconcile the short and long period Cepheid distances. We do this by i) demanding that the period dependence of the distance moduli in Fig. 9 disappears and ii) demanding that the mean difference between the observed ISB distances and ZAMS distances to Galactic cluster Cepheids becomes zero at the same time. The use of Galactic cluster Cepheids seems the most reasonable approach to fix the zero point of the p-factor law in a solid empirical way.

If you start fixing the LMC distance by using Galactic cepheids, you once again are faced with the metallicity issue - a well established fact of the cepheid distance scale.

TomT
2007-Sep-16, 08:00 PM
Quote:
Originally Posted by dgruss23
No - I wasn't criticized. I was called/compared with crackpots. Criticism would be if you said something like this:

"dgruss, you didn't provide the uncertainty on that Coma cluster distance, could you please do so and provide uncertainty on any numbers you're giving in the future?"



Giving distance estimates, or any measured quantities for that matter, without their errors is one of the gravest mistakes one can make in science, and such numbers should never be trusted. Comparing such estimates and suggesting there is a difference is crackpottery. Plain and simple.

Zahl,
Didn't you notice that the dgruss23 statement above isn't arguing against the importance of error bars, it is showing that there is a polite way to make your statement without the ad hominems. If you don't realize it yet, your ad hominems detract from your credibility, are unscholarly, and make the reader just want to ignore you, regardless of the merit of what you say.

TomT

dgruss23
2007-Sep-16, 08:03 PM
Hi dgruss23,

Do you have the error bars associated with the 2 Coma Cluster galaxies listed above? It might be helpful to know this in order to get a better estimate of the error that results from using only one galaxy for the cluster distance.
TomT

Hi Tom, No - because I was not referring to specific galaxies - only to the range of distances in the Coma cluster. The point was that if you select a single galaxy in a cluster with a 30 Mpc depth, you could have a significant error in H0 if that galaxy is not near the center of the cluster.

dgruss23
2007-Sep-16, 08:06 PM
Giving distance estimates, or any measured quantities for that matter, without their errors is one of the gravest mistakes one can make in science, and such numbers should never be trusted. Comparing such estimates and suggesting there is a difference is crackpottery. Plain and simple.

Zahl, I thought I made myself quite clear. I'm well aware of that - but on BAUT people don't often demand the error bars. Since you've made it clear you like to see them, I've been providing them for you.

I would also note that the focus of much of the discussion at that time was your own incorrect understanding of how the HKP found the distance and redshift of the SBF clusters. You never did admit those misunderstandings. It is also crackpottery to persist in your mistakes.

Zahl
2007-Sep-17, 12:18 AM
Zahl:
Benedict, van Leeuwen and An in their respective papers work with classical Milky Way and NGC 4258 cepheids and as such their papers require metallicity corrections. Benedict et al. for example got their metallicity correction of 0.10±0.03 mag dex-1 by "taking the weighted mean of the Kennicutt and Macri [metallicity correction] values and using the difference in metallicity of LMC and Galactic Cepheids" (0.36 dex) as explained in section 6.4.1 of their paper. Ngeow, Keller and Gieren work with LMC cepheids and there will be no metallicity corrections to their results by definition.

Well then apparently Benedict et al (2007) lack "the most basic knowledge of astrophysics research" as well because they specifically point out:

You blatantly ignore the distance modulus equation from the HKP paper that unambiguously says that there will be no metallicity correction for LMC cepheids and then refer to Table 15 from Benedict et al. that you do not understand. That is pure crackpottery.



Originally Posted by Benedict et al
Note that none of the LMC distance moduli derived above (Table 15) have metallicity corrections applied. Macri et al demonstrate that a metallicity correction is necessary by comparing metal-rich Cepheids with metal poor cepheids in NGC 4258.

The Gieren et al study is one of those studies in Table 15. Why would they mention this - and then go on to derive a metallicity corrected distance to the LMC - if the metallicity corrections were of no relevance?

The Gieren et al. study is not in Table 15. They are using Gieren's PLR slope with their own galactic cepheids (that must be metallicity corrected) to get the LMC distance moduli reported in Table 15 (18.52±0.06 etc.; Gieren et al. obtained 18.56±0.04 from their LMC cepheids that do not need metallicity corrections) as it specifically says in Table 15: "Zero points obtained by fitting the data plotted in Fig. 5, but with slopes constrained to those from G05, Per04, and OGLE." Fig. 5 plots Benedict's galactic cepheids, not Gieren's. You simply have no idea what you are talking about.

Zahl
2007-Sep-17, 12:25 AM
Hi Tom, No - because I was not referring to specific galaxies - only to the range of distances in the Coma cluster. The point was that if you select a single galaxy in a cluster with a 30 Mpc depth, you could have a significant error in H0 if that galaxy is not near the center of the cluster.

Clusters have a depth of a couple of Mpc in mainstream cosmology. ATM ideas of 30 Mpc cluster depths have no place in a non-ATM thread such as this one. No such evidence has ever been presented in this thread anyway.

Zahl
2007-Sep-17, 12:29 AM
Originally Posted by TomT

Please put me on your ignore list right now.

Nereid
2007-Sep-18, 01:06 PM
Can Old Galaxies at High Redshifts and Baryon Acoustic Oscillations Constrain H_0? - that's the title of a new preprint on arXiv (http://arxiv.org/abs/0709.2195).

Here's the abstract:
A new age-redshift test is proposed in order to constrain $H_0$ based on the existence of old high redshift galaxies (OHRG). As should be expected, the estimates of $H_0$ based on the OHRG are heavily dependent on the cosmological description. In the flat concordance model ($\Lambda$CDM), for example, the value of $H_0$ depends on the mass density parameter $\Omega_M=1 - \Omega_{\Lambda}$. Such a degeneracy can be broken through a joint analysis involving the OHRG and the baryon acoustic oscillation (BAO) signature. In the framework of the $\Lambda$CDM model our joint analysis yields a value of $H_0=71^{+4}_{-4}$ km s$^{-1}$ Mpc$^{-1}$ ($1\sigma$) with the best fit density parameter $\Omega_M=0.27\pm0.03$. Such results are in good agreement with independent studies from the {\it Hubble Space Telescope} key project and the recent estimates of WMAP, thereby suggesting that the combination of these two independent phenomena provides an interesting method to constrain the Hubble constant.

Perhaps there are even more independent methods for estimating H0, still in the pipeline?

Zahl
2007-Sep-21, 07:05 PM
That's not an independent method, it is model (LCDM) dependent as it says in the abstract.

Zahl
2007-Sep-21, 07:37 PM
I compiled yet another set of LMC distance determinations, this time using the HKP "mean of the mean" method (see 8.1. in http://adsabs.harvard.edu/abs/2001ApJ...553...47F). This involves the big seven distance estimation techniques (Cepheids, Eclipsing Binaries, SN 1987A, Tip of the Red Giant Branch, Red Clump stars, RR Lyrae stars and Mira stars), giving each technique unit weight. I searched the ADS for all papers published after January 2002 that had at least one LMC distance modulus determination done with any of these seven techniques. As in the HKP final paper, only one paper per author per method (the latest) was chosen, and if that paper had several results, I took the one that the author(s) identified as their best; otherwise I calculated the arithmetic mean. Finally I calculated the arithmetic mean of all determinations done with the same technique.

The results:

http://www.sci.fi/~draxl/lmc_distance.png

The .ods spreadsheet containing the references is here (http://www.sci.fi/~draxl/final_lmc_compilation.ods).

The result is again 18.48, with errors similar to those of my weighted average. Taking the mean of the three error determinations (weighted average, RMS dispersion, standard error) gives 18.48 ± 0.02, which I think is quite correct.
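
For what it's worth, the bookkeeping of the "mean of the mean" reduces to something like the Python sketch below; the per-paper moduli here are placeholders, not the actual compilation values in the spreadsheet.

from statistics import mean

# Placeholder LMC distance moduli grouped by technique (illustrative only).
moduli_by_technique = {
    "Cepheids": [18.50, 18.48, 18.52],
    "Eclipsing binaries": [18.44, 18.50],
    "SN 1987A": [18.51],
    "TRGB": [18.46, 18.48],
    "Red clump": [18.47, 18.49],
    "RR Lyrae": [18.44, 18.50],
    "Miras": [18.49],
}

# Each technique first gets its own arithmetic mean (unit weight per
# technique), then the technique means themselves are averaged.
technique_means = [mean(values) for values in moduli_by_technique.values()]
print("mean of the mean: %.2f mag" % mean(technique_means))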

It would be interesting to do a similar literature review for the other major error sources (WFPC2 zero point, bulk flows, crowding, etc.) to see how it would affect HKP's final H0 result and errors.

Nereid
2007-Sep-21, 08:34 PM
That's not an independent method, it is model (LCDM) dependent as it says in the abstract.

Yes, that's so.

However, the method does produce an estimate of H0, from inputs quite separate from those used in the HKP, the SZE, and gravitational lens time delays.

And it suggests a way to estimate this parameter, breaking the degeneracy in the analysis of WMAP data (per Spergel et al.).

dgruss23
2007-Sep-22, 03:22 AM
You blatantly ignore the distance modulus equation from the HKP paper that unambiguously says that there will be no metallicity correction for LMC cepheids and then refer to Table 15 from Benedict et al. that you do not understand. That is pure crackpottery.

No, I wasn't ignoring it, but as I pointed out, Gieren et al had to use galactic cepheids to fix the p-factor in their LMC distances. Did they account for the metallicity difference between the galactic and LMC cepheids when they did this?:



We therefore investigate the recalibration of the relation between the p-factor and the period sufficient to reconcile the short and long period Cepheid distances. We do this by i) demanding that the period dependence of the distance moduli in Fig. 9 disappears and ii) demanding that the mean difference between the observed ISB distances and ZAMS distances to Galactic cluster Cepheids becomes zero at the same time. The use of Galactic cluster Cepheids seems the most reasonable approach to fix the zero point of the p-factor law in a solid empirical way.


I didn't find where they make any mention of accounting for metallicity when they tied their LMC distances to the Galactic cepheids in this fashion. If you feel the metallicity corrections were still unnecessary, then please explain why.


The Gieren et al. study is not in Table 15. They are using Gieren's PLR slope with their own galactic cepheids (that must be metallicity corrected) to get the LMC distance moduli reported in Table 15 (18.52±0.06 etc.; Gieren et al. obtained 18.56±0.04 from their LMC cepheids that do not need metallicity corrections) as it specifically says in Table 15: "Zero points obtained by fitting the data plotted in Fig. 5, but with slopes constrained to those from G05, Per04, and OGLE." Fig. 5 plots Benedict's galactic cepheids, not Gieren's. You simply have no idea what you are talking about.

First, you're right. I missed that when I was reading the paper. They applied the G05 slopes to their galactic cepheids. Second, you're rude to suggest I have no idea what I'm talking about. Despite our disagreements, this is a much higher level discussion than the "no idea crackpottery" description you've applied. Very few threads on BAUT are at the level of discussion that we're having. And the discussion could be much more polite if you would stop with the rude behavior.

Third, are you going to actually take pause to consider the metallicity issue in anything other than a knee-jerk fashion? I showed quite clearly with the earlier table that the metallicity corrections lead to LMC distance moduli of ~18.40 +/-0.05 for the studies with metallicity corrections.

Both Gieren and Benedict are authors of this paper (http://xxx.lanl.gov/abs/0709.3255) which does not discuss the metallicity effect on the zero point but concludes that their study sets an upper limit of 18.50 for the LMC distance modulus before considering metallicity corrections - which we know lower the LMC distance modulus.


This important result shows that applying the well determined LMC slopes to galaxies of different metallicity contents is warranted. Possible metallicity effects on the zero-point of the relations are not studied in the present work, and may still prevent a precise determination of galaxy distances using Cepheids. In the case of the LMC, the true distance modulus (corrected for metallicity effects) appears to be smaller than 18.50.

And of course there are two metallicity issues here. This is not the first paper I've seen to suggest that the slope appears not to be significantly affected by metallicity. However, the studies that apply metallicity corrections are applying them to the zero point. And again we have authors suggesting that when the LMC distance is corrected for metallicity its distance modulus will be less than 18.50. Gieren seems to understand this despite the G05 paper that found 18.56 +/-0.04.

So Gieren seems to have signed off on the probability that the LMC distance modulus is lower than his 18.56 from G05.

And why do the studies that apply metallicity corrections get the same distances to the LMC as Gieren et al, Ngeow et al ... before they apply the metallicity corrections to their samples? If the metallicity issue is unimportant to the Ngeow and Gieren studies, then shouldn't all these other studies get a LMC distance modulus of ~18.60 before applying the metallicity corrections so that they arrive at a value of ~18.50 after correcting for metallicity?

But that is not what is happening. They get ~18.50 without the metallicity corrections and then with the metallicity corrections the distance modulus drops to 18.40, systematically offset from the Gieren and Ngeow type studies.

dgruss23
2007-Sep-22, 04:14 AM
Clusters have a depth of a couple of Mpc in mainstream cosmology. ATM ideas of 30 Mpc cluster depths have no place in a non-ATM thread such as this one. No such evidence has ever been presented in this thread anyway.

Wow, we only have to go as far as the Virgo cluster to demonstrate you're wrong on this one. Just the Cepheid distances alone to Virgo galaxies range from 14-22 Mpc. When you start applying Tully-Fisher distances, some of the Virgo galaxies may be at ~28 Mpc.

Tully&Pierce (2000) provided the data for their Tully-Fisher sample. As I noted earlier in this thread, with the HKP final Cepheid distances their zero point is revised from 21.57 to 21.50+/-0.23.

Applying this zero point to the Coma Cluster sample in TP00 and further restricting their sample by selecting only galaxies with inclinations between 45 and 80 degrees and rotational velocities in excess of 155 km s-1 results in distance moduli ranging from 34.18 to 35.03.

Sorry, TP00 do not provide the full set of data needed to calculate the uncertainty on these distances. Magnitude uncertainty is not provided, and the rotational velocity uncertainty is only provided for the raw uncorrected magnitudes. Uncertainty in the inclinations will be the greatest source of uncertainty in the distance moduli. Uncertainty from the rotational velocities will be less than +/-0.10 mag before inclination uncertainty is included. Assuming a 5 degree inclination uncertainty and the TP00 slope of 8.11 results in an uncertainty of ~+/-0.24 mag from the combined inclination and rotational velocity uncertainty - which is close to the observed scatter in the zero point.

At any rate, the distances of the Coma cluster members identified by TP00 range from 68.5 +/-8 Mpc to 101.4 +/-12 Mpc. This is quite a bit more than a "couple of Mpc". This is not "ATM" (more poor behavior on your part to characterize it that way). This is right from the TP00 paper.
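
For readers following the arithmetic, here is a short editorial sketch of the modulus-to-distance conversion behind those numbers, assuming the standard relation mu = 5 log10(d) + 25 (d in Mpc) and the ~0.24 mag uncertainty estimated above:

import math

def modulus_to_mpc(mu):
    # distance in Mpc from distance modulus: mu = 5*log10(d) + 25
    return 10 ** ((mu - 25.0) / 5.0)

for mu in (34.18, 35.03):
    d = modulus_to_mpc(mu)
    # linearized error propagation: delta_d = 0.2*ln(10) * d * delta_mu
    dd = 0.2 * math.log(10) * d * 0.24
    print("mu = %.2f -> d = %5.1f +/- %4.1f Mpc" % (mu, d, dd))

# prints ~68.5 +/- 7.6 Mpc and ~101.4 +/- 11.2 Mpc, consistent with the
# +/-8 and +/-12 Mpc figures quoted above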

Of course a better way might be to look at the distance distribution binned by 0.10 mag distance modulus bins. That looks like this for the 14 Coma cluster galaxies that meet the more restrictive inclination and Vrot criteria:

34.10-34.19 --> 1
34.20-34.29 --> 0
34.30-34.39 --> 1
34.40-34.49 --> 2
34.50-34.59 --> 0
34.60-34.69 --> 2
34.70-34.79 --> 3
34.80-34.89 --> 2
34.90-34.99 --> 2
35.00-35.09 --> 1

If we take the last 5 bins there are 10 galaxies ranging in distance from 86.7 to 101.3 Mpc - a range of ~15 Mpc.
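
The binning itself is easy to reproduce; here is a sketch with stand-in moduli chosen only to match the counts above (the individual TP00 values are not reproduced here):

import math
from collections import Counter

# stand-in distance moduli for the 14 Coma galaxies (illustrative values)
moduli = [34.18, 34.35, 34.44, 34.46, 34.62, 34.68, 34.71,
          34.74, 34.78, 34.82, 34.88, 34.93, 34.97, 35.03]

# bin by 0.10 mag of distance modulus, as in the table above
bins = Counter(math.floor(mu * 10) / 10 for mu in moduli)
for lo in sorted(bins):
    print("%.2f-%.2f --> %d" % (lo, lo + 0.09, bins[lo]))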

There are 5 galaxies in the Centaurus cluster with SBF distances in Tonry et al 2000 that have reported accuracy of +/-0.24 or better. These range in distance modulus from 32.38+/-0.18 to 32.98+/-0.24 - a distance range of almost 10 Mpc. The same study gives distances to Virgo cluster galaxies ranging from 13.5 to 27 Mpc.

Even if you want to argue that the bulk of this variation comes from errors in distance estimates (unlikely and you'd need to show that is the case), the above examples illustrate the large range in individual distances to galaxies in clusters you get when using Cepheids, the TFR, and the SBF method. And that is the reason I originally stated the SBF analysis of the HKP is "irrelevant" because only one galaxy was selected per cluster. However, I'm willing to revise that from "irrelevant" to "negligibly compelling". Of course I'm just a "crackpot" for suggesting that we need more SBF distances to individual galaxies for the more distant clusters the HKP used. Asking for more data - that's some real woo-woo science there.

dgruss23
2007-Sep-22, 04:16 AM
The .ods spreadsheet containing the references is here (http://www.sci.fi/~draxl/final_lmc_compilation.ods).



The spreadsheet doesn't open for me. Could you just paste in the references - or even just the author name and year of publication. Thanks.

dgruss23
2007-Sep-22, 04:19 AM
Yes, that's so.

However, the method does produce an estimate of H0, from inputs quite separate from those used in the HKP, the SZE, and gravitational lens time delays.

And it suggests a way to estimate this parameter, breaking the degeneracy in the analysis of WMAP data (per Spergel et al.).

If you empirically derive H0 from methods independent of the assumptions of the Lamda-CDM model, the value of H0 becomes an actual constraint on the parameter space of the cosmology.

Zahl
2007-Sep-23, 08:39 PM
Originally Posted by Zahl

You blatantly ignore the distance modulus equation from the HKP paper that unambiguously says that there will be no metallicity correction for LMC cepheids and then refer to Table 15 from Benedict et al. that you do not understand. That is pure crackpottery.

No, I wasn't ignoring it, but as I pointed out, Gieren et al had to use galactic cepheids to fix the p-factor in their LMC distances. Did they account for the metallicity difference between the galactic and LMC cepheids when they did this?:

You ignored it and you still keep ignoring it. The metallicity corrections in the four papers (Benedict, An, van Leeuwen, Macri) you made such a fuss about were all to the zero point of the Period-Luminosity relation. The equation you have repeatedly ignored unambiguously says that there are no metallicity corrections to the zero point of the PL relation if the Cepheids are in the LMC, thus there are no metallicity corrections in the other papers by definition. You were dead wrong. When I pointed this out, you started talking about the p-factor that is used to derive pulsation velocities of cepheids from their observed radial velocities. That is again pure crackpottery.



Originally Posted by Gieren et al 2005
We therefore investigate the recalibration of the relation between the p-factor and the period sufficient to reconcile the short and long period Cepheid distances. We do this by i) demanding that the period dependence of the distance moduli in Fig. 9 disappears and ii) demanding that the mean difference between the observed ISB distances and ZAMS distances to Galactic cluster Cepheids becomes zero at the same time. The use of Galactic cluster Cepheids seems the most reasonable approach to fix the zero point of the p-factor law in a solid empirical way.

I didn't find where they make any mention of accounting for metallicity when they tied their LMC distances to the Galactic cepheids in this fashion. If you feel the metallicity corrections were still unnecessary, then please explain why.

No, it is the other way round. If you think that some kind of metallicity correction should be applied to the p-factor, you must describe quantitatively how the p-factor equation should be modified and the physics behind this. It is not my job to show why some vague undefined idea is not necessary.


you're rude to suggest I have no idea what I'm talking about.

You can show right now that you know what you are talking about by describing how the p-factor equation should be modified to account for metallicity. If you can't do this, it will be yet another indication that you don't know what you are talking about.


Third, are you going to actually take pause to consider the metallicity issue in anything other than a knee-jerk fashion?

When are you going to take pause to consider the actual physics involved in the metallicity issue?


And why do the studies that apply metallicity corrections get the same distances to the LMC as Gieren et al, Ngeow et al ... before they apply the metallicity corrections to their samples?

Why do Benedict, van Leeuwen, An and Macri get ~18.40 after metallicity corrections? This is an open question in astrophysics and these professionals and their colleagues are actively researching it. One candidate explanation is that the metallicity correction as in Macri et al. is too aggressive (overestimated metallicity difference between local cepheids and LMC cepheids and/or too large correction term).

This explanation is supported by Rizzi et al. (2007) who write that "It is found that our zero-point is in fine agreement with the Cepheids scale for 15 comparison objects (μCeph − μTRGB = −0.01 ± 0.03). However, this good agreement does not require the currently assumed metallicity dependence in the Cepheids PL relation." 2007ApJ...661..815R. This is further supported by the fact that excluding all cepheid papers from the previous "mean of the mean" review does not change the LMC distance modulus - it is still 18.48 mag with similar errors.

Zahl
2007-Sep-23, 09:11 PM
Originally Posted by Zahl
Clusters have a depth of a couple of Mpc in mainstream cosmology. ATM ideas of 30 Mpc cluster depths have no place in a non-ATM thread such as this one. No such evidence has ever been presented in this thread anyway.

Wow, we only have to go as far as the Virgo cluster to demonstrate you're wrong on this one. Just the Cepheid distances alone to Virgo galaxies range from 14-22 Mpc. When you start applying Tully-Fisher distances, some of the Virgo galaxies may be at ~28 Mpc.

What I wrote in the above quote is a well known fact of astronomy and any good astronomy textbook (such as Fundamental Astronomy (http://www.amazon.com/Fundamental-Astronomy-H-Karttunen/dp/3540001794)) will give you the quantitative rationale behind it.

And what did I say about comparing distance determinations without errors? The Virgo cluster has a diameter of about 10 degrees according to NED and a distance of 15.8 ± 0.5 Mpc according to this (http://www.aoc.nrao.edu/~smyers/courses/astro12/distances.html) source. This corresponds to a diameter of about 3 Mpc. When the errors in your numbers are considered, they will not be in conflict with this figure.
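
For the record, the small-angle arithmetic behind that ~3 Mpc figure (editorial sketch):

import math

# a 10 degree angular diameter at 15.8 Mpc, small-angle approximation
distance_mpc = 15.8
theta_rad = math.radians(10.0)
print("diameter ~ %.1f Mpc" % (distance_mpc * theta_rad))  # ~2.8 Mpc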


At any rate, the distances of the Coma cluster members identified by TP00 range from 68.5 +/-8 Mpc to 101.4 +/-12 Mpc. This is quite a bit more than a "couple of Mpc". This is not "ATM" (more poor behavior on your part to characterize it that way). This is right from the TP00 paper.

That's not from the authors of that paper, it is your own attempt. ± 0.24 mag is too small an error, especially for such an old determination. Even modern Tully-Fisher distance determinations have errors of ~0.40 mag on average for individual galaxies that are at Coma distances (source: NED database). This would also be in line with the "15-20% RMS" error for individual galaxies given by T&P. Your error calculation would be more in line with both T&P and NED if the errors were added in quadrature, giving 0.34 mag, but even this is somewhat too precise. In any case, with proper error treatment the Coma depth will be compatible with the mainstream and my quote above. If you still want to argue for 30+ Mpc cluster depths you need to start a thread in the ATM section.
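
A one-line check of the quadrature figure (assuming, on my reading, that the two ~0.24 mag terms discussed above are the ones being combined):

import math

# adding two independent 0.24 mag error terms in quadrature
print("%.2f mag" % math.hypot(0.24, 0.24))  # -> 0.34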

Zahl
2007-Sep-23, 09:19 PM
The spreadsheet doesn't open for me. Could you just paste in the references - or even just the author name and year of publication. Thanks.

The spreadsheet contains more than 100 author names and 30+ ADS literature codes that I am not going to spam this thread with. If you absolutely can't open the Open Office (available for free here (http://www.openoffice.org/)) .ods file I'll see if I can save it in Excel's format.

ToSeek
2007-Sep-24, 02:22 PM
That is again pure crackpottery.

Zahl has been banned for a week for continuing to use this sort of indecorous language despite repeated warnings to cease doing so.

Jerry
2007-Sep-26, 05:03 AM
arXiv:0709.3924


In the last 20 years, much progress has been made and estimates now range between 60 and 75 km/s/Mpc, with most now between 70 and 75km/s/Mpc, a huge improvement over the factor-of-2 uncertainty which used to prevail. Further improvements which gave a generally agreed margin of error of a few percent rather than the current 10% would be vital input to much other interesting cosmology. There are several programmes which are likely to lead us to this point in the next 10 years.

New paper - NOS.

folkhemmet
2007-Sep-26, 05:49 AM
dgruss23,

Is your position that there is currently a paucity of high quality data--say, of the quality needed to "nail down" H_0--or is your position that a precise value of H_0 will remain forever elusive? Are you advocating a "Hubble constant uncertainty principle" of sorts? How would you suggest we, as you said, "fix the problems with the HKP results" so as to help reduce the error bars associated with this study's measurement of H_0?

Before you rudely insulted me a while back (because I brought up some interesting philosophical questions, and submitted a poem, concerning your style of questioning professional cosmology), I was merely pointing out that you should be more mindful in your posts, because it is easier to cherry-pick evidence and criticize others' work than to produce your own results. By analogy, it is easier to destroy a beautiful painting than it is to create one of your own. And, if you do believe that whopping uncertainties will forever plague astrophysical science, as your posts in this thread seem to indicate, and that we are now really no closer to understanding anything about the large scale Universe than we were during the neolithic, then what is the point of spending so much time and effort engaging in an ultimately futile endeavour? In other words, if astrophysics is akin to literary critical theory, then what is the point of engaging in high-tech expensive astrophysical research? But wait, I almost forgot, you are not actually engaging in the serious and challenging research, those idiotic professionals (who have it all wrong..yadda.. yadda) are the ones doing such research.

Maybe you should, if you feel so strongly that professional astronomers have got it all wrong, try to submit a paper clearly outlining why the HKP has gotten it wrong. If you are right, then the tide will eventually turn in your favor. If you are wrong, it won't-- and then you'll have to either reevaluate your position and resubmit or accept your defeat and move on. A pattern common among critics of mainstream astronomy is that they spend a lot of time criticizing professional astronomers' work in online forums and very little time coming up with meritable, or at the very least substantive, results of their own. I can't help but wonder whether at least part of the reason for this troubling phenomenon is that many of their quasi-ATM or full-fledged ATM criticisms are less careful/accurate than the analyses of the professional astronomers they so harshly criticize.

Jerry,

Thanks man. Yeah, I also was going to provide a link to the Jackson paper for this thread. You beat me to it! LOL.

Ari Jokimaki
2007-Sep-26, 06:05 AM
A pattern common among critics of mainstream astronomy is that they spend a lot of time criticizing professional astronomers' work in online forums and very little time coming up with meritable, or at the very least substantive, results of their own.
Once again you are (rudely) jumping to conclusions about people. dgruss23 has published several papers in peer reviewed journals (and this is quite commonly known in the BAUT forum), and from my personal experience with him, I'd say he spends more time on his research than in online forums criticizing the work of others.

I suggest you finally start to think twice before you post your opinions on other people. :mad:

folkhemmet
2007-Sep-26, 11:23 AM
Actually, Ari, my last post contained several legitimate questions which were not rudely posed. I guess you just chose to ignore the first paragraph of my last post in which I politely posed several questions to dgruss23 regarding his position on the subject of H_0. The next paragraph also posed several important philosophical questions having to do with the nature of astrophysical science vis-a-vis critics of the discipline.

Maybe you were unaware of the fact that dgruss23 was ruder to me than I ever was to him earlier in this thread, but I guess that's just fine in your book if some people are rude but it's not okay if the people who they are rude to respond. You should not continue to hold this double standard.

Also, Ari, where in this thread are there links to published results authored by dgruss23? Where are the links to most of the published results authored by BAUT members in these threads? I think most people in BAUT agree with me that the common pattern is for members to cite papers which support their hunches and then their opponents counter-cite with other papers.

It is fair to say that a pattern common among critics of mainstream astronomy is that they spend a lot of time criticizing professional astronomers' work in online forums and very little time coming up with meritable, or at the very least substantive, results of their own. I can't help but wonder whether at least part of the reason for this troubling phenomenon is that many of their quasi-ATM or full-fledged ATM criticisms are less careful/accurate than the analyses of the professional astronomers they so harshly criticize.

Let's give the professional astronomers some credit once in a while, as they are the ones providing us with a wealth of data and beautiful pictures. Their work is not perfect, nor is their work immune from scrutiny. However, the vast majority of these people are PHDs and have spent years honing their skills as observers and analysts; let's not deny the importance of experience. Only 1 in 10000 people, or something of that order, is a professional astronomer. Most of the critics of mainstream astronomy are not accomplished PHDs and do indeed spend more time leveling criticism than doing substantive research of their own-- that's just an indisputable fact. Most BAUT critics lack the proper credentials and experience with interpreting data and drawing conclusions from it relative to the professional astronomers. If you don't believe me, let's do a BAUT poll to see what percentage of BAUT members are actually professional astronomers.

Jim
2007-Sep-26, 01:02 PM
Hey, here's a radical idea! Instead of discussing how rude everyone is being, let's all politely stay on topic.

If someone seems to be stepping on toes, report him/her. Don't discuss it here, please.

Ari Jokimaki
2007-Sep-26, 04:50 PM
I guess you just chose to ignore...
I have no obligation to respond to all of your sayings. I just pointed out that if you're trying to use dgruss23 as an example of someone who doesn't do his own research and instead just criticizes the work of others here, then you're simply wrong. It doesn't go away by throwing accusations at him or me. My standards, apparently judged by which posts I have or have not responded to (there are hundreds of similar posts I haven't responded to), don't have anything to do with that.

Most of dgruss23's papers can't be linked to in this thread because they contain discussion about ATM subjects. I'll give a link to one older paper just as an example that the claimed papers are there: here it is (http://adsabs.harvard.edu/abs/2002ApJ...565..681R), and how about that, HKP is discussed there as well. Consider that in light of what you said in your post #224:


Maybe you should, if you feel so strongly that professional astronomers have got it all wrong, try to submit a paper clearly outlining why the HKP has gotten it wrong.
Maybe you should try to find out about things before you start making accusations about people.

folkhemmet
2007-Sep-29, 06:31 PM
Jim said: "Hey, here's a radical idea! Instead of discussing how rude everyone is being, let's all politely stay on topic"

Hi Jim,

Nice to meet you. Actually, I had several questions specifically devoted to the subject of this thread none of which were rudely posed to dgruss23. dgruss23 appears to be a decent scholar whom I respect, and I am more than willing to concede that he probably knows way more about astronomy than I do. Having said that, his more advanced knowledge does not preclude me from asking him questions about his position. I thank Ari for providing a link to his paper.

On the other hand, I still maintain that in general my assumption is correct--that is, I think most people in BAUT agree with me that the common pattern is for members to cite papers which support their hunches and then their opponents counter-cite with other papers. A lot of the threads are critical of the work done by professional astronomers, but the criticism is mostly not constructive, e.g. it is laden with radical skepticism and disbelief in the scientific method when it is used by professional astronomers, and it hardly ever (sometimes it does, and I would like to see more of this) says "here is how we can improve this or that measurement" or "here is a better way to study a given phenomenon." Mostly it's just more of the same: "astronomers are dead wrong, modern astrophysics is all wrong, they don't know what they are doing," etc. I doubt I am the only BAUT member who finds this behavior frustrating and tiresome. The fact is that most of the critics of mainstream astronomy are not accomplished astronomy PHDs and do indeed spend more time leveling criticism than doing substantive research of their own-- that's just an indisputable fact. Most BAUT critics lack the proper credentials and experience with interpreting data and drawing conclusions from it relative to the professional astronomers.

TomT
2007-Sep-30, 03:20 AM
The fact is that most of the critics of mainstream astronomy are not accomplished astronomy PHDs and do indeed spend more time leveling criticism than doing substantive research of their own-- that's just an indisputable fact. Most BAUT critics lack the proper credentials and experience with interpreting data and drawing conclusions from it relative to the professional astronomers.

Hi folkhemmet

I am not an accomplished PHD, but I do think that one of dgruss23's points in this thread makes complete sense, namely that you get a much more accurate distance estimate to a galaxy cluster center by taking the distance measurements to a number of galaxies in the cluster to estimate the cluster average than by taking the distance of only one. This proposition has been argued against vehemently by one of the mainstream participants. Maybe you can tell me where I am going wrong in the following reasoning, or Zahl can explain if he returns.

I voiced this opinion in the following exchange.

Quote:
Originally Posted by TomT
I think you have stated that the distance to a galaxy in a cluster is determined by some method such as SBF. Then the redshifts from many galaxies in the cluster are determined and an average is calculated to get the mean redshift, and thus velocity, for the cluster. From these H0 is determined. So my question boils down to this: why don't you also calculate the distance to each galaxy for which you have a redshift, and then find the mean distance to use for the cluster distance? Wouldn't you get a more accurate value using the mean distance to calculate H0?

Zahl: No. See my calculation earlier in this thread.

In reviewing the earlier calculation referenced, it amounted to claiming that the depth of galaxy clusters is only a couple of Mpc, and that, considering the great distance to clusters, this amount of depth is negligible.

Later we were presented with this information by the same person regarding the Virgo Cluster.

Zahl:

And what did I say about comparing distance determinations without errors? The Virgo cluster has a diameter of about 10 degrees according to NED and a distance of 15.8 ± 0.5 Mpc according to this source. This corresponds to a diameter of about 3 Mpc. When the errors in your numbers are considered, they will not be in conflict with this figure.

The "source" referred to is at:

http://www.aoc.nrao.edu/~smyers/courses/astro12/distances.html

Eight observations and calculation methods are given for galaxies in Virgo. The distances range from 14.9 +/- 1.2 Mpc to 21.1 +/- 3.9 Mpc. The distance of 15.8 +/- 0.5 Mpc quoted above and used by Zahl is the weighted average of the 8 observations with 1/sigma used as the weighting factor. So now Zahl is using the distance measurements of many galaxies to get the average for the cluster, just as dgruss23 claims is necessary, and just as I questioned him on in the quote above, to which he responded that this wasn't necessary. I have no idea if Zahl is an accomplished PHD in astronomy, but he contradicts himself here.

The reason why only one measurement or observation is insufficient can be illustrated by referring again to Zahl's source. If, for example, the measurement of 21.1 +/- 3.9 Mpc were chosen, this in conjunction with the Virgo velocity of 1142 +/- 61 km/sec would yield a value for H0 of 54.1, with a range of 43.2 to 69.9. So, in my mind, this verifies that distances to many galaxies in a cluster are required to get a meaningful value for the cluster center. Correct me if I am wrong.
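
A sketch of that range arithmetic (editorial; the inputs are the single distance measurement and the Virgo velocity from the linked page):

d, dd = 21.1, 3.9     # single distance measurement, Mpc
v, dv = 1142.0, 61.0  # Virgo cluster velocity, km/s

h0 = v / d
h0_lo = (v - dv) / (d + dd)  # slowest velocity with the farthest distance
h0_hi = (v + dv) / (d - dd)  # fastest velocity with the nearest distance
print("H0 = %.1f km/s/Mpc, range %.1f to %.1f" % (h0, h0_lo, h0_hi))
# -> H0 = 54.1 km/s/Mpc, range 43.2 to 69.9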

TomT

dgruss23
2007-Sep-30, 03:45 PM
Maybe you were unaware of the fact that dgruss23 was ruder to me than I ever was to him earlier in this thread, but I guess that's just fine in your book if some people are rude but it's not okay if the people who they are rude to respond. You should not continue to hold this double standard.

Let's start with the above. folkhemmet made some comments on this thread in post #34. I responded in post #35 - with a detailed and referenced explanation. folkhemmet responded with this in post #38:


Maybe I am reading him/her wrong, but I get the impression that dgruss23 thinks the Hubble constant is 80 or above, or, its actual value will forever remain elusive due to....

It's just not fair.
There's no way of getting at what's out there.
There's no objective truth.
Don't fool yourself mortal, forget about proof.
All analyses have their flaws.
Careful thought will only run up against nature's impervious walls!

There is a conspiracy afoot to suppress all alternatives.
It is easier to criticize than to produce your own results.
We are now really no closer to understanding anything about the large scale Universe than what we were during the neolithic.
But if astrophysics is akin to literary critical theory, then what is the point of engaging in expensive astrophysical research?
The cosmos is shrouded in mystery.
Let's be reluctant cosmologists and radical skeptics.
Cosmology has no direct benefit to humanity even close to research in the life sciences.
And if there really is some fundamental reason why we will never be close to understanding anything about the large scale properties of the Universe, then why should we trust the assumptions/methods behind the ATM Universe-students?

Or, alternatively, we could accept that astrophysicists are making significant progress assembling a giant "Universe story jigsaw" and some unforeseen technological breakthrough which may benefit life will come from the practice of astrophysics.

folkhemmet, you did not respond to any of the science I discussed but started in on this pseudo-philosophical accusation about my motives. In response I told you that if you were uninterested in discussing scientific evidence you could always ignore the thread (post #39). I stand by my response in that earlier post.

I'm not sure what response you expected from your "poem". Was I supposed to respond to each line?

However, if you have some arguments that you feel justify the comments I've quoted above as anything more than a juvenile attempt on your part to characterize my motives, then please feel free to explain yourself this time around. You didn't explain yourself several months ago when you posted those comments.

dgruss23
2007-Sep-30, 04:06 PM
dgruss23,

Is your position that there is currently a paucity of high quality data--say, of the quality needed to "nail down" H_0--or is your position that a precise value of H_0 will remain forever elusive?

Now this is more constructive than your July poem.

Neither. My position is simply that it is still possible that H0 could be in the 80's rather than the prevailing ~70 that is the standard used in current cosmological analysis. I do not believe that there is insufficient data to derive a reasonable value of H0. Nor do I subscribe to some weird philosophical notion that H0 is forever elusive.



Are you advocating a "Hubble constant uncertainty principle" of sorts?

No. What I've pointed out throughout this discussion is that researchers adopt methods with underlying assumptions - and that the resulting H0 value one gets is as reliable as the assumptions on which it is founded. For example, in the debate about the SBF distances, I argued that researchers should be cautious about the SBF H0 result of the HKP because only 6 SBF galaxies were used. The assumption in that instance was that the individual SBF galaxies were at the center (or nearly so) of their respective clusters. If that assumption is valid, then the SBF distances (and H0) are as good as the data uncertainty. If that assumption is invalid, then no matter how accurate the SBF distance, the H0 result is flawed.


How would you suggest we, as you said, "fix the problems with the HKP results" so as to help reduce the error bars associated with this study's measurement of H_0?

That depends upon the method in question. I've strongly recommended that where the SBF method is concerned, more than 1 SBF distance per cluster is needed.


Before you rudely insulted me a while back (because I brought up some interesting philosophical questions, and submitted a poem, concerning your style of questioning professional cosmology), I was merely pointing out that you should be more mindful in your posts, because it is easier to cherry-pick evidence and criticize others' work than to produce your own results. By analogy, it is easier to destroy a beautiful painting than it is to create one of your own. And, if you do believe that whopping uncertainties will forever plague astrophysical science, as your posts in this thread seem to indicate, and that we are now really no closer to understanding anything about the large scale Universe than we were during the neolithic, then what is the point of spending so much time and effort engaging in an ultimately futile endeavour? In other words, if astrophysics is akin to literary critical theory, then what is the point of engaging in high-tech expensive astrophysical research? But wait, I almost forgot, you are not actually engaging in the serious and challenging research, those idiotic professionals (who have it all wrong..yadda.. yadda) are the ones doing such research.

And here is where your mistaken attitude shines through. On what basis do you conclude that you actually know anything about what I'm doing? On what basis do you conclude that I have so little respect for professional researchers that I would call them "idiotic professionals"? Your attitude is exactly why I need to spend even less time here than the already limited time I spend. You have provided yet another great example of the big problem with BAUT - people making assumptions about others on BAUT rather than discussing ideas.


Maybe you should, if you feel so strongly that professional astronomers have got it all wrong, try to submit a paper clearly outlining why the HKP has gotten it wrong.

First, where have I advocated the position that the Hubble Key Project got it all wrong? All wrong about what? I've suggested that H0 could still be in the 80's. If H0 is to be in the 80's then there must be some unaccounted for systematics in the HKP final results. That is not the same thing as saying they "got it all wrong" and that is not what I believe. This is exactly why I told you to feel free to ignore the thread back in July. How many of these nonsense accusations and mischaracterizations of what I'm saying must I respond to? As many as you can dream up?



If you are right, then the tide will eventually turn in your favor. If you are wrong, it won't-- and then you'll have to either reevaluate your position and resubmit or accept your defeat and move on. A pattern common among critics of mainstream astronomy is that they spend a lot of time criticizing professional astronomers' work in online forums and very little time coming up with meritable, or at the very least substantive, results of their own. I can't help but wonder whether at least part of the reason for this troubling phenomenon is that many of their quasi-ATM or full-fledged ATM criticisms are less careful/accurate than the analyses of the professional astronomers they so harshly criticize.

And what is the reason for the troubling phenomenon on internet forums whereby people - rather than discuss evidence - would prefer to make assumptions about people's motives and create laundry lists of incorrect characterizations of said motives?

parejkoj
2007-Oct-01, 03:31 AM
Not to fan the flames (though some people could certainly stand to be a bit more polite), but this was just posted to astro-ph:

http://arxiv.org/abs/0709.4531

Jerry
2007-Oct-01, 03:34 AM
A Problem with the Clustering of Recent Measures of the Distance to the Large Magellanic Cloud

Bradley E. Schaefer
Astronomical Journal in press

http://xxx.lanl.gov/abs/0709.4531


.. Before the year 2001, the many measures spanned a wide range (roughly 18.1 < \mu < 18.8) with the quoted error bars being substantially smaller than the spread, and hence the consensus conclusion being that many of the measures had their uncertainties being dominated by unrecognized systematic problems. In 2001, the Hubble Space Telescope Key Project (HSTKP) on the distance scale made an extensive analysis of earlier results and adopted the reasonable conclusion that the distance modulus is 18.50+-0.10 mag, and the community has generally accepted this widely popularized value.

After 2002, 31 independent papers have reported new distance measures to the LMC, and these cluster tightly around \mu=18.50 mag. Indeed, these measures cluster too tightly around the HSTKP value, with 68% of the measures being within 0.5-sigma of 18.50 mag. A Kolmogorov-Smirnov test proves that this concentration deviates from the expected Gaussian distribution at a >3-sigma probability level. This concentration is a symptom of a worrisome problem. Interpretations considered include correlations between papers, widespread over-estimation of error bars, and band-wagon effects. This note is to alert workers in the field that this is a serious problem that should be addressed.

It looks like Dgruss is not alone in raising his hand on this issue. This sudden tightening of distance estimates about a published and 'preferred' mean does not bode well: It means objectivity has been lost.

How so? If the HSTKP had not published a value which is essentially a weighted mean, what are the odds that 31 different (and supposedly independent) teams would find the same value without knowing the answer beforehand? If a freshman chem lab produced these kinds of numbers, they would all be kicked out of school for collaborative cheating.

Zahl
2007-Oct-02, 09:54 PM
I do think that one of dgruss23's points in this thread makes complete sense, namely that you get a much more accurate distance estimate to a galaxy cluster center by taking the distance measurements to a number of galaxies in the cluster to estimate the cluster average than by taking the distance of only one.

True, but you are missing the point. The point is not to estimate the distance to a galaxy cluster center, but to estimate H0 as well as possible with a limited number of SBF measurements. One SBF measurement per cluster is the best course of action as I have quantitatively explained several times in this thread, but these explanations have fallen on deaf ears. Consider this practical example: there are six clusters at a distance of 60 Mpc with redshifts of 4300 km/s (corresponding to H0≈72 km/s/Mpc) and one SBF galaxy measurement is taken of each. Let's say the clusters have a diameter of 12 Mpc, their galaxy distributions are Gaussian and the SBF galaxies have random locations. A Gaussian random number generator gives the following distances for the six galaxies (mean=60 Mpc, SD=2 Mpc):

#1 - 63.27
#2 - 56.95
#3 - 61.46
#4 - 61.19
#5 - 61.50
#6 - 58.20

Our estimate for H0 is then (4300/63.27+4300/56.95+4300/61.46+4300/61.19+4300/61.50+4300/58.20)/6=71.25 km/s/Mpc. Working with the clusters themselves and doing a huge number of measurements to fix the distances to their centers would have given us 4300/60=71.67 km/s/Mpc for H0. I hope this finally clears it up.
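
An editorial sketch of the same toy experiment, for anyone who wants to rerun it with other seeds (the six draws above are one particular realization):

import random

random.seed(0)  # change the seed to get a different realization
cz, mean_d, sd = 4300.0, 60.0, 2.0  # km/s, Mpc, Mpc

# one Gaussian-scattered SBF galaxy per cluster, six clusters
distances = [random.gauss(mean_d, sd) for _ in range(6)]
h0 = sum(cz / d for d in distances) / len(distances)
print("H0 from 6 single-galaxy draws: %.2f km/s/Mpc" % h0)
print("H0 from exact cluster centers: %.2f km/s/Mpc" % (cz / mean_d))  # 71.67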


This proposition has been argued against vehemently by one of the mainstream participants.

No. I have argued that where the galaxies are located in their respective clusters has no practical effect on the final H0 estimation with six galaxies, one per cluster.


Later we were presented this information by the same person regarding the Virgo Cluster.

Zahl:

And what did I say about comparing distance determinations without errors? The Virgo cluster has a diameter of about 10 degrees according to NED and a distance of 15.8 ± 0.5 Mpc according to this source. This corresponds to a diameter of about 3 Mpc. When the errors in your numbers are considered, they will not be in conflict with this figure.

The "source" referred to is at:

http://www.aoc.nrao.edu/~smyers/courses/astro12/distances.html

Eight observations and calculation methods are given for galaxies in Virgo. The distances range from 14.9 +/- 1.2 Mpc to 21.1 +/- 3.9 Mpc. The distance of 15.8 +/- 0.5 Mpc quoted above and used by Zahl is the weighted average of the 8 observations with 1/sigma used as the weighting factor. So now Zahl is using the distance measurements of many galaxies to get the average for the cluster, just as dgruss23 claims is necessary, and just as I questioned him on in the quote above, to which he responded that this wasn't necessary. I have no idea if Zahl is an accomplished PHD in astronomy, but he contradicts himself here.

The reason why only one measurement or observation is insufficient can be illustrated by referring again to Zahl's source. If, for example, the measurement of 21.1 +/- 3.9 Mpc were chosen, this in conjunction with the Virgo velocity of 1142 +/- 61 km/sec would yield a value for H0 of 54.1, with a range of 43.2 to 69.9. So, in my mind, this verifies that distances to many galaxies in a cluster are required to get a meaningful value for the cluster center. Correct me if I am wrong.

Apples and oranges. The above was not about estimating H0 with SBF measurements, it was about showing why dgruss' idea that clusters have depths of 30+ Mpc is ATM. If you want to know a cluster diameter, you need to know its distance as well. As it turns out, the depths of the clusters offered as examples by dgruss himself are nowhere near 30 Mpc.

Zahl
2007-Oct-02, 10:24 PM
Not to fan the flames (though some people could certainly stand to be a bit more polite), but this was just posted to astro-ph:

http://arxiv.org/abs/0709.4531

There are several weaknesses in this pre-print. He uses multiple results per author per method, but if he wants to argue in favor of bandwagon effects he first needs to limit his sample to one paper per author to see whether any bias in the results is attributable to just one or a few authors/teams. But let's take his compilation at face value and look at a q-q plot of the best values from the 31 papers:

http://www.sci.fi/~draxl/q-q_plot_of_LMC_distances_from_Schaefer's_paper.png

This plot compares the distribution of the 31 best values with a theoretical normal distribution. If the values are normally distributed, they should follow the 45 degree line. It turns out that they do follow it, save for that one outlier, so we can conclude that there does not appear to be anything out of the ordinary with the reported best values. In particular we see no evidence of the "peakedness" that one could expect if the best values were biased toward HKP's 18.50. Unfortunately Schaefer does not explore the distribution of the best values at all in his pre-print, only the errors. I will look into that matter tomorrow.
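
For readers who want to reproduce this kind of plot, here is a sketch using synthetic stand-in values (the 31 best values from Schaefer's compilation are not reproduced here); scipy's probplot does the quantile-quantile comparison:

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# synthetic stand-in for the 31 reported best values
rng = np.random.default_rng(1)
moduli = rng.normal(18.50, 0.05, size=31)

# ordered sample values against theoretical normal quantiles
stats.probplot(moduli, dist="norm", plot=plt)
plt.xlabel("theoretical normal quantiles")
plt.ylabel("ordered LMC distance moduli (mag)")
plt.show()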

Jerry
2007-Oct-03, 04:32 AM
Is the 'best value' cut above the year 2001 really relevant to this study? He is arguing that the range of the best values suddenly swerved towards a mean and that the error bars tightened, without corresponding changes in technology that would warrant this.

It seems to me that a proper q-q test should include the data from both before and after the HKP.

It may be possible to argue that there has been an improvement in technology that Schaefer is ignoring - Didn't adaptive optics start to come on line ~ 2001?

Personally, I (obviously) think the trend is worrisome. Metallicity is a known Cepheid parameter, and it seems to me that the more recent methods that incorporate metallicity in the determination of the Cepheid distances are being 'underweighted' simply because the published results do not agree as closely with the HKP standard value.

Also, I understand the reasoning you have applied concerning whether it is reasonable to assume that one galaxy is representative of the mean cluster attributes, but this also assumes that the selection of the single galaxy within the cluster is purely random. This is not a good assumption: within a cluster, the galaxies closest to our limited observation point are more likely to be selected than more distant galaxies within the same cluster.

Nereid
2007-Oct-03, 01:29 PM
Is the 'best value' cut above the year 2001 really relevant to this study? He is arguing that the range of the best values suddenly swerved towards a mean and that the error bars tightened, without corresponding changes in technology that would warrant this.

It seems to me that a proper q-q test should include the data from both before and after the HKP.

It may be possible to argue that there has been an improvement in technology that Schaefer is ignoring - Didn't adaptive optics start to come on line ~ 2001?

Personally, I (obviously) think the trend is worrisome. Metallicity is a known Cepheid parameter, and it seems to me that the more recent methods that incorporate metallicity in the determination of the Cepheid distances are being 'underweighted' simply because the published results do not agree as closely with the HKP standard value.

FWIW, I think your concerns are overblown.

I think I mentioned, in an earlier post in this thread, that the final HKP paper looked at the various techniques used to estimate the LMC distance modulus, and (the authors) commented that a significant improvement in the robustness of estimates of that distance modulus would really come only with a new technique (they mentioned several possibilities), none of which were likely any time soon ... that's my paraphrase anyway.

So if you look at the work since 2001 in this light, you could conclude that there's been a lot of incremental improvements in existing techniques, and some reduction in the apparent inconsistencies between estimates determined by different methods.

Finally, if you're that worried about (possible) subtle biases in post-2001 work, you can always use just the 2001 estimates of the LMC distance modulus, and gratuitously bump the errors up a notch*.
Also, I understand the reasoning that you have applied concerning whether or not it is reasonable to assume that one galaxy is representative of the mean cluster attributes, but this also assumes that the selection of the single galaxy within the cluster is purely random. This is not a good assumption: within a cluster, the closest galaxies to our limited observation point are more likely to be selected than more distant galaxies within the same cluster.

This objection/concern is, I think, downright silly.

All the clusters from which an SBF test galaxy was chosen are close enough that all SMC-sized dwarfs (or even Fornax-sized ones) have long been found^.

But surely the concern is easily addressed? If you were to list the integrated magnitude of all SBF candidate galaxies, in the relevant clusters, in rank order, where would the ones actually used appear?

*By choosing more conservative assumptions about how to combine pre-2001 estimates, for example.
^Unless, of course, said cluster were in the ZOA; none are.

TomT
2007-Oct-04, 02:13 AM
Zahl, you obviously have a lot of knowledge in astronomy, but I differ with your analysis for the following reasons.

True, but you are missing the point. The point is not to estimate the distance to a galaxy cluster center, but to estimate H0 as well as possible with a limited number of SBF measurements.

I was arguing for how to find the most accurate answer, not how to find an answer with limited data. I understand that obtaining a lot of data is easier said than done.

One SBF measurement per cluster is the best course of action as I have quantitatively explained several times in this thread, but these explanations have fallen on deaf ears.

If you mean best as far as working with limited data, you may be right. But working with one measurement per cluster is not the best.


Consider this practical example: there are six clusters at a distance of 60 Mpc with redshifts of 4300 km/s (corresponding to H0≈72 km/s/Mpc) and one SBF galaxy measurement is taken of each. Let's say the clusters have a diameter of 12 Mpc, their galaxy distributions are Gaussian and the SBF galaxies have random locations. A Gaussian random number generator gives the following distances for the six galaxies (mean=60 Mpc, SD=2 Mpc):

#1 - 63.27
#2 - 56.95
#3 - 61.46
#4 - 61.19
#5 - 61.50
#6 - 58.20

Our estimation for H0 is then (4300/63.27+4300/56.95+4300/61.46+4300/61.19+4300/61.50+4300/58.20)/6=71.25 km/s/Mpc. Working with the clusters themselves and doing a huge number of measurements to fix the distances to their centers would have given us 4300/60=71.67 km/s/Mpc for H0. I hope this finally clears it up.

It clears up your method, but I don't think it proves the H0 thus determined is correct.
First, as you have claimed necessary, you have not provided error bars for your redshift number.
Second, I have looked up the estimated distances to many galaxies in Virgo, and the variation is amazing. I have found distance estimates for a single galaxy in Virgo that vary by a factor of two. I have found distance values for galaxies in Virgo that are more than 30 Mpc apart. I am really doubtful that data exist showing that clusters at a distance of 60 Mpc, as in your example, reliably have an SD of only 2 Mpc.


If you want to know a cluster diameter, you need to know its distance as well. As it turns out, the depths of the clusters offered as examples by dgruss himself are nowhere near 30 Mpc.

This doesn't square with your analysis given here.

Zahl:

And what did I say about comparing distance determinations without errors? The Virgo cluster has a diameter of about 10 degrees according to NED and a distance of 15.8 ± 0.5 Mpc according to this source. This corresponds to a diameter of about 3 Mpc. When the errors in your numbers are considered, they will not be in conflict with this figure.

Here you assumed that a cluster is spherical and all you have to know is its diameter, taken to be the arc length determined from the angle we see and the cluster's distance. Then the source you gave us had distances to some galaxies in Virgo that varied from 14.9+/-1.2 to 21.1+/-3.9 Mpc. So your theoretical calculation gave less than 3 Mpc and the data showed over 6 Mpc. And that was for a cluster near us, for which we have the most observations, and which is only 15.8 Mpc away. Actually, a search of the literature reveals that Virgo has a triaxial shape, with one of the axes about twice the length of one of the others.

folkhemmet
2007-Oct-04, 02:59 PM
Here is a new technique which could help confirm or disconfirm the currently favored value for H_0. Unfortunately, its implementation depends on the funding for the LISA experiment, and as we know, space science has taken a hit as money has been diverted to a plan to create a human presence in the solar system. Nevertheless, LISA may someday have her day measuring gravitational waves, if they are to be found, up in space. Here is a paper which basically says how it should be possible to use gravitational waves to measure the Hubble constant. The detection of gravitational waves will be momentous by itself, but using gravitational waves to measure H_0 seems clever since such a measurement would rely on different physics and be completely independent of HKP and other electromagnetic wave-based determinations. Here is the abstract:

Determining the Hubble constant from gravitational wave observations


Bernard F. Schutz


Department of Applied Mathematics and Astronomy, University College Cardiff, PO Box 78, Cardiff CF1 1XL, UK


"I report here how gravitational wave observations can be used to determine the Hubble constant, H 0. The nearly monochromatic gravitational waves emitted by the decaying orbit of an ultra−compact, two−neutron−star binary system just before the stars coalesce are very likely to be detected by the kilometre−sized interferometric gravitational wave antennas now being designed1−4. The signal is easily identified and contains enough information to determine the absolute distance to the binary, independently of any assumptions about the masses of the stars. Ten events out to 100 Mpc may suffice to measure the Hubble constant to 3% accuracy."

This thread, among others, illustrates Jerry's confirmation bias. The preprint archive is voluminous. As we know, many papers of varying quality come out most days of the week. A consistent observed behavior is that Jerry tends to cherry-pick the papers that serve his purpose, which is to show that scientific uncertainty plagues modern astrophysics and very little progress has been made in terms of understanding the basic structure of the Universe. Although, if that's true (uncertainty, lack of progress, etc.), what about the uncertainty contained within the papers he cites? What about the assumptions contained within some of these papers-- assumptions which are often even more grand and sweeping than the ones found in mainstream papers? His behavior demonstrates that he is very willing to ignore the uncertainties in papers that agree with his point of view. Mainstream astronomers probably also do this, but this is precisely why Jerry should be more careful and mindful; he engages in a behavior that he criticizes others for committing.

This is not some grand accusation about his motives; rather, it is my observation of how he operates on this forum. In fact, what makes confirmation bias so interesting is that more often than not it is unconscious and not motive-based; it becomes so deeply ingrained in one's style of thinking that talk of motives becomes irrelevant, much in the same way that it would be silly to attribute specific motives to any other habitual behavior. So, again, I'm describing actions and making observations of behavior--these actions speak loudly and clearly for themselves--who knows about the original motive(s) for them.

Jerry
2007-Oct-04, 06:09 PM
A consistent observed behavior is that Jerry tends to cherry pick the papers that serve his purpose which is to show that scientific uncertainty plagues modern astrophysics and very little progress has been made in terms of understanding the basic structure of the Universe.

And this is a bad thing? Why should anyone champion scientific certainty?
Should we just cut the millions being spent on gravity wave research because we are certain Einstein is right anyway?

This paper I just posted, seconds after Parejk.. posted the same reference, is new as of that day, and has subject matter exactly pertinent to this thread. If the information had been different, but still relevant, I might have posted it as well.

I don't have a 'preferred' value for Ho - be it 7 or 7000 - it is a size parameter. What I worry about is artificial constraints drawn not from observations, but from expectations. Schaefer has the same concern, and has articulated this concern in a statistical study of Ho methodology. Schaefer has concluded there is a bandwagon effect evident in the reduced data that is not scientific. Is he right?

Zahl
2007-Oct-05, 12:25 AM
Is the 'best value' cut above the year 2001 really relevant to this study?

Well, the fact that the distribution of the best values does not appear to be non-Gaussian is counter-evidence against his claim that there is a bandwagon effect in the papers. Why would it be evident only in the errors and not in the best values themselves? Doesn't make sense to me.


Personally, I (obviously) think the trend is worrisome. Metallicity is a known Cepheid parameter, and it seems to me like the more recent methods that incorporate metallicity in the determination of the Cepheid distances are being 'underweighed' simply because the published results do not agree as closely with the HKP standard value.

I am not sure what "underweighed" is supposed to mean here.


Also, I understand the reasoning that you have applied concerning whether or not it is reasonable to assume that one galaxy is representative of the mean cluster attributes, but this also assumes that the selection of the single galaxy within the cluster is purely random. This is not a good assumption: within a cluster, the closest galaxies to our limited observation point are more likely to be selected than more distant galaxies within the same cluster.

Are you aware that such a selection effect would lead the derived H0 to be biased low, not high? Besides, you have not offered any evidence for it.

Zahl
2007-Oct-05, 12:36 AM
True, but you are missing the point. The point is not to estimate the distance to a galaxy cluster center, but to estimate H0 as well as possible with a limited number of SBF measurements.

I was arguing for how to find the most accurate answer, not how to find an answer with limited data.

You would get the most accurate answer for the distance to cluster centers, but not for H0. For H0 estimation, the difference is less than 1 km/s/Mpc. See above. You have provided no evidence that would refute this.



One SBF measurement per cluster is the best course of action as I have quantitatively explained several times in this thread, but these explanations have fallen on deaf ears.

If you mean best as far as working with limited data, you may be right. But working with one measurement per cluster is not the best.

Data is always limited in the real world. But even if 100 SBF measurements could be done, it would probably still be most effective to do only one measurement per cluster, because 100 different cluster and flow environments would thus be sampled and any unaccounted-for systematics specific to these environments would have minimal weight. I want to stress the importance of this point. 100 measurements taken from a single cluster would all be subject to the same local systematics, and H0 thus derived could be wildly off the mark.
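(A toy Monte Carlo makes the trade-off concrete. It uses the same numbers as the six-cluster example quoted just below, plus one assumption of mine: each cluster's redshift is perturbed by a random ~300 km/s peculiar velocity, standing in for the 'local systematics'. Six clusters with one galaxy each average that perturbation down; six galaxies in one cluster do not.)

import numpy as np

rng = np.random.default_rng(0)

H0_TRUE = 71.67   # km/s/Mpc (4300 km/s at 60 Mpc, as in the example below)
D_TRUE = 60.0     # Mpc, true distance of every cluster center
SIGMA_D = 2.0     # Mpc, line-of-sight scatter of galaxies within a cluster
SIGMA_V = 300.0   # km/s, assumed cluster peculiar velocity (my assumption)
N = 100_000       # Monte Carlo trials

def h0_estimate(n_clusters, gals_per_cluster):
    # One peculiar velocity per cluster, one random position per galaxy.
    v_pec = rng.normal(0.0, SIGMA_V, (N, n_clusters, 1))
    cz = H0_TRUE * D_TRUE + v_pec
    d = rng.normal(D_TRUE, SIGMA_D, (N, n_clusters, gals_per_cluster))
    return (cz / d).reshape(N, -1).mean(axis=1)

spread_out = h0_estimate(n_clusters=6, gals_per_cluster=1)
all_in_one = h0_estimate(n_clusters=1, gals_per_cluster=6)

# The single-cluster strategy inherits the full peculiar-velocity error
# of its one cluster, so its scatter comes out roughly twice as large.
print(f"6 clusters x 1 galaxy:  {spread_out.mean():.2f} +/- {spread_out.std():.2f}")
print(f"1 cluster x 6 galaxies: {all_in_one.mean():.2f} +/- {all_in_one.std():.2f}")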



Consider this practical example: there are six clusters at a distance of 60 Mpc with redshifts of 4300 km/s (corresponding to H0≈72 km/s/Mpc) and one SBF galaxy measurement is taken of each. Let's say the clusters have a diameter of 12 Mpc, their galaxy distributions are Gaussian and the SBF galaxies have random locations. A Gaussian random number generator gives the following distances for the six galaxies (mean=60 Mpc, SD=2 Mpc):

#1 - 63.27
#2 - 56.95
#3 - 61.46
#4 - 61.19
#5 - 61.50
#6 - 58.20

Our estimation for H0 is then (4300/63.27+4300/56.95+4300/61.46+4300/61.19+4300/61.50+4300/58.20)/6=71.25 km/s/Mpc. Working with the clusters themselves and doing a huge number of measurements to fix the distances to their centers would have given us 4300/60=71.67 km/s/Mpc for H0. I hope this finally clears it up.

It clears up your method, but I don't think it proves the H0 thus determined is correct.

It is not "my method", just the standard method in the business, used by HKP and others.


First, as you have claimed necessary, you have not provided error bars for your redshift number.

You are missing the point. The point was not to assess the combined effect of measurement errors and galaxy locations on H0 but to see the effect from galaxy locations only.


Second, I have looked up the estimated distances to many galaxies in Virgo, and the variation is amazing. I have found distance estimates for a single galaxy in Virgo that vary by a factor of two. I have found distance values for galaxies in Virgo that are more than 30 Mpc apart.

This is too vague to say anything about. You first need to provide these details: galaxy ID, distance, errors and the reference used.


I am really doubtful that data exist showing that clusters at a distance of 60 Mpc, as in your example, reliably have an SD of only 2 Mpc.

No, because a) clusters do not have a diameter of even 12 Mpc on average and b) the number density of galaxies in clusters from the center to the edge falls off more rapidly than a Gaussian curve. So far you have not even acknowledged the fact that the number density actually falls off significantly toward the edge.


Here you assumed that a cluster is spherical and all you have to know is it's diameter, assumed to be the arc length as determined from the angle seen by us and its distance. Then the source you gave us had distances to some galaxies in Virgo that varied from 14.9+/-1.2 to 21.1+/-3.9 Mpc.

They are not distances to "some galaxies", they are all distances to Virgo itself. You can't use these figures to estimate the Virgo diameter, only the distance.


So your theoretical calculation gave less than 3 Mpc

It is not a theoretical calculation. It comes straight from the measured diameter and cluster distance.


and the data showed over 6 Mpc.

Wrong.


And that was for a cluster near us for which we have the most observations and is only 15.8 Mpc away. Actually, a search of the literature reveals that Virgo has a triaxial shape, with one of the axes about twice the length of one of the others.

I bet this is from Mei 2007, ApJ, 655, 144. Care to elaborate on what she says about the lengths of these axes?

TomT
2007-Oct-05, 04:26 AM
Hi Zahl,

I am travelling on the road for a number of days, so can't get to all of your points immediately. But the following stood out as a no-brainer.

I stated that:

Quote:
So your theoretical calculation gave less than 3 Mpc

You replied:


It is not a theoretical calculation. It comes straight from the measured diameter and cluster distance.

The measured angular diameter you referred to was 11 degrees, which you used to calculate the linear diameter of the Virgo Cluster, assuming it is spherical in shape.
Diam = distance*theta*pi/180 = 15.8*11*3.1416/180 = 3


If you look at Table 2 of the "Final Results ..........." paper by Freedman et al. that we have been discussing, there are 6 Cepheid galaxies listed from Virgo. Of these, NGC 4321 is located at RA 185.73, Dec 15.82 and NGC 4536 is located at RA 188.61 and Dec 2.19 (values in deg). These two are taken from that list of only 6 of the 2000+ galaxies in Virgo.

The central angle between these two galaxies is no less than the difference in declinations, 15.82 - 2.19 = 13.6 deg. Accounting for the difference in Right Ascension increases this angle. So how can you say that the measured diameter of the Virgo Cluster is 11 degrees?

When considering all the members of this cluster, I am willing to speculate that the size of the cluster is much larger.

Jerry
2007-Oct-05, 05:46 AM
Well, the fact that the distribution of the best values does not appear to be non-Gaussian is counter-evidence against his claim that there is a bandwagon effect in the papers. Why would it be evident only in the errors and not in the best values themselves? Doesn't make sense to me.
The standard deviation in the best value after 2001 is much tighter than it was before; that won't show up unless you plot both data sets.
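(One way to put that on a quantitative footing would be a variance-comparison test across the two epochs. A minimal sketch with made-up numbers; the actual pre- and post-2001 best values would have to be pulled from Schaefer's tables.)

import numpy as np
from scipy import stats

# Illustrative stand-ins, NOT the real compilations.
pre_2001 = np.array([55.0, 82.0, 68.0, 75.0, 60.0, 90.0, 72.0, 65.0])
post_2001 = np.array([71.0, 73.0, 72.0, 70.0, 74.0, 72.0, 71.0, 73.0])

# Levene's test asks whether the two samples share a common variance;
# a small p-value supports "the post-2001 scatter really is tighter".
stat, p = stats.levene(pre_2001, post_2001)
print(f"Levene W = {stat:.2f}, p = {p:.4f}")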

I am not sure what "underweighed" is supposed to mean here.
Whenever you talk about a bandwagon effect, you also have to take into account those who do not jump on.

For example, efforts to remove systematics from Tully-Fisher analysis are not weighted the same by astrophysicists as papers that reproduce the consensus Hubble value. While there may be TF systematics that are biasing the data, it could also be this worrisome bandwagon effect.


Are you aware that such a selection effect would lead the derived H0 to be biased low, not high? Besides, you have not offered any evidence for it.

Don't care. If there is a potential for selection bias, the error bands should reflect this. Nereid seems to think selection bias is highly unlikely, and this is one of her strong areas.

TomT
2007-Oct-05, 06:03 PM
Quote: TomT
So your theoretical calculation gave less than 3 Mpc


It is not a theoretical calculation. It comes straight from the measured diameter and cluster distance.

More on this. Here is a link to a study of galaxy distances.

http://arxiv.org/ftp/astro-ph/papers/0503/0503440.pdf

Table III gives distances to 17 spiral galaxies in the Virgo Cluster determined using the Tully-Fisher method (type dependent) and calibrated using the HKP Cepheids. Note the distance of 28.1 Mpc to NGC 4343 and 12.6 Mpc to NGC 4569. The difference of 15.5 Mpc is approximately the line-of-sight separation between two galaxies within Virgo (the actual separation is slightly larger when the Right Ascension and Declination of each galaxy are accounted for). This distance is far greater than "a couple of Mpc". So now we have reviewed a total of only about 20 of the 2000+ galaxies in Virgo, and found the cluster size to be at least 15.5 Mpc. This should suffice to illustrate the point.
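(Taking the two quoted distances at face value, the "slightly larger" point is easy to make concrete. The sketch below converts sky position plus distance into Cartesian coordinates; the RA/Dec values are only approximate placeholders for the two galaxies, so look up the catalogued positions before trusting the last decimal.)

import numpy as np

def to_cartesian(ra_deg, dec_deg, dist_mpc):
    # Unit vector toward (RA, Dec), scaled by the distance.
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return dist_mpc * np.array([np.cos(dec) * np.cos(ra),
                                np.cos(dec) * np.sin(ra),
                                np.sin(dec)])

# Distances as quoted above; RA/Dec are approximate placeholders.
g1 = to_cartesian(185.9, 6.95, 28.1)    # NGC 4343 (approximate position)
g2 = to_cartesian(189.2, 13.16, 12.6)   # NGC 4569 (approximate position)

# Comes out a bit above the 15.5 Mpc line-of-sight difference.
print(f"3D separation: {np.linalg.norm(g1 - g2):.1f} Mpc")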

Zahl
2007-Oct-06, 05:45 PM
Hi Zahl,

I am travelling on the road for a number of days, so can't get to all of your points immediately. But the following stood out as a no-brainer.

I stated that:

Quote:
So your theoretical calculation gave less than 3 Mpc

You replied:



The measured angular diameter you referred to was 11 degrees, which you used to calculate the linear diameter of the Virgo Cluster, assuming it is spherical in shape.
Diam = distance*theta*pi/180 = 15.8*11*3.1416/180 = 3


If you look at Table 2 of the "Final Results ..........." paper by Freedman et al. that we have been discussing, there are 6 Cepheid galaxies listed from Virgo. Of these, NGC 4321 is located at RA 185.73, Dec 15.82 and NGC 4536 is located at RA 188.61 and Dec 2.19 (values in deg). These two are taken from that list of only 6 of the 2000+ galaxies in Virgo.

The central angle between these two galaxies is no less than the difference in declinations, 15.82 - 2.19 = 13.6 deg. Accounting for the difference in Right Ascension increases this angle. So how can you say that the measured diameter of the Virgo Cluster is 11 degrees?

When considering all the members of this cluster, I am willing to speculate that the size of the cluster is much larger.

The angle is cos-1(cos(2.88°)*cos(15.82°)*cos(2.19°)+sin(15.82°)*sin(2.19°))=13.9°. I already gave you the source for that 10 degrees, but here it is again: NASA/IPAC Extragalactic Database.

http://nedwww.ipac.caltech.edu/cgi-bin/nph-objsearch?objname=virgo+cluster&extend=no&out_csys=Equatorial&out_equinox=J2000.0&obj_sort=RA+or+Longitude&of=pre_text&zv_breaker=30000.0&list_limit=5&img_stamp=YES
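(For anyone checking the spherical trigonometry, a minimal sketch using the coordinates quoted earlier in the thread:)

import numpy as np

def angular_sep_deg(ra1, dec1, ra2, dec2):
    # Great-circle separation on the sky; all arguments in degrees.
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(cos_sep))

# NGC 4321 vs NGC 4536, positions as given earlier in the thread
print(f"{angular_sep_deg(185.73, 15.82, 188.61, 2.19):.1f} deg")  # ~13.9

# Small-angle linear size of a 10-degree cluster seen at 15.8 Mpc
print(f"{15.8 * np.radians(10):.1f} Mpc")  # ~2.8, i.e. 'about 3 Mpc'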

NGC 4536 belongs to the Virgo II galaxy cloud, also called the Southern Extension, a structure located to the south of the main Virgo cluster, and it is debatable whether it is a true member of the Virgo cluster. See 2001A&A...375..770F.

The calculation I gave earlier assumed an average cluster diameter of 12 Mpc, or 43 degrees in the case of the Virgo cluster. I don't think you can refute that. To validate dgruss' ATM idea of 30 Mpc cluster depths you would need to demonstrate an angle of more than 100 degrees between two members of the Virgo cluster.

Zahl
2007-Oct-06, 05:58 PM
Well, the fact that the distribution of the best values does not appear to be non-Gaussian is counter-evidence against his claim that there is a bandwagon effect in the papers. Why would it be evident only in the errors and not in the best values themselves? Doesn't make sense to me.


The standard deviation in the best value after 2001 is much tighter than it was before; that won't show up unless you plot both data sets.

?? That the standard deviation is tighter has no relevance to what I wrote above. The question was why is the distribution of the best values Gaussian if there is a bandwagon effect in the papers (as demonstrated by the alleged non-Gaussianity in the errors)?


Whenever you talk about a bandwagon effect, you also have to take into account those who do not jump on.

For example, efforts to remove systematics from Tully-Fisher analysis are not weighted the same by astrophysicists as papers that reproduce the consensus Hubble value. While there may be TF systematics that are biasing the data, it could also be this worrisome bandwagon effect.

I'm still not sure what you meant by 'underweighed' when you wrote that "the more recent methods that incorporate metallicity in the determination of the Cepheid distances are being 'underweighed'". I think you are just spreading UNDO (uncertainty & doubt) towards the mainstream with little to back it up.


If there is a potential for selection bias, the error bands should reflect this. Neried seems to think selection bias is highly unlikely, and this is one of her strong areas.

Are you saying that you have no evidence for selection bias?

Zahl
2007-Oct-06, 06:03 PM
Quote: TomT
So your theoretical calculation gave less than 3 Mpc



More on this. Here is a link to a study of galaxy distances.

http://arxiv.org/ftp/astro-ph/papers/0503/0503440.pdf

Table III gives distances to 17 spiral galaxies in the Virgo Cluster determined using the Tully-Fisher method (type dependent) and calibrated using the HKP Cepheids. Note the distance of 28.1 Mpc to NGC 4343 and 12.6 Mpc to NGC 4569. The difference of 15.5 Mpc is approximately the line-of-sight separation between two galaxies within Virgo (the actual separation is slightly larger when the Right Ascension and Declination of each galaxy are accounted for). This distance is far greater than "a couple of Mpc". So now we have reviewed a total of only about 20 of the 2000+ galaxies in Virgo, and found the cluster size to be at least 15.5 Mpc. This should suffice to illustrate the point.

Huh? There are no such galaxies anywhere in that paper. Your figures are useless anyway as you don't quote the errors. Please don't link to any papers that don't even give measurement errors, I'm tired of that unscientific nonsense.

TomT
2007-Oct-06, 07:02 PM
The angle is cos-1(cos(2.88°)*cos(15.82°)*cos(2.19°)+sin(15.82°)*sin(2.19°))=13.9°. I already gave you the source for that 10 degrees, but here it is again: NASA/IPAC Extragalactic Database.

http://nedwww.ipac.caltech.edu/cgi-bin/nph-objsearch?objname=virgo+cluster&extend=no&out_csys=Equatorial&out_equinox=J2000.0&obj_sort=RA+or+Longitude&of=pre_text&zv_breaker=30000.0&list_limit=5&img_stamp=YES


OK, you said 10 degrees, not 11. And, as you point out, the 2 Cepheid galaxies in Virgo are 13.9 degrees apart. So are you still arguing that the diameter of Virgo is 10 degrees?