
View Full Version : Expansion of the Universe



Copernicus
2018-Mar-26, 02:52 AM
I tend to think that the universe is static and not expanding. This article claims to be peer reviewed and states that the universe is static. Any opinions? Is it in fact published? Are the hypotheses of both static and expanding universes mainstream? Is the presentation easily refuted, and how? https://lppfusion.com/is-the-universe-really-expanding-observations-contradict-galaxy-size-predictions-based-on-expansion/ "IS THE UNIVERSE REALLY EXPANDING? OBSERVATIONS CONTRADICT GALAXY-SIZE PREDICTIONS BASED ON EXPANSION"

ShinAce
2018-Mar-26, 03:25 AM
Full disclosure:
-Didn't read the 'article'
-not going to

The mission of that website is to:
"LPPFusion’s mission is to provide environmentally safe, clean, cheap and unlimited energy for everyone through the development of Focus Fusion technology, based on the Dense Plasma Focus device and hydrogen-boron fuel."

The article was posted by Eric Lerner, and in it he points to evidence in a paper (also written by Eric Lerner) that hints at it.

It is, in fact, not peer reviewed. What makes you think it's peer reviewed?

Copernicus
2018-Mar-26, 04:18 AM
Full disclosure:
-Didn't read the 'article'
-not going to

The mission of that website is to:
"LPPFusion’s mission is to provide environmentally safe, clean, cheap and unlimited energy for everyone through the development of Focus Fusion technology, based on the Dense Plasma Focus device and hydrogen-boron fuel."

The article was posted by Eric Lerner, and in it he points to evidence in a paper (also written by Eric Lerner) that hints at it.

It is, in fact, not peer reviewed. What makes you think it's peer reviewed?

Hi ShinAce,

It is discussed on this physics forum. https://www.physicsforums.com/threads/observational-evidence-against-expanding-universe-in-mnras.943111/

Strange
2018-Mar-26, 06:37 AM
Here is a link to the published paper: https://academic.oup.com/mnras/advance-article/doi/10.1093/mnras/sty728/4951333

Even if there were something to this analysis (and I can’t comment on that) it wouldn’t, by itself, be enough to counter the overwhelming theoretical and evidential basis for expansion. (As an analogy it is a bit like looking at one odd photo from the Apollo mission and concluding that man didn’t go to the moon!)

Selfsim
2018-Mar-27, 01:50 AM
In an expanding universe, surface brightness is given by:

SB = L/[A(1+z)⁴], where L is luminosity, A is the actual area of the galaxy, and z is redshift; in other words,

in an expanding Universe, surface brightness is a function of redshift z, and varies as the inverse expansion factor to the fourth power.

At local scales, or in gravitationally bound systems such as galaxy clusters where expansion doesn't occur (z = 0), the equation reduces to SB = L/A.
For small values of z << 1, SB = L/A is still a good approximation, but as z increases with increasing distance, SB is no longer distance independent.
(I should add that the math derivation which leads to the above formula, being based on a Euclidean geometry and uniform expansion, does not take into account the effects of dark energy and accelerating expansion (eg: an LCDM universe model)).

In static Universe models however, the surface brightness derivation always reduces to SB = L/A, and is therefore independent of the distance and redshift.
When tested against elliptical galaxy measurements (Tolman test data), it is generally agreed there is a good fit of the data with SB ∝ (1+z)⁻⁴ (Lerner, I think, disagrees with this in his paper).
In so-called 'tired light' mechanisms, hypothesized in many static universe models, SB ∝ (1+z)⁻¹, which is not a good fit; thus tired light mechanisms in static universe models are generally considered to be contradicted by Tolman test data based on elliptical galaxy measurements.
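
To make that discriminator concrete, here is a minimal numeric sketch in Python (my own illustration, not taken from any of the papers) comparing the two predicted dimming factors:

# Bolometric surface-brightness dimming relative to z = 0:
# expanding universe -> (1+z)^-4 ; simple tired-light model -> (1+z)^-1
for z in (0.1, 0.5, 1.0, 3.0):
    expanding = (1.0 + z) ** -4
    tired_light = (1.0 + z) ** -1
    print(f"z = {z}: expanding SB x {expanding:.4f}, tired-light SB x {tired_light:.4f}")

At z = 1 the two predictions already differ by a factor of (1+z)³ = 8, which is why the Tolman test has real discriminating power.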

Lerner at least concurs with the 'in principle' point (the SB formula) above, in the opening sentence of his MNRAS paper:
In a non-expanding universe surface brightness is independent of distance or redshift, while in an expanding universe it decreases rapidly with both.

Copernicus
2018-Mar-27, 07:31 AM
In an expanding universe, surface brightness is given by:

SB = L/[A(1+z)⁴], where L is luminosity, A is the actual area of the galaxy, and z is redshift; in other words,

in an expanding Universe, surface brightness is a function of redshift z, and varies as the inverse expansion factor to the fourth power.

At local scales, or in gravitationally bound systems such as galaxy clusters where expansion doesn't occur (z = 0), the equation reduces to SB = L/A.
For small values of z << 1, SB = L/A is still a good approximation, but as z increases with increasing distance, SB is no longer distance independent.
(I should add that the math derivation which leads to the above formula, being based on a Euclidean geometry and uniform expansion, does not take into account the effects of dark energy and accelerating expansion (eg: an LCDM universe model)).

In static Universe models however, the surface brightness derivation always reduces to SB = L/A, and is therefore independent of the distance and redshift.
When tested against elliptical galaxy measurements (Tolman test data), it is generally agreed there is a good fit of the data with SB ∝ (1+z)⁻⁴ (Lerner, I think, disagrees with this in his paper).
In so-called 'tired light' mechanisms, hypothesized in many static universe models, SB ∝ (1+z)⁻¹, which is not a good fit; thus tired light mechanisms in static universe models are generally considered to be contradicted by Tolman test data based on elliptical galaxy measurements.

Lerner at least concurs with the 'in principle' point (the SB formula) above, in the opening sentence of his MNRAS paper ...



Hi Selfsim,

I was wondering if the luminosity and actual area are determined independently of the z value?

slang
2018-Mar-27, 08:38 AM
The mere fact that a paper is peer-reviewed and published, even if it is published in a respectable journal, does not make it mainstream science. So be careful about how you approach this.

Copernicus
2018-Mar-29, 06:44 PM
Here is a link to the published paper: https://academic.oup.com/mnras/advance-article/doi/10.1093/mnras/sty728/4951333

Even if there were something to this analysis (and I can’t comment on that) it wouldn’t, by itself, be enough to counter the overwhelming theoretical and evidential basis for expansion. (As an analogy it is a bit like looking at one odd photo from the Apollo mission and concluding that man didn’t go to the moon!)

Hi Strange,

Thanks for finding the link to the published article.

slang
2018-Mar-30, 08:01 AM
Moved as this thread isn’t a good fit for Q&A anymore

Selfsim
2018-Mar-30, 10:37 AM
Lerner proceeds assuming a linear distance-z relationship, ('Section 1: Introduction', page 2):

d = cz/H₀, where d is the proper distance, c the speed of light and H₀ is the Hubble constant.

However, this is only valid for very small z.

At larger z though, this appears to be in conflict with Special Relativity constraints, as follows:

Hubble’s law is v= H₀d, where v is the recession velocity.

Since Lerner’s Universe model is static (yet cites the above distance-z relationship), 'v' is interpreted as being the recession velocity of galaxies moving through space, whilst in the expansion (mainstream) model, 'v' is the expansion velocity of space-time, which can exceed c.

For all intents and purposes, objects moving through space cannot exceed c; as they approach c, relativistic effects need to be taken into account, and the distance-z relationship is no longer linear throughout the universe (as Lerner assumes) without further 'tweaking' of his model.

The correct formula for the proper distance is:

d ≈ (c/H₀)*[z-0.5(1+q₀)z²], where q₀ is the deceleration parameter and is not a constant .. it varies according to the distribution of galaxies in space-time.

For eg: in the LSC (https://en.wikipedia.org/wiki/Virgo_Supercluster) (Local Supercluster), q₀=-1 and the above equation then reduces to d = cz/H₀, which happens to be the only place in the Universe we know of where the linear relationship can be applied.

In Lerner's model, z is a pure Doppler shift only and is defined as: z = √((c+v)/(c−v)) − 1.
However, as v → c, z → ∞ ... which is nonsensical.
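
To put numbers on how quickly the linear approximation degrades, here is a minimal sketch (my own; the H₀ and q₀ values are illustrative assumptions, not taken from Lerner's paper):

C_KMS = 299792.458   # speed of light, km/s
H0 = 70.0            # assumed Hubble constant, km/s/Mpc
Q0 = -0.55           # assumed deceleration parameter (illustrative LCDM-like value)

def d_linear(z):
    return C_KMS * z / H0   # d = cz/H0, in Mpc

def d_second_order(z):
    return (C_KMS / H0) * (z - 0.5 * (1.0 + Q0) * z * z)   # in Mpc

for z in (0.01, 0.1, 0.5):
    lin, quad = d_linear(z), d_second_order(z)
    print(f"z = {z}: linear {lin:.0f} Mpc, second-order {quad:.0f} Mpc "
          f"({100.0 * (lin - quad) / quad:.1f}% difference)")

The disagreement is well under 1% at z = 0.01 but already over 10% by z = 0.5, and even the second-order formula is only a Taylor expansion; at higher z a full model-dependent calculation is needed.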

Copernicus
2018-Mar-30, 07:13 PM
Lerner proceeds assuming a linear distance-z relationship, ('Section 1: Introduction', page 2):

d = cz/H₀, where d is the proper distance, c the speed of light and H₀ is the Hubble constant.

However, this is only valid for very small z.

At larger z though, this appears to be in conflict with Special Relativity constraints, as follows:

Hubble’s law is v= H₀d, where v is the recession velocity.



I am wondering: if the universe is static, per Lerner, would there even be a recession velocity, or would it just be a distance? Does his theory require that we are at the center of a static universe?

Selfsim
2018-Mar-30, 09:48 PM
I am wondering: if the universe is static, per Lerner, would there even be a recession velocity, or would it just be a distance? ..

Lerner's paper is focused on the lower end of cosmological distances. Doppler shifts are evident due to the motion of objects towards and away from us in that scale range. Doppler shift thus already accounts for redshifts (& blueshifts) inside the chosen limits of his model.

Unless he can explain how Doppler shifting due to object motions somehow stops beyond the upper limits set down in his model, it must also apply at the larger end of cosmological distances, or higher z, which then leads to the nonsensical contradictions with SR outlined in my previous post.


... does his theory require that we are at the center of a static universe?

I don't believe he assumes this in his model .. what is your take on that?

Copernicus
2018-Mar-30, 11:47 PM
Lerner's paper is focused on the lower end of cosmological distances. Doppler shifts are evident due to the motion of objects towards and away from us in that scale range. Doppler shift thus already accounts for redshifts (& blueshifts) inside the chosen limits of his model.

Unless he can explain how Doppler shifting due to object motions somehow stops beyond the upper limits set down in his model, it must also apply at the larger end of cosmological distances, or higher z, which then leads to the nonsensical contradictions with SR outlined in my previous post.

I don't believe he assumes this in his model .. what is your take on that?

I was just thinking: if the model is a non-expanding universe, he would not necessarily have a v equal to or greater than the speed of light. Obviously we see z values of at least 13. I'm not sure what fraction of the speed of light that corresponds to.
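
For scale: if z is read as a pure relativistic Doppler shift (the definition quoted above), the implied speed follows by inverting z = √((c+v)/(c−v)) − 1, giving v/c = ((1+z)² − 1)/((1+z)² + 1). A quick sketch of the arithmetic (my own, not from the paper):

def v_over_c(z):
    # invert the relativistic Doppler relation z = sqrt((c+v)/(c-v)) - 1
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

for z in (0.1, 1.0, 5.0, 13.0):
    print(f"z = {z}: v/c = {v_over_c(z):.4f}")

So z = 13, read as pure Doppler motion through space, corresponds to v ≈ 0.99c; v/c approaches but never reaches 1 for any finite z.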

Selfsim
2018-Mar-31, 12:43 AM
Ok then ... my past posts were to present the background mainstream position in response to his challenges on it.

The more pertinent conversation is about the integrity of the analysis in his two supporting papers here (https://arxiv.org/pdf/1405.0275.pdf) and here (https://arxiv.org/pdf/1803.08382.pdf). They both appear to rely on the same flaw: he has not considered the impact on angular resolution which results from the dissimilar filters used in the HUDF and Galex datasets. Lerner has provided no evidence that the resulting differences in resolution (which such dissimilarity represents) would produce the same bottom-line results he claims.

All that is needed to illustrate the flawed argument, is the formula which describes the angular resolution of a telescope ('θ'):

θ = 1.220λ/D where λ is the wavelength of light and D the diameter of the telescope.

I trust that Lerner (and yourself) would accept this physics?

Copernicus
2018-Mar-31, 02:19 AM
Ok then ... my past posts were to present the background mainstream position in response to his challenges on it.

The more pertinent conversation is about the integrity of the analysis in his two supporting papers here (https://arxiv.org/pdf/1405.0275.pdf) and here (https://arxiv.org/pdf/1803.08382.pdf). They both appear to rely on the same flaw: he has not considered the impact on angular resolution which results from the dissimilar filters used in the HUDF and Galex datasets. Lerner has provided no evidence that the resulting differences in resolution (which such dissimilarity represents) would produce the same bottom-line results he claims.

All that is needed to illustrate the flawed argument, is the formula which describes the angular resolution of a telescope ('θ'):

θ = 1.220λ/D where λ is the wavelength of light and D the diameter of the telescope.

I trust that Lerner (and yourself) would accept this physics?

I'll have to trust you on this. And I do believe you.

Selfsim
2018-Mar-31, 05:37 AM
I'll have to trust you on this. And I do believe you.

No need to .. I've asked the man himself (https://www.physicsforums.com/threads/observational-evidence-against-expanding-universe-in-mnras.943111/page-3#post-5969715) at physicsforums.

Copernicus
2018-Apr-02, 12:18 AM
Hi Selfsim, I was wondering if he answered your question appropriately? It seems he should have used some mathematics to support his assertion if he is truly defending his study.

Selfsim's question:

Others: please bear with me on this query about the 2014 paper .. we believe it has significant bearing on the conclusions of Eric's recent MNRAS paper.

Eric: These are the cutoff radius results from your 2014 paper. Lerner et al said: "For GALEX this cutoff is at a radius of 2.4 +/- 0.1 arcsec for galaxies observed in the FUV and 2.6 +/- 0.2 arcsec for galaxies observed in the NUV, while for Hubble this cutoff is at a radius of 0.066 +/- 0.002 arcsec, where the errors are the 1σ statistical uncertainty."

While the Hubble cutoff of 0.066 arcsec compares with a theoretical resolution of 0.05 arcsec using the F435W filter, the Galex result of 2.4 arcsec is 30X higher than the theoretical value of 0.08 arcsec in FUV! Something appears to be in error here(?)

I suppose it may be possible that the Galex optics were of catastrophically low quality in order to explain this major discrepancy; however, if this unlikely possibility were so, then also no useful science would be possible. This discrepancy is more likely to be due to an error elsewhere .. (?)

Cheers

Eric's answer:

Self sim, Not only the 435 filter. For each redshift range, we match the HST filter with either FUV or NUV filter to get the same wavelength band in the rest frames. So we actually have eight separate matched sets. All described in the 2014 paper. Also on GALEX I guess you used the Dawes formula but it is way off. Look at the GALEX descriptions on their website--the resolution is arcseconds, not a tiny fraction of an arcsecond. Their pixels are much bigger than your value. Why is this?--you have to ask the designers of GALEX. This is just a guess on my part, but GALEX is a survey instrument. If they made the pixels too small, they would end up with too small a field of view, given limits on how big the detector chip can be.

Reference https://www.physicsforums.com/threads/observational-evidence-against-expanding-universe-in-mnras.943111/page-4

Selfsim
2018-Apr-02, 01:13 AM
Copernicus;

The HUDF (https://en.wikipedia.org/wiki/Hubble_Ultra-Deep_Field) is a colour image composed of combining monochromatic images using various filters (https://en.wikipedia.org/wiki/Hubble_Ultra-Deep_Field#Observations).
Lerner has used data from various filters; however, as an example of the correct calculation, the F435W can be used, as this is the closest match to the Galex far- and near-ultraviolet images.

Lerner’s method of determining the resolution ratio using the Galex and HUDF data is quite strange (to put it mildly ... and incorrect).

The correct method is to use the Rayleigh criterion for resolution formula (https://en.wikipedia.org/wiki/Angular_resolution#Explanation):

θ = 1.220λ/D
For Galex: λ= 152nm (centre value) D = 0.5 m, gives θ = 0.08 seconds of arc.
For Hubble: λ= 435nm (centre value) D = 2.4 m, gives θ = 0.05 seconds of arc.

So rather than Hubble resolving objects 1/38 the size of what Galex can resolve (as Lerner assumes across all datasets), it is much more like a modest 5/8 (ie: 0.05/0.08 = 0.625) for this particular filter.
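
Those figures are easy to reproduce; here is a minimal sketch of the Rayleigh arithmetic above (my own, using the aperture and wavelength values already quoted in this post):

import math

ARCSEC_PER_RAD = math.degrees(1.0) * 3600.0   # ~206265

def rayleigh_arcsec(wavelength_m, aperture_m):
    # Rayleigh criterion: theta = 1.220 * lambda / D, converted to arcsec
    return 1.220 * wavelength_m / aperture_m * ARCSEC_PER_RAD

galex_fuv = rayleigh_arcsec(152e-9, 0.5)     # -> ~0.077 arcsec
hubble_f435w = rayleigh_arcsec(435e-9, 2.4)  # -> ~0.046 arcsec
print(f"ratio Hubble/Galex = {hubble_f435w / galex_fuv:.2f}")

The unrounded ratio comes out nearer 0.60; the 0.625 above comes from the rounded 0.05/0.08 figures.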

Thus far, he has provided the following curve-balls:

i) a fractal universe argument;
ii) a Galex 'too large' pixel argument - (the pixel size in arcseconds depends on the focal length of the telescope used);
iii) an inapplicable formula for the discussion at hand (Dawes');
iv) a method for determining the ratio of the cutoff limit of the Galex value to the HUDF value - which supposedly dispenses with considerations of the angular resolution issues introduced by different filter wavelengths (as per the Rayleigh formula above) - presently under discussion.

Shaula
2018-Apr-02, 05:42 AM
Selfsim, a minor point. The diffraction limit you are calculating is a theoretical maximum. You are effectively assuming that the sensors are oversampled or critically sampled and that the MTF of the system is narrower than the diffraction limit. For GALEX this doesn't appear to be the case as Caltech state the resolution is 4-5" depending on where you are in the spectrum. For Hubble this is the case and it achieves around 0.05-0.1" resolution. So GALEX has a resolution 40-100 times worse than Hubble. You can get around that to a degree (but not generally anywhere near as much as would be required to get GALEX to the diffraction limit) by drizzling or something similar but that makes things even more complex when it comes to intersensor comparisons.

That said I agree that the correction method used is one of the key flaws in the analysis. I skimmed it and saw no account being taken of the different noise floors, or of the error analysis required due to the uncertainty introduced by the larger GALEX pixels. Comparing these measurements is much harder than the paper acknowledges, which is why I am dubious about the result. The other concern I have is related to the data fitting methods used. Effectively by not weighting the data there seems to be a strong element of sampling bias in the results.
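
To illustrate the weighting point with synthetic numbers (a minimal sketch, my own illustration, assuming numpy is available): when the measurement error grows across the sample, an unweighted fit lets the noisiest points pull the result around, while inverse-error weighting suppresses them.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
sigma = 0.2 + 0.5 * x                       # error grows with x (heteroscedastic)
y = 2.0 + 0.5 * x + rng.normal(0.0, sigma)  # true slope is 0.5

unweighted = np.polyfit(x, y, 1)
weighted = np.polyfit(x, y, 1, w=1.0 / sigma)  # weight residuals by inverse error
print(f"true slope 0.500: unweighted {unweighted[0]:.3f}, weighted {weighted[0]:.3f}")

The weighted estimate has much lower variance over repeated draws; ignoring the weights is one way a sampling bias like the one described above can creep in.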

Selfsim
2018-Apr-02, 05:56 AM
Selfsim, a minor point. The diffraction limit you are calculating is a theoretical maximum. You are effectively assuming that the sensors are oversampled or critically sampled and that the MTF of the system is narrower than the diffraction limit. For GALEX this doesn't appear to be the case as Caltech state the resolution is 4-5" depending on where you are in the spectrum. For Hubble this is the case and it achieves around 0.05-0.1" resolution. So GALEX has a resolution 40-100 times worse than Hubble. You can get around that to a degree (but not generally anywhere near as much as would be required to get GALEX to the diffraction limit) by drizzling or something similar but that makes things even more complex when it comes to intersensor comparisons.

Admittedly we're using the theoretical max as a kind of litmus test of Lerner's method ... we've asked the Galex folk for more details on the scope performance data.
The main issue is trying to validate Lerner's method and I think this should've been done by him/them in their analysis.


That said I agree that the correction method used is one of the key flaws in the analysis. I skimmed it and saw no account being taken of the different noise floors, or of the error analysis required due to the uncertainty introduced by the larger GALEX pixels. Comparing these measurements is much harder than the paper acknowledges, which is why I am dubious about the result. The other concern I have is related to the data fitting methods used. Effectively by not weighting the data there seems to be a strong element of sampling bias in the results.

Thanks kindly for that .. a very helpful checklist ..
If I had a wish, it would have been to see Lerner cover all this in his paper.

Jean Tate
2018-Apr-02, 07:56 PM
Re the Lerner+ (2014) paper: it seems to be quite strange, making use of analysis/data reduction techniques I don't think I've seen elsewhere. I suspect, but do not know for sure, that at least some of these apparent non-standard techniques are contributing to the Results and Conclusions. If I were really interested, I'd try to reproduce the reported results, independently; however, I have better things to do with my time quite frankly. Selfsim, you may wish to ask elerner, in the PhysicsForums discussion thread, how easy he thinks it would be to reproduce the results of the 2014 paper, entirely independently (i.e. obtaining the data used, per the paper, and repeating the analyses).

Here are some things which struck me as odd (not that they are necessarily wrong, of course):


We have determined the minimum measurable angular radius of galaxies, θm, for each of the telescopes by plotting the abundance of galaxies (with stellarity index < 0.4) vs. angular radius for all GALEX MIS3-SDSSDR5 galaxies and for all HUDF galaxies and determining the lower-cutoff angular radius for each. We took this cutoff to be the point at which the abundance per unit angular radius falls to 1/5 of the modal value.

Huh? Why did they do that? And why 0.4 and 1/5? And, key question, what is/how did they determine the "minimum measurable angular radius"? In the literature, there are quite a few, apparently equivalent, metrics/parameters/whatever (Petrosian radius being a particularly common one). It would be nice to know why the authors decided to use something non-standard, and - more important - how theirs compares with standard ones.


In order to avoid effects due to the luminosity of galaxies, we limited objects in the samples to a narrow range of absolute magnitude M: -17.5 < M < -19.0

Huh? Although the authors may have made clear what they mean by "absolute magnitude" - if one reads hard enough between the lines - I'm pretty sure it's not the usual meaning (which incorporates both a particular cosmological model and specific parameter values). So it would be nice if they spelled out - in some detail - exactly how they derived these "absolute magnitude" values.


These UV data have the important advantage of being sensitive only to emissions from very young stars.

Um, no; just no. It seems that the authors are unaware of AGNs (QSOs, etc)! :surprised:


Therefore we are in no sense looking at progenitors of GALEX galaxies, but rather at galaxies whose stellar populations are comparable in age. By analogy we are looking at populations of “babies” at different epochs in history, not comparing younger and older adults born at the same time.

Wow! It may be minor, but the authors seem to be unaware that metallicity makes a difference.


Finally we restricted the samples to disk galaxies with Sersic number <2.5 so that radii could be measured accurately by measuring the slope of the exponential decline of SB within each galaxy.

Maybe someone else reading this can make sense of it; I can't. How important is it, in terms of what follows (analyses, results, etc)? I don't know, but suspect that it's not trivial.


For the GALEX sample, we measured radial brightness profiles and fitted them with a Sersic law, finding that nearly all these bright UV galaxies, as expected, had Sersic number <2.5

If the authors had a long track-record of this kind of thing, I'd give them the benefit of the doubt (in my experience, even experienced astronomers spell out how they do such measuring and fitting, even if only by reference to a standard tool); however, I suspect this isn't something they've done before, so I'm a bit perplexed as to why they thought it not worth saying anything about the how.

That'll do for now (there's a lot more I could have written).

Questions? Comments?

Shaula
2018-Apr-02, 09:16 PM
Questions? Comments?
I'll be blunter than you have been - there are a large number of arbitrary thresholds and untested methods in the paper. They have shown no modelled validation of the methods, nor have they performed a sensitivity study of their results. These two points alone make it very hard to accept a novel result.

Rereading it I have a bunch of other concerns...
1) The use of the median value for the binned data is likely problematic. Using the median like this is only meaningful if the dataset is much more constrained than it is. Ideally you want to be looking at different observations of very similar bodies, which is not the case here
2) The conversion of circularised to non-circularised values has me scratching my head and is probably something that has a big effect on the measured brightness
3) The bulk of the effect they are reporting looks like it may be being driven by results that are close to 2x the resolution of the systems. I suspect that this part of the analysis is highly unstable.
4) They show that Hubble and GALEX actually make for horrible radius comparisons. But then they do it anyway.

Selfsim
2018-Apr-02, 10:55 PM
I'm in total agreement with both JT's and Shaula's issues, and share some measure of frustration in grappling with the Lerner et al approach.

A couple of observations (and more personal reminders for myself), have been (albeit unsurprising):

i) they have attempted to deliberately exclude mainstream stellar evolution models (and are quite open about this);
ii) as per most EU-style thinking, they have chosen to be guided entirely by where the data leads them (any data, although it ends up being quite selective);
iii) the full details of their analysis have not been disclosed in the respective papers (although Lerner claims that they are);
iv) the style in (iii) leaves the reader with the age-old cry of: "just believe in us .. and in our methodology"

These approaches are pretty clearly deliberate and, I think, require dealing only with the immediate information being analysed. Some very limited mainstream theory is obviously being invoked, which makes probing the way it's being used 'fair game'.

Reality Check
2018-Apr-02, 11:18 PM
The more pertinent conversation is about the integrity of the analysis in his two supporting papers here (https://arxiv.org/pdf/1405.0275.pdf) and here (https://arxiv.org/pdf/1803.08382.pdf).
Lerner states "Previously, the author and colleagues (Lerner, 2006, 2009; Lerner, Falomo, and Scarpa, 2014) have demonstrated that extensive SB data for disk galaxies from GALEX and HUDF is entirely compatible with a static universe where z is linearly proportional to distance for all z.". However his "papers" are actually conference presentations. The published paper UV surface brightness of galaxies from the local Universe to z ~ 5 (https://arxiv.org/abs/1405.0275) has at least 1 fatal flaw.
Look at the analysis that experts in the subject did. Tolman surface brightness test (https://en.wikipedia.org/wiki/Tolman_surface_brightness_test) has the results of 4 papers from Lori M. Lubin and Allan Sandage in 2001:
The Tolman Surface Brightness Test for the Reality of the Expansion. I. Calibration of the Necessary Local Parameters (http://adsabs.harvard.edu/abs/2001AJ....121.2271S)
The Tolman Surface Brightness Test for the Reality of the Expansion. II. The Effect of the Point-Spread Function and Galaxy Ellipticity on the Derived Photometric Parameters (http://adsabs.harvard.edu/abs/2001AJ....121.2289L)
The Tolman Surface Brightness Test for the Reality of the Expansion. III. Hubble Space Telescope Profile and Surface Brightness Data for Early-Type Galaxies in Three High-Redshift Clusters (http://adsabs.harvard.edu/abs/2001AJ....122.1071L)
The Tolman Surface Brightness Test for the Reality of the Expansion. IV. A Measurement of the Tolman Signal and the Luminosity Evolution of Early-Type Galaxies (http://adsabs.harvard.edu/abs/2001AJ....122.1084L)
Sandage followed up with a fifth paper in 2010:
The Tolman Surface Brightness Test for the Reality of the Expansion. V. Provenance of the Test and a New Representation of the Data for Three Remote Hubble Space Telescope Galaxy Clusters (http://adsabs.harvard.edu/abs/2010AJ....139..728S)
The Tolman surface brightness test is a complex test depending on many factors. A short paper on that test is dubious because it could be missing factors. The Lerner et al paper has a quarter or fewer of the pages of the Lubin and Sandage papers. Note that Lubin and Sandage use Keck data and calibrate the data against the physical properties of the Keck telescope (paper II).

The fatal flaw is ignoring that galaxies evolve. As galaxies age the abundance of types of stars changes and their surface brightness changes.

A mistake in the previous and current papers is the phrase "static Euclidean universe" when we know the universe is not Euclidean, i.e. non-Euclidean general relativity works.

The current paper by Lerner alone may have the same flaw because galaxies also physically evolve, e.g. through mergers and stripping stars from colliding galaxies. I also have suspicions about selecting "high luminosity disk galaxies". I would have expected a section showing that having a statistically biased sample would have no effects on his conclusion.
ETA: There is a "8. Does any physical mechanism correctly predict size evolution?" section, which however is about mainstream predictions, not his static universe predictions. We know that the size of galaxies evolves through mergers because we see it happening, have good evidence of past mergers within the Milky Way, and measure that the Andromeda Galaxy is on a collision course with us. So size evolution is not really in dispute. Some predictions are not accurate though.

The selection is followed by "Such galaxies have young stellar populations and thus formed a relatively short time before the time at which they are observed". This is just wrong. That a galaxy has young stars says nothing about the age of the galaxy itself. The only way that could be correct is if the galaxy only had young stars in it.

Copernicus
2018-Apr-03, 02:26 AM
I am surprised Hubble was even able to determine that the Universe was expanding, given how difficult it is to study it now.

Hornblower
2018-Apr-03, 03:16 AM
I am surprised Hubble was even able to determine that the Universe was expanding, given how difficult it is to study it now.

Seeing that there was a large redshift in the light from extremely distant galaxies was relatively easy with the great light-gathering power of the Mount Wilson 100-inch reflector and a suitable spectrograph. Firming up the distance ladder and the value of the redshift parameter was and is the hard part.

Selfsim
2018-Apr-03, 05:37 AM
And here we go with the Galex figure:

Galex Performance, GR1 Mission Section 2.1 Optical Design (http://www.galex.caltech.edu/DATA/gr1_docs/GR1_Mission_Instrument_Overview_v1.htm):

"The design yields a field-averaged spot size of 1.6 arcsec (80%EE) for the FUV imagery and 2.5 arcsec (80%EE) for the FUV spectroscopy at 1600Å. NUV performance is similar. There is no in-flight refocus capability".

So, the above 1.6 arcsec figure for the FUV imagery is much higher than the theoretical diffraction-limited performance calculated earlier, but it is nowhere near the 4.2 arcsec FWHM in FUV figures used by Eric et al (as being indicative of actual performance)!

Strange
2018-Apr-03, 06:34 AM
I am surprised Hubble was even able to determine that the Universe was expanding, given how difficult it is to study it now.

I don’t think Hubble liked the expanding universe idea. It wasn’t him that suggested it. It was Lemaitre a year or two earlier.

I think that the main difficulty today is in trying to find ever more accurate (and earlier) data.

Shaula
2018-Apr-03, 07:37 AM
So, the above 1.6 arcsec figure for the FUV imagery is much higher than the theoretical diffraction limited performance calculated earlier, but it is nowhere near the 4.2 arcsec FHWM in FUV figures used by Eric etal (as being indicative of actual performance)!
And there we hit another potential problem! There are a large number of ways to characterise optical performance. The 1.6-2.5" figures are 80% ensquared energy figures. But the Hubble numbers that have been quoted are variously a resolution (i.e. two proximate PSFs are statistically distinguishable) and a PSF 3 sigma point. Was the system modelling they used to get their 1/38 figure detailed enough or did they just pull numbers from papers and hope they were the same metric?

Selfsim
2018-Apr-03, 10:43 AM
And there we hit another potential problem! There are a large number of ways to characterise optical performance. The 1.6-2.5" figures are 80% ensquared energy figures. But the Hubble numbers that have been quoted are variously a resolution (i.e. two proximate PSFs are statistically distinguishable) and a PSF 3 sigma point. Was the system modelling they used to get their 1/38 figure detailed enough or did they just pull numbers from papers and hope they were the same metric?

Yep .. thanks for pointing that out .. I put that one directly to him.
(From the wording in the UV Surface Brightness paper, 'Section 3. The Samples definition', it appears to me to be the latter of what you say above(?))

Jean Tate
2018-Apr-03, 02:21 PM
I've started to go through the MNRAS paper (Lerner 2018), and find myself frequently wondering who the anonymous referees were. Whoever they were, I sure wish I could get them to review some of my draft papers ... I'd surely have a dozen or so published (in MNRAS) by now! :D

I'll be blunter than Shaula, this paper is garbage, not even worthy of posting to viXra. In many respects, the Lerner+ (2014) is the better paper.


[...] there are a large number of arbitrary thresholds and untested methods in the paper. They have shown no modelled validation of the methods, nor have they performed a sensitivity study of their results. These two points alone make it very hard to accept a novel result.

There may be some (other than where Lerner simply copied others' work), but I have yet to find a single example of a well-tested novel method (as in, one that appears in Lerner 2018 but that I have not encountered in my own reading of the literature).

As has been noted several times, critical to Lerner (2018)'s results and conclusions is the robustness of the methods used in Lerner+ (2014), especially those using GALEX data. For example, in Figure 2:


The log of the median radii of UV-bright disk galaxies M ~ -18 from Shibuya et al, 2016 and the GALEX point at z=0.027 from Lerner, Scarpa and Falomo, 2014 is plotted against log of H(z), the Hubble radius at the given redshift

Remove that "GALEX point at z=0.027" and I doubt that there's much left to say, in the whole Lerner (2018) paper.


1) The use of the median value for the binned data is likely problematic. Using the median like this is only meaningful if the dataset is much more constrained than it is. Ideally you want to be looking at different observations of very similar bodies, which is not the case here
[...]
3) The bulk of the effect they are reporting looks like it may be being driven by results that are close to 2x the resolution of the systems. I suspect that this part of the analysis is highly unstable.
4) They show that Hubble and GALEX actually make for horrible radius comparisons. But then they do it anyway.

And there we hit another potential problem! There are a large number of ways to characterise optical performance. The 1.6-2.5" figures are 80% ensquared energy figures. But the Hubble numbers that have been quoted are variously a resolution (i.e. two proximate PSFs are statistically distinguishable) and a PSF 3 sigma point. Was the system modelling they used to get their 1/38 figure detailed enough or did they just pull numbers from papers and hope they were the same metric?

A common, almost standard, approach is to make the PSF a central part of one's data reduction and analysis. Good practice is to derive a PSF empirically, from the data itself, using objects known to be stars (my comments here are not entirely relevant to the Hubble and its various cameras; as has already been noted, Lerner does not really do any of his own analyses on Hubble data, relying instead on others'). This is often checked against engineering and design test data, to flag any unexpected issues for example.

For radial profile, surface brightness, "size", etc estimates, one at least considers deconvolving the photometric data with a robust and relevant PSF. You would certainly want to do that if you are fitting Sersic profiles to objects which are ~a few times bigger than the PSF (for Lerner+ 2014, that's a substantial fraction of their entire GALEX sample). And, at least in my experience, getting robust results from such deconvolution-then-profile fitting on objects which are ~the same size as the PSF is nigh on impossible (again, excluding Hubble and many of its cameras). In particular, if the PSF is ~a Gaussian, then its Sersic index will be ~0.5, which is <2.5 (duh). Also, an AGN will lower the Sersic index of a galaxy (if you're trying to fit a single Sersic profile, and not trying to decompose the object into a PSF AGN and "the rest"), and Lerner+ (2014) seems to ignore the possibility of AGNs.
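
On the Gaussian-PSF point: a Sersic profile with n = 0.5 is exactly a Gaussian (the exponent becomes quadratic in r), which is easy to verify numerically. A minimal sketch (my own illustration, noiseless, assuming scipy is available and using a standard approximation for b_n):

import numpy as np
from scipy.optimize import curve_fit

def sersic(r, i_e, r_e, n):
    b_n = 2.0 * n - 1.0 / 3.0   # common approximation to the Sersic b_n
    return i_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.05, 5.0, 200)
psf = np.exp(-r**2 / 2.0)       # unit-width Gaussian 'PSF' radial profile

popt, _ = curve_fit(sersic, r, psf, p0=(0.5, 1.0, 1.0),
                    bounds=([1e-3, 1e-2, 0.1], [10.0, 10.0, 10.0]))
print(f"best-fit Sersic index n = {popt[2]:.3f}")   # ~0.5, i.e. well below 2.5

So an object whose light is dominated by a roughly Gaussian PSF will naturally fit with a Sersic index far below the n < 2.5 'disk' threshold, exactly as noted above.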

I think it's unusual that a paper as awful as Lerner (2018) gets published in MNRAS (yes, there certainly are some, um, poor-quality papers in MNRAS, but Lerner 2018 is well beyond "poor-quality"). But, it is what it is; myself, I'm not going to waste any more time on it.

Jean Tate
2018-Apr-03, 09:00 PM
OK, I wasn't quite done.

I had always intended to write a post in the PF thread on Lerner (2018), and had even written a draft. However, between draft and post there were many, um, slips. So I've only just now got my PF post up.

Selfsim
2018-Apr-03, 10:28 PM
Personally, I think Lerner's 'cutoff' excluding data when it shouldn't, totally lays waste to his methodology.

Hubble operates so closely to the theoretical limits that it exposes just how wrong his method is.

I'm just about done on this one, as a direct result of this show-stopper (and I suspect Lerner now realises it also).

Selfsim
2018-Apr-03, 11:11 PM
Just clarifying my last comment, Lerner's method allows more Galex data to be included in the analysis because his Galex data cutoffs are ~50% lower (2.4 and 2.6 arcsecs) than what he claims are the actual Galex scope resolution limits (ie: 4.2 and 5.3 arcsec FWHM for FUV and NUV respectively). His method doesn't correct for this.

Then, for the Hubble data, his cutoffs don't vary with the wavelength of the observations, even though the observations approach the theoretical (Rayleigh) optical limits of the scope (and they do approach them).

The 1/38 ratio figure he uses may as well be some arbitrarily selected number, because there is no sense to his cutoff values in the first place.

The Hubble data itself thus refutes his methodology, through its failure to find resolution differences in the individual HUDF filter data.
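
As a quick arithmetic cross-check of those figures (my own, using only numbers already quoted in this thread and the 2014 paper):

# Lerner+ (2014) cutoff radii vs the quoted instrument FWHM values (all arcsec)
galex_cutoffs = {"FUV": 2.4, "NUV": 2.6}
galex_fwhm = {"FUV": 4.2, "NUV": 5.3}
hubble_cutoff = 0.066

for band in ("FUV", "NUV"):
    frac = galex_cutoffs[band] / galex_fwhm[band]
    ratio = galex_cutoffs[band] / hubble_cutoff
    print(f"{band}: cutoff = {frac:.0%} of quoted FWHM, "
          f"Galex/Hubble cutoff ratio = {ratio:.1f}")

The cutoffs sit at roughly 49-57% of the quoted FWHM figures, and the Galex/Hubble cutoff ratios come out at ~36 and ~39, which is presumably where the ~1/38 figure comes from.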

Copernicus
2018-Apr-03, 11:26 PM
Just clarifying my last comment, Lerner's method allows more Galex data to be included in the analysis because his Galex data cutoffs are ~50% lower (2.4 and 2.6 arcsecs) than what he claims are the actual Galex scope resolution limits (ie: 4.2 and 5.3 arcsec FWHM for FUV and NUV respectively). His method doesn't correct for this.

Then, for the Hubble data, his cutoffs don't vary with the wavelength of the observations, even though the observations approach the theoretical (Rayleigh) optical limits of the scope (and they do approach them).

The 1/38 ratio figure he uses may as well be some arbitrarily selected number, because there is no sense to his cutoff values in the first place.

The Hubble data itself thus refutes his methodology, through its failure to find resolution differences in the individual HUDF filter data.



Thanks for reviewing this Selfsim! I thought this science was much more clear cut.

Selfsim
2018-Apr-04, 02:22 AM
I thought this science was much more clear cut.

Umm .. I think the basic Physics is clear cut for those who are familiar with it. This also makes the expanding universe model a cut-and-dried matter for these folk.

The usual frustration surfaces when other models are claimed to be 'simpler and therefore .. must be correct' whilst such claimants blunder along, sometimes deliberately concealing the fundamental Physics principles which can easily refute those models. I'm not saying whether this is the case here or not .. but time will tell, I suppose.

Shaula
2018-Apr-04, 05:26 AM
Thanks for reviewing this Selfsim! I thought this science was much more clear cut.
It is, which is why the vast majority of the community holds that the current models of an expanding universe are good. That doesn't stop people writing papers like this in support of other theories or ideas. You frequently see claims about how dogmatic scientists are and how you can't possibly publish alternative ideas. Well here is proof you can. And a demonstration of how peer review can work.

Selfsim
2018-Apr-04, 06:01 AM
.. That doesn't stop people writing papers like this in support of other theories or ideas. You frequently see claims about how dogmatic scientists are and how you can't possibly publish alternative ideas. Well here is proof you can. And a demonstration of how peer review can work.

It's a pity we can't say the peer reviewers at the International Journal of Modern Physics and MNRAS did their work, though ..

Jean Tate
2018-Apr-04, 07:16 PM
Thinking this over last night, I've had a bit of a change of heart.

You see, "challenging the mainstream" papers in the likes of MNRAS are rare. And ones like Lerner+ (2014) - not itself published in MNRAS, but critical to Lerner (2018) ("L18"), which is - offer a rare opportunity. To take citizen science beyond the likes of Galaxy Zoo. To do something that's close to what astronomers actually do when they "do" astronomy as a science. And to see - by one's own efforts - how to go about trying to independently, objectively verify (or not!) a "challenge the mainstream" published result.

To that end, I have just written several posts in the PF thread on L18 (https://www.physicsforums.com/threads/observational-evidence-against-expanding-universe-in-mnras.943111/) (on p5), outlining my idea.

Selfsim
2018-Apr-05, 10:54 PM
So, just an update: Eric's post #92 (https://www.physicsforums.com/threads/observational-evidence-against-expanding-universe-in-mnras.943111/page-5#post-5972714) at PFs (in as far as it addresses my particular issues) has clarified his evidently singular view of the spatial resolution parameter, via his usage of only the telescope/camera combo figure (thus forsaking any consideration of the optical performance/wavelength issues in his method).

I suppose my post may have discovered one way of digging out some of the thinking behind his method (given that he hasn't previously elaborated on this aspect).

Following on from Shaula's prods/hints to me in this thread, we've put our theoretically based position (the Rayleigh criterion for optical resolution) to a practical test, and generated results in post #94 (https://www.physicsforums.com/threads/observational-evidence-against-expanding-universe-in-mnras.943111/page-5#post-5972794) at PFs. They demonstrate that in all cases tested, the FWHM of selected stars was higher in the optically filtered F814W data. Thus, the lack of differentiation of this aspect by Eric's method evidently introduces uncertainty which has not been addressed by him in his analysis/papers.
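
For anyone wanting to repeat that kind of check, the idea is simply to fit a Gaussian to a star's brightness profile in each filter's image and compare the FWHMs. A minimal sketch with synthetic data (my own illustration, assuming numpy and scipy are available; not the actual test code):

import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma, floor):
    return amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) + floor

# Synthetic 1-D cut across a star; in practice x and flux come from the image.
x = np.linspace(-5.0, 5.0, 101)
rng = np.random.default_rng(0)
flux = gauss(x, 1.0, 0.0, 0.8, 0.05) + rng.normal(0.0, 0.01, x.size)

popt, _ = curve_fit(gauss, x, flux, p0=(1.0, 0.0, 1.0, 0.0))
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM = 2.3548 * sigma
print(f"measured FWHM = {fwhm:.2f} pixels")

Doing this for the same stars in two filters' data shows directly whether the effective resolution differs between them.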

slang
2018-Apr-05, 11:04 PM
Thanks for the updates on that PF thread. Normally we don't really like "carrying over" discussions from other forums, but I guess that in this case the participation of the original author makes it especially worthwhile.

Selfsim
2018-Apr-06, 12:15 AM
Thanks for the updates on that PF thread. Normally we don't really like "carrying over" discussions from other forums, but I guess that in this case the participation of the original author makes it especially worthwhile.

Ok, thanks and understood.

For my part in this thread going forward, my main intention is only to report on areas where the author of the paper has altered/corrected my understandings of his analysis, as I've posted here. I'm also happy to continue side discussions with other CQ members about the paper topic here, (of course).

If any issues arise where only the author can resolve them, I think it only fair to give him the right of direct reply on the other forum, as I don't think he's an active member here(?)

(Just tryin' to do the right thing all round here).