
View Full Version : Which one of these two mainstream models gets it right?



Don J
2012-May-08, 07:17 PM
Which model gets the formation of the filamentary and large-scale structures of the Universe right?

The one based on magnetic fields and gravity?
http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1997A%26A...326...13B&db_key=AST

Or the purely gravitational model based on dark matter?

http://arxiv.org/abs/astro-ph/0504097

antoniseb
2012-May-08, 07:55 PM
This isn't really a Q&A topic. If you already have a formed opinion, you should be taking this someplace else, such as Astronomy if you want a mainstream discussion, or ATM if you have something you want to try out.

Personally I don't like your rhetorical style that implies that exactly one of these is exactly right. Odds are there's some truth in both papers, and neither is perfectly right.

Don J
2012-May-08, 08:01 PM
This isn't really a Q&A topic. If you already have a formed opinion, you should be taking this someplace else, such as Astronomy if you want a mainstream discussion,


Ok for a transfer to Astronomy.




Personally I don't like your rhetorical style that implies that exactly one of these is exactly right. Odds are there's some truth in both papers, and neither is perfectly right.

antoniseb
2012-May-08, 08:08 PM
I have moved it, and left a redirect that will expire in 24 hours.

tusenfem
2012-May-09, 06:05 AM
IMHO it's not an either/or question, it is just two different approaches, neither of which will probably get the large-scale structures completely correct. Most likely both processes work at the same time and a joint model will have to be created. It's sort of like the solar wind: it is not that we do not know how it is generated, it is that there are too many processes that can generate it, and they have to be combined.

Don J
2012-May-10, 01:44 AM
IMHO it's not an either/or question, it is just two different approaches, neither of which will probably get the large-scale structures completely correct. Most likely both processes work at the same time and a joint model will have to be created. It's sort of like the solar wind: it is not that we do not know how it is generated, it is that there are too many processes that can generate it, and they have to be combined.
Battaner, in part 4, claims that the CDM-based models of 1982 and 1997 predicted a random distribution of galaxies on very large scales. This is contradicted by observations, which instead show considerable regularity (Battaner's wording)...
See the introduction chapter for details.
Part 4:
http://adsabs.harvard.edu/abs/1998A%26A...338..383B

parejkoj
2012-May-10, 02:33 AM
I wouldn't call Battaner's work mainstream at all. Certainly, it is never discussed in modern studies of the matter evolution of the Universe.

Springel et al.'s work on the Millennium Simulation is very close to the observed properties of the matter distribution. The differences can be mostly ascribed to their choices of initial parameters being slightly different.

Don J
2012-May-10, 03:31 AM
I wouldn't call Battaner's work mainstream at all. Certainly, it is never discussed in modern studies of the matter evolution of the Universe.

It seems that the Battaner model (1997-98) is a reply to the failure of the earlier CDM models' predictions, made in 1982 and 1997, to match observation.
http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1997A%26A...326...13B&db_key=AST

See the reference in post 6.



Springel et al.'s work on the Millennium Simulation is very close to the observed properties of the matter distribution. The differences can be mostly ascribed to their choices of initial parameters being slightly different.
Yep! But there is a trick...

If you look at page 19 you will see that the parameters they used for their model were specifically selected to match survey results: "our adopted parameter values are consistent with a combined analysis of the 2dFGRS surveys and first WMAP data" (citation).

http://arxiv.org/abs/astro-ph/0504097

Reality Check
2012-May-10, 04:02 AM
The trick is that they selected the "cosmological parameters of our LCDM-simulation" that were "consistent with a combined analysis of the 2dFGRS surveys and first WMAP data". They want to match the results of their simulation with the actual observations (e.g. the 2dFGRS surveys) so this selection is not surprising.

Don J
2012-May-10, 04:17 AM
The trick is that they selected the "cosmological parameters of our LCDM-simulation" that were "consistent with a combined analysis of the 2dFGRS surveys and first WMAP data". They want to match the results of their simulation with the actual observations (e.g. the 2dFGRS surveys) so this selection is not surprising.
No, that is not surprising at all. However, how can they claim that they are making predictions? Are they talking about the evolution of the large-scale structures over billions of years?

Because, as pointed out in post 6:

CDM-based models made in 1982 and 1997 predicted a random distribution of galaxies on very large scales. This was contradicted by observations, which instead show considerable regularity.

Shaula
2012-May-10, 05:30 AM
If you look at page 19 you will see that the parameters they used for their model were specifically selected to match survey results: "our adopted parameter values are consistent with a combined analysis of the 2dFGRS surveys and first WMAP data" (citation).
You mean... They are using evidence to refine their model?! What a disgrace!

Generally with these sorts of models you can count it as a success if you get more out than you put in. If you can explain a lot of things you see with a small number of tuned parameters in a model then it is a useful one. Just look at the Standard Model.

BTW - saying that a 30-year-old model made in the early history of LCDM modelling got the answers wrong is not a good argument against modern models.

Don J
2012-May-10, 05:45 AM
You mean... They are using evidence to refine their model?! What a disgrace!

Generally with these sorts of models you can count it as a success if you get more out than you put in. If you can explain a lot of things you see with a small number of tuned parameters in a model then it is a useful one. Just look at the Standard Model.


But the point is that they claim they are making predictions with their model, when they don't.



If you can explain a lot of things you see with a small number of tuned parameters in a model then it is a useful one.

Can you point to the new 'lot of things' that this model can explain which were not already known at the time of the simulation?

Shaula
2012-May-10, 06:34 AM
My point about the lot of things was that predictions are hard in astrophysical terms. We have made a lot of observations and we do not have the luxury of being able to wait a billion years and see if we are right. A predictive success for models like this can include it explaining something you did not give it as a starting condition. So (as a random example) if you just gave it a few parameters relating to density and temperature, tuned it to match the CMBR and it produced galaxies then that would be a predictive success. The model did not know about galaxies but it produced them.

Sometimes you get lucky and a model predicts something you have not looked for - but in general the data has already been collected, now we need models to tie it together.

Don J
2012-May-10, 08:31 AM
My point about the lot of things was that predictions are hard in astrophysical terms. We have made a lot of observations and we do not have the luxury of being able to wait a billion years and see if we are right.

So you agree that their so-called "prediction" in the introduction chapter relates in fact to the description of the initial conditions in the early Universe which finally led to the formation of the large-scale structures...



A predictive success for models like this can include it explaining something you did not give it as a starting condition. So (as a random example) if you just gave it a few parameters relating to density and temperature, tuned it to match the CMBR and it produced galaxies then that would be a predictive success. The model did not know about galaxies but it produced them.

Wrong, the model was adjusted all along to match specific characteristics of galaxies.
See page 7:
..."the modelling assumptions and parameters are adjusted by trial and error in order to fit the observed properties of low-redshift galaxies"

tusenfem
2012-May-10, 09:37 AM
Wrong, the model was adjusted all along to match specific characteristics of galaxies.
See page 7:
..."the modelling assumptions and parameters are adjusted by trial and error in order to fit the observed properties of low-redshift galaxies"

Ehhh, DUH! That is how scientific research works. You make assumptions about the "initial parameters" of the universe, then see whether the result is in agreement with what we see, if not, then adjust the parameters, as those are basically the unknowns of the universe.

What is it that you expect? Just throw in random stuff and if that does not work, throw away your computer model?

Naturally you want the magnetic model to be the "correct" one, taking your background into consideration, but like I said before this is not an either/or question, because nature does not make it that simple. You may want to throw in an incorrect paper from 1982, but that argument is laughable because, as said before, that is the beginning stage of this kind of science, and that paper can hardly be used as an argument.

Shaula
2012-May-10, 10:20 AM
Wrong, the model was adjusted all along to match specific characteristics of galaxies.
See page 7:
..."the modelling assumptions and parameters are adjusted by trial and error in order to fit the observed properties of low-redshift galaxies"
Please read my posts more carefully. The bit you bolded is being interpreted out of context.

So (as a random example) if you just gave it a few parameters relating to density and temperature, tuned it to match the CMBR and it produced galaxies then that would be a predictive success. The model did not know about galaxies but it produced them.
That is what I said. It was being used as an example, not as a specific. It was showing what predictive power could mean in astrophysical models. It was not a commentary on the model or paper presented.

parejkoj
2012-May-10, 02:01 PM
There was no Lambda-CDM model in the 80s: the dark matter models of the time were a variety of warm, hot, and cold dark matter, with several possibilities for the values of omega_M and omega_b. It turns out that if you don't have the correct values of the initial density parameters, you don't get the matter evolution of the Universe correct. Who knew?

Don J
2012-May-10, 06:29 PM
Please read my posts more carefully. The bit you bolded is being interpreted out of context.

That is what I said. It was being used as an example, not as a specific. It was showing what predictive power could mean in astrophysical models. It was not a commentary on the model or paper presented.

Right, I understand what you proposed; however, your specific claim sounds like you are saying that they only put in a few basic parameters and let the model run from that point, which is totally wrong. See post 19.

This is your specific claim:
So (as a random example) if you just gave it a few parameters relating to density and temperature, tuned it to match the CMBR and it produced galaxies then that would be a predictive success. The model did not know about galaxies but it produced them.

ETA


It was showing what predictive power could mean in astrophysical models.

Can you point out the predictive power (a prediction) this model has demonstrated about something that was not already known at the time of the simulation?

Don J
2012-May-10, 06:35 PM
Ehhh, DUH! That is how scientific research works. You make assumptions about the "initial parameters" of the universe, then see whether the result is in agreement with what we see, if not, then adjust the parameters, as those are basically the unknowns of the universe.

What is it that you expect? Just throw in random stuff and if that does not work, throw away your computer model?

Naturally you want the magnetic model to be the "correct" one, taking your background into consideration, but like I said before this is not an either/or question, because nature does not make it that simple. You may want to throw in an incorrect paper from 1982, but that argument is laughable because, as said before, that is the beginning stage of this kind of science, and that paper can hardly be used as an argument.
The CDM model made in 1997 was no better.

The present CDM model looks more like a staged movie. See page 7 on the specific algorithms used for galaxy formation, for example.

http://arxiv.org/abs/astro-ph/0504097

parejkoj
2012-May-10, 07:16 PM
Right, I understand what you proposed; however, your specific claim sounds like you are saying that they only put in a few basic parameters and let the model run from that point, which is totally wrong.

Actually, that's exactly what they did. Why do you think it isn't? If there is a part of the paper you are confused about, perhaps you could quote it and we could discuss it.

Their semi-analytical model for galaxy formation is a model for how galaxies occupy dark matter halos, and was not tuned to match observations. They did not follow the formation of individual stars, gas and dust into galaxies (that's impossible given current computing capabilities), but rather parametrized our understanding of that process with a simpler procedure (the "semi-analytic model"), and applied that procedure to the dark matter distribution.

Shaula
2012-May-10, 07:19 PM
Can you point out the predictive power (a prediction) this model has demonstrated about something that was not already known at the time of the simulation?
Already covered this. Not going to keep repeating myself.

Don J
2012-May-10, 07:23 PM
Already covered this. Not going to keep repeating myself.
Your answer is in post 13... I presume...

Don J
2012-May-10, 07:29 PM
Actually, that's exactly what they did. Why do you think it isn't? If there is a part of the paper you are confused about, perhaps you could quote it and we could discuss it.

Their semi-analytical model for galaxy formation is a model for how galaxies occupy dark matter halos, and was not tuned to match observations. They did not follow the formation of individual stars, gas and dust into galaxies (that's impossible given current computing capabilities), but rather parametrized our understanding of that process with a simpler procedure (the "semi-analytic model"), and applied that procedure to the dark matter distribution.
I was replying to the specific claim made by Shaula in post 13...


if you just gave it a few parameters relating to density and temperature, tuned it to match the CMBR and it produced galaxies then that would be a predictive success. The model did not know about galaxies but it produced them.

You are confirming that Shaula is wrong when he claims that "the model did not know about galaxies but it produced them."

Shaula
2012-May-10, 08:42 PM
You are confirming that Shaula is wrong when he claims that "the model did not know about galaxies but it produced them."
And I have already covered that. It was not a specific claim it was a demonstrative example of, hypothetically, how a model could be shown to have predictive power if it was shown to match observations not used to 'train' it.

Is it really that hard to understand? You were questioning how an astrophysical model could be predictive, I was explaining. I was not referring to any particular model.

Don J
2012-May-10, 08:57 PM
And I have already covered that. It was not a specific claim it was a demonstrative example of, hypothetically, how a model could be shown to have predictive power if it was shown to match observations not used to 'train' it.

Is it really that hard to understand? You were questioning how an astrophysical model could be predictive, I was explaining. I was not referring to any particular model.
Thanks for the clarification...! No, it is not really hard to understand that this model has no predictive power, because, contrary to your demonstrative example, the present model was trained to match observations. See post 19.

http://arxiv.org/abs/astro-ph/0504097

Reality Check
2012-May-11, 01:59 AM
No, that is not surprising at all. However, how can they claim that they are making predictions? Are they talking about the evolution of the large-scale structures over billions of years?

Yes, they are making predictions!
The evolution of the large scale structures in billions of years is an output of the model. It is not part of the input. It is a prediction of the model.


CDM-based models made in 1982 and 1997...
Which has nothing to do with Springel et al.'s work on the Millennium Simulation, which was done in 2005.

Reality Check
2012-May-11, 02:10 AM
You are confirming that Shaula is wrong when he claims that "the model did not know about galaxies but it produced them."
He is confirming what anyone who knows about the simulation knows: that the model did not know about galaxies but it produced them. And:

To track the formation of galaxies and quasars in the simulation, we implement a semianalytic model to follow gas, star and supermassive black hole processes within the merger history trees of dark matter halos and their substructures (see Supplementary Information). The trees contain a total of about 800 million nodes, each corresponding to a dark matter subhalo and its associated galaxies. This methodology allows us to test, during postprocessing, many different phenomenological treatments of gas cooling, star formation, AGN growth,
feedback, chemical enrichment, etc.
My emphasis added.
Each node in the simulation contained multiple galaxies. So they used other models to simulate the galaxies, for the simple reason that the nodes did not go down to the atoms in the universe!

Don J
2012-May-11, 03:12 AM
Yes, they are making predictions!
The evolution of the large scale structures in billions of years is an output of the model. It is not part of the input. It is a prediction of the model.

Right, but as I pointed out in the OP there is another mainstream model, based on magnetic fields, which predicts the same thing,
here:
http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1997A%26A...326...13B&db_key=AST



He is confirming what anyone who knows about the simulation knows: that the model did not know about galaxies but it produced them.

Wrong, their CDM model knew about galaxy formation,
as I pointed out numerous times...
See page 7 of the paper for the specific semi-analytic model used to account for galaxy formation, and the adjustments by trial and error they were forced to make in their CDM model to match observations.
http://arxiv.org/abs/astro-ph/0504097

parejkoj
2012-May-11, 03:26 AM
So, I feel like I need to clear up some confusion on the part of several people here about what Springel et al. (and most modern cosmological simulations) did.

A semi-analytic model does not "produce galaxies" in the sense of generating them ab initio from the gravitational collapse of gas, but rather assigns galaxies to dark matter halos following some prescription. The overall matter distribution is set by the cosmological parameters and however they generate the initial Gaussian random field. The small to medium scale galaxy clustering (correlation function or power spectrum) at a given redshift is thus determined by both how they assign galaxies to halos and the clustering of the dark matter particles (and thus halos) at that redshift. Because of the nature of the gravitational collapse of structure, the clustering of matter on large scales (~>70Mpc) is also determined by both, because of effects like the Baryon Acoustic Oscillation (BAO) which shows up at scales around 150Mpc.

So, a simulation like the Millennium run makes direct predictions about both the small scale and large scale clustering of galaxies of various types (whatever types the semi-analytic model produces) throughout the range of redshift that the simulation covers. Those predictions can be verified or rejected by observations of those types of galaxies at the appropriate redshift. By providing initial cosmological parameters that match the results of, e.g., the WMAP data, we can test whether our understanding of cosmology from WMAP matches with what we can observe from the large scale distribution of galaxies (it does). And by the same token, by running a suite of simulations with different cosmological parameters, we can see what other choices of cosmological parameters also produce a similar large scale galaxy distribution (very few).
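To make "assigns galaxies to dark matter halos following some prescription" concrete, here is a minimal toy sketch in Python (my own illustration, with made-up halo masses and a simple halo-occupation style recipe, not the actual Springel et al. semi-analytic code). The point is that the clustering of the resulting galaxy catalogue is an output of the halo clustering plus the recipe, not an input:

import numpy as np

# Toy halo-occupation sketch (illustrative only, NOT the Millennium
# semi-analytic model). Halo masses, box size and recipe parameters
# below are hypothetical placeholders.
rng = np.random.default_rng(42)

halo_mass = 10.0 ** rng.uniform(11.0, 15.0, size=10_000)   # solar masses (assumed)
halo_pos = rng.uniform(0.0, 500.0, size=(10_000, 3))       # Mpc/h (assumed box)

# Prescription: one central galaxy above a mass threshold, plus a
# Poisson number of satellites that scales with halo mass.
M_min, M_1, alpha = 1e12, 2e13, 1.0                        # assumed parameters
n_cen = (halo_mass > M_min).astype(int)
n_sat = rng.poisson(n_cen * (halo_mass / M_1) ** alpha)

# Galaxies inherit (roughly) the positions of their halos; any clustering
# statistic measured on galaxy_pos is therefore a prediction, not an input.
galaxy_pos = np.repeat(halo_pos, n_cen + n_sat, axis=0)
print(f"{len(galaxy_pos)} galaxies placed in {len(halo_mass)} halos")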

parejkoj
2012-May-11, 03:31 AM
Right, but as I pointed out in the OP there is another mainstream model, based on magnetic fields, which predicts the same thing,
here:
http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1997A%26A...326...13B&db_key=AST


That is not a mainstream model, and that paper does not provide any comparison with observations. In addition, the understanding of the large scale structure of galaxies was quite limited during the 90s, so any such comparison (though again, that paper provides none) would have to be revisited with more recent data from e.g. SDSS or 2dF.

Don J
2012-May-11, 03:38 AM
So, I feel like I need to clear up some confusion on the part of several people here about what Springel et al. (and most modern cosmological simulations) did.

A semi-analytic model does not "produce galaxies" in the sense of generating them ab initio from the gravitational collapse of gas, but rather assigns galaxies to dark matter halos following some prescription. The overall matter distribution is set by the cosmological parameters and however they generate the initial Gaussian random field. The small to medium scale galaxy clustering (correlation function or power spectrum) at a given redshift is thus determined by both how they assign galaxies to halos and the clustering of the dark matter particles (and thus halos) at that redshift. Because of the nature of the gravitational collapse of structure, the clustering of matter on large scales (~>70Mpc) is also determined by both, because of effects like the Baryon Acoustic Oscillation (BAO) which shows up at scales around 150Mpc.


Thanks for the clarification.

A semi-analytic model......assigns galaxies to dark matter halos following some prescription.

The small to medium scale galaxy clustering (correlation function or power spectrum) at a given redshift is thus determined by both how they assign galaxies to halos and the clustering of the dark matter particles (and thus halos) at that redshift.

So the point is about the galaxy clustering and redshift that the semi-analytic model introduces into their CDM model... which still means that the model knew about galaxies, in terms of data gathered from observational surveys of galaxy clustering and redshift.

parejkoj
2012-May-11, 04:29 AM
Thanks for the clarification.
So the point is about the galaxy clustering and redshift that the semi-analytic model introduces into their CDM model... which still means that the model knew about galaxies, in terms of data gathered from observational surveys of galaxy clustering and redshift.

No, I think you're still misunderstanding. The prescription for how to populate halos with galaxies doesn't know about "galaxy clustering and redshift". It knows about baryon physics, and something about how baryons are distributed relative to the dark matter. That's it. The tuning that happens is required because some aspects of the baryon physics are very complicated and not that well constrained, but the model is not tuned to match observations of galaxy clustering. The clustering is a direct prediction of the model.

Don J
2012-May-11, 04:44 AM
No, I think you're still misunderstanding. The prescription for how to populate halos with galaxies doesn't know about "galaxy clustering and redshift". It knows about baryon physics, and something about how baryons are distributed relative to the dark matter. That's it. The tuning that happens is required because some aspects of the baryon physics are very complicated and not that well constrained, but the model is not tuned to match observations of galaxy clustering. The clustering is a direct prediction of the model.

But they specifically claim on page 19: "our adopted parameter values are consistent with a combined analysis of the 2dFGRS surveys and first WMAP data".
http://arxiv.org/abs/astro-ph/0504097
So it seems that they plugged in the results of the combined analysis of the 2dFGRS surveys and first WMAP data to run the program... right?

ETA
On page 18, in the Methods chapter, they describe the code used for the simulation, called the hierarchical multipole expansion method, or "tree" algorithm.
I think it is a similar version or a modification of this one:
http://ai.stanford.edu/~paskin/gm-short-course/lec3.pdf

Jerry
2012-May-11, 05:17 PM
Models always assume round cows. No one has ever made a model based upon first principles that create the universe as we know it. As more knowledge of what the universe is has rolled into our path, the more complex the models must be to handle the details.

The danger in this approach is that by adding more assumptions and parameters, more confidence ends up going into the model than is warranted. A good example is the University of Colorado 'hurricane predictions' posted in late March or early April. Over the last five years, these models demonstrated zero predictive power and had to be scrapped. There was nothing wrong with the physics in the models - the problem was the inability of the model to adapt to rapid climate change. Cause-and-effect assumptions were just plain wrong.

We should enjoy the fact that we have toy models that roll out a universe similar to what we see. But these models are toys - not hard physical solutions, and the underlying danger is that more confidence is placed in 'established scientific principles' than should be. In any and all cases, we are infants sharing ideas about something we are trying to wrap our collective arms around. We should welcome as many models as possible, and scrutinize the assumptions that seem to have the best predictive power - both into the past and into the future. There is much to learn.

Meanwhile, back at this thread - It seems to me that a model based upon both gravitational and electrodynamics has a much better chance of surviving into the future than a dark matter model. NONE of the predicted attributes of dark matter have shown their signature. Dark matter is a vacuous unsubstantiatable physical assumption, and the sooner we find a way to discard it, the better.

Tensor
2012-May-12, 11:44 PM
But they specifically claim on page 19: "our adopted parameter values are consistent with a combined analysis of the 2dFGRS surveys and first WMAP data".
http://arxiv.org/abs/astro-ph/0504097
So it seems that they plugged in the results of the combined analysis of the 2dFGRS surveys and first WMAP data to run the program... right?

Don, any astrophysical paper will specify the assumed cosmological values. Those values would be different if you assumed MOND, but there would still be values that have to be plugged in. Lookback time, co-moving distance, etc. are all different if the parameters are different; that's why they have to specify what parameters they are using. For instance, This paper (http://arxiv.org/pdf/1204.2838v1.pdf), at the bottom right of page 2, lists the cosmological parameters used in the paper. That was a random paper I found by typing Gunn-Peterson into Google; it was the fifth item on the page.
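As a concrete illustration of how derived quantities depend on the assumed parameters, here is a short Python sketch (my own, not from any of the papers) that integrates the Hubble function of a flat ΛCDM model to get the comoving distance for two different parameter sets:

import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def comoving_distance(z, h, omega_m, omega_lambda):
    # Comoving distance in Mpc for a flat LCDM model (radiation neglected),
    # via a simple trapezoidal integration of c / H(z).
    zs = np.linspace(0.0, z, 10001)
    hubble = 100.0 * h * np.sqrt(omega_m * (1.0 + zs) ** 3 + omega_lambda)  # km/s/Mpc
    integrand = 1.0 / hubble
    return C_KM_S * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs))

# Millennium-style parameters versus an arbitrary alternative set:
print(comoving_distance(1.0, 0.73, 0.25, 0.75))   # roughly 3270 Mpc
print(comoving_distance(1.0, 0.73, 0.50, 0.50))   # noticeably smaller, ~2860 Mpc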

And, just as a note, they are trying to model the evolution of galaxies, quasars and their distribution. They plug in the values that have been measured now, and let the thing run from the last scattering. If the end result of the simulation isn't what we see, then there may be a problem with our current model. If the end of the simulation matches what we currently see, then our current model may be on the right path. This isn't to say that if it doesn't match there's something wrong or if it does, it definitely confirms it. Just that it's a piece of evidence either for (it matches) or against (it doesn't match)


ETA
On page 18, in the Methods chapter, they describe the code used for the simulation, called the hierarchical multipole expansion method, or "tree" algorithm.
I think it is a similar version or a modification of this one:
http://ai.stanford.edu/~paskin/gm-short-course/lec3.pdf

Actually, no, it's not. That particular one is used in artificial intelligence and information theory, not N-body simulations. The code that Springel et al. used is a combination of the Particle Mesh and TREE codes. This paper (http://arxiv.org/pdf/astro-ph/9409021v1.pdf) describes the merging of PM and TREE codes. The TREE codes came out of work Josh Barnes and Piet Hut (http://en.wikipedia.org/wiki/Barnes%E2%80%93Hut_simulation) did.
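For anyone curious what the "tree" part buys you, here is a stripped-down Python sketch of the core idea behind a Barnes-Hut / multipole tree node (my own toy example in G = 1 units, not the GADGET-style code actually used for the Millennium run): a distant clump of particles is replaced by its total mass at its centre of mass, which is far cheaper than direct summation and nearly as accurate when the clump is far away.

import numpy as np

# Illustrative monopole (centre-of-mass) approximation for a distant clump.
rng = np.random.default_rng(0)
cluster = rng.normal(loc=[100.0, 0.0, 0.0], scale=1.0, size=(1000, 3))  # far-away clump
masses = np.full(len(cluster), 1.0)
test_particle = np.zeros(3)

# Direct summation: add up the contribution of every particle.
r = cluster - test_particle
direct = np.sum(masses[:, None] * r / np.linalg.norm(r, axis=1)[:, None] ** 3, axis=0)

# Tree-node approximation: all of the clump's mass placed at its centre of mass.
com = np.average(cluster, weights=masses, axis=0)
d = com - test_particle
monopole = masses.sum() * d / np.linalg.norm(d) ** 3

print(direct, monopole)  # nearly identical because the clump is far away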

Don J
2012-May-13, 03:38 AM
And, just as a note, they are trying to model the evolution of galaxies, quasars and their distribution. They plug in the values that have been measured now, and let the thing run from the last scattering. If the end result of the simulation isn't what we see, then there may be a problem with our current model. If the end of the simulation matches what we currently see, then our current model may be on the right path. This isn't to say that if it doesn't match there's something wrong or if it does, it definitely confirms it. Just that it's a piece of evidence either for (it matches) or against (it doesn't match)

That is exactly what I have said since the beginning: the model knew about all these things (large-scale structures, evolution of galaxies, quasars and their distribution), so it is not surprising that it produced them.

That is what I tried to explain to Shaula, who argued in post 13, via his demonstrative example, that "the model did not know about galaxies but it produced them."

Or Reality Check in post 26, about large-scale structures, who argued:
"The evolution of the large scale structures in billions of years is an output of the model. It is not part of the input. It is a prediction of the model."
and in post 27
"He is confirming what anyone who knows about the simulation knows: That the model did not know about galaxies but it produced them."

ETA
Now about the observation and "prediction" that the model provided.

Observation, on page 15, following a demonstration:
...are the baryon wiggles also present in the galaxy distribution? ...Fig. 6 shows that the answer to that important question is yes.

Prediction, on page 18, near the bottom of the page:
If future surveys improve on this (see text)... then precision measurements of galaxy clustering will shed light on the most puzzling component of the Universe: the elusive dark energy field.

Tensor
2012-May-13, 04:46 AM
That is exactly what I have said since the beginning: the model knew about all these things (large-scale structures, evolution of galaxies, quasars and their distribution), so it is not surprising that it produced them.

No, it doesn't know about galaxies. Those parameters are not large scale structure, galaxies or quasars or their distribution. They are simply numbers, that are based on observations. Those numbers do nothing else but give the model the initial conditions. The model then proceeds from there.


That is what I tried to explain to Shaula, who argued in post 13, via his demonstrative example, that "the model did not know about galaxies but it produced them."

That is exactly what the model does. It starts with no galaxies. How does it know about galaxies, when it starts without them?


Or Reality Check in post 26, about large-scale structures, who argued:
"The evolution of the large scale structures in billions of years is an output of the model. It is not part of the input. It is a prediction of the model."
and in post 27
"He is confirming what anyone who knows about the simulation knows: That the model did not know about galaxies but it produced them."

He's right. They give their paper's initial conditions. A MOND model would have different parameters, BUT THE MOND MODEL WOULD STILL HAVE TO HAVE THOSE PARAMETERS TO INITIALIZE THE MODEL. Would you claim the MOND model "knows" about large scale structure, galaxies, quasars and their distribution because of MOND's required initial parameters? Even a plasma universe model would require initial parameters to run. Every model has initial conditions that have to be input into the model to allow it to run. If you are arguing that inputting initial conditions (that have been measured) somehow causes the model to output the answer you want, it appears that you don't know what initial conditions or initial parameters are, or their importance in modeling.


Modeling works in the following way:
Develop the model.
Write the code that represents the model.
Enter the initial conditions
Run the model
Observe the data during the run and the end of the run.

Where in there does entering numbers before running the model allow the model (which was written without knowing the actual numeric initial conditions) to "know" about Galaxies, Quasars, large scale structure, and distribution?

The following are the parameters that were used in the model.

Ωm = Ωdm + Ωb = 0.25,
Ωb = 0.045,
h = 0.73,
ΩΛ = 0.75,
n = 1,
σ8 = 0.9.

ρcrit = 3H0²/(8πG)
H0 = 100 h km s⁻¹ Mpc⁻¹
σ8 is the rms linear mass fluctuation within a sphere of radius 8 h⁻¹ Mpc extrapolated to z = 0
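Just to show that these really are "simply numbers", here is a quick back-of-the-envelope Python sketch (mine, not from the paper) turning them into physical densities via ρcrit = 3H0²/(8πG):

import math

# Derive physical densities from the listed Millennium parameters.
h = 0.73
omega_m, omega_b = 0.25, 0.045
G = 6.674e-11                       # m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22                # metres per megaparsec

H0 = 100.0 * h * 1000.0 / MPC_IN_M               # Hubble constant in s^-1
rho_crit = 3.0 * H0 ** 2 / (8.0 * math.pi * G)   # critical density, ~1e-26 kg/m^3

print(f"critical density  = {rho_crit:.3e} kg/m^3")
print(f"mean matter       = {omega_m * rho_crit:.3e} kg/m^3")
print(f"mean baryons      = {omega_b * rho_crit:.3e} kg/m^3")
print(f"mean dark matter  = {(omega_m - omega_b) * rho_crit:.3e} kg/m^3")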

What changes should be made in those values, so the model doesn't "know" about Large scale structures, evolution of galaxies, quasars, and their distribution, and why do those changes take away the knowledge from the model?

Don J
2012-May-13, 05:03 AM
The following are the parameters that were used in the model.

Ωm = Ωdm + Ωb = 0.25,
Ωb = 0.045,
h = 0.73,
ΩΛ = 0.75,
n = 1,
σ8 = 0.9.

ρcrit = 3H0²/(8πG)
H0 = 100 h km s⁻¹ Mpc⁻¹
σ8 is the rms linear mass fluctuation within a sphere of radius 8 h⁻¹ Mpc extrapolated to z = 0

Where in there does entering numbers before running the model allow the model (which was written without knowing the actual numeric initial conditions) to "know" about Galaxies, Quasars, large scale structure, and distribution?

Those parameters are not large scale structure, galaxies or quasars or their distribution. They are simply numbers, that are based on observations. Those numbers do nothing else but give the model the initial conditions. The model then proceeds from there.



But the code algorithm (the program) is the key: it is creating the large-scale structures... via a pre-established pattern.



The code that Springel et al. used is a combination of the Particle Mesh and TREE codes.
http://en.wikipedia.org/wiki/Barnes%E2%80%93Hut_simulation
and this paper describes the merging of PM and TREE codes.
http://arxiv.org/pdf/astro-ph/9409021v1.pdf

Tensor
2012-May-13, 05:40 AM
But the code algorithm (the program) is the key: it is creating the large-scale structures... via a pre-established pattern.

Yeah, so. Gravity and EM act a certain way. You can't change that. That is what the program does: it takes the initial conditions and then takes into account the effects of gravity and EM as the model runs. Or are you accusing them of specifically making up code to get large scale structures, galaxies, and quasars?

Don J
2012-May-13, 05:59 AM
Yeah, so. Gravity and EM act a certain way. You can't change that. That is what the program does: it takes the initial conditions and then takes into account the effects of gravity and EM as the model runs.
Right... however, are you sure this model takes EM effects into account?


Or are you accusing them of specifically making up code to get large scale structures, galaxies, and quasars?
No, what I said is that the simulation program used... (which obviously must take into account the initial conditions)
http://en.wikipedia.org/wiki/Barnes%E2%80%93Hut_simulation

...creates what it is supposed to create (nodes at the junction of two or more intersecting filaments)...
http://en.wikipedia.org/wiki/Node_%28graph_theory%29

In graph theory, a vertex (plural vertices) or node is the fundamental unit out of which graphs are formed: an undirected graph consists of a set of vertices and a set of edges (unordered pairs of vertices), while a directed graph consists of a set of vertices and a set of arcs (ordered pairs of vertices). From the point of view of graph theory, vertices are treated as featureless and indivisible objects, although they may have additional structure depending on the application from which the graph arises; for instance, a semantic network is a graph in which the vertices represent concepts or classes of objects.

The two vertices forming an edge are said to be the endpoints of this, and the edge is said to be incident to the vertices. A vertex w is said to be adjacent to another vertex v if the graph contains an edge (v,w). The neighborhood of a vertex v is an induced subgraph of the graph, formed by all vertices adjacent to v.

Taking into account the Universe's expansion factor.
Multipole expansion
http://en.wikipedia.org/wiki/Multipole_expansion

Shaula
2012-May-13, 06:23 AM
....via a pre-established pattern.
What exactly do you mean by that? Please explain in suitable detail where the template you seem to be alluding to is, what it consists of and how it invalidates the results.

Don J
2012-May-13, 06:38 AM
....via a pre-established pattern.
What exactly do you mean by that? Please explain in suitable detail where the template you seem to be alluding to is, what it consists of

See post 40.


and how it invalidates the results.

I don't claim it invalidates the results.

Shaula
2012-May-13, 07:25 AM
See post 40.
That really does not answer what I asked, even with the edits. You seem to be making an assumption that somehow the way the algorithm divides the simulation space up creates large scale structure? Is your argument that any simulation, no matter the starting conditions, that uses this scheme will generate results similar to the filamentary structures we observe? Please clarify your stance on this as I am not sure I agree with you.

Don J
2012-May-13, 07:48 AM
That really does not answer what I asked, even with the edits. You seem to be making an assumption that somehow the way the algorithm divides the simulation space up creates large scale structure?

No, not about the way the algorithm divides the simulation space up. Remember, the code that Springel et al. used is a combination of the Particle Mesh and TREE codes.



Is your argument that any simulation, no matter the starting conditions, that uses this scheme will generate results similar to the filamentary structures we observe?

Not exactly... The TREE code will still produce filamentary structures, but obviously not similar to what we observe today. But I wonder what would happen if you used 50% less dark matter and 50% more baryonic matter without changing the other values?


The following are the parameters that were used in the model.

Ωm = Ωdm + Ωb = 0.25,
Ωb = 0.045,
h = 0.73,
ΩΛ = 0.75,
n = 1,
σ8 = 0.9.

ρcrit = 3H0²/(8πG)
H0 = 100 h km s⁻¹ Mpc⁻¹
σ8 is the rms linear mass fluctuation within a sphere of radius 8 h⁻¹ Mpc extrapolated to z = 0

Shaula
2012-May-13, 07:54 AM
So your point is that because astrophysical simulations produce filamentary structures by simulating gravitational effects it is cheating and non-predictive to use astrophysical models in astrophysics?

What exactly is your chain of logic here? Please take the time to spell it out a bit more completely than you have been doing. Your posts feel like a series of retorts - it would be useful to have it laid out clearly in one place.

Don J
2012-May-13, 08:04 AM
So your point is that because astrophysical simulations produce filamentary structures by simulating gravitational effects it is cheating and non-predictive to use astrophysical models in astrophysics?

I have never said that.
In post 36 I even pointed to an observation and a prediction this LCDM model makes. This model answers an important question and makes a big prediction...

http://arxiv.org/abs/astro-ph/0504097

Observation made by this LCDM model, on page 15, following a demonstration:
Question: ...are the baryon wiggles also present in the galaxy distribution? ...Fig. 6 shows that the answer to that important question is yes.

Prediction made by this LCDM model, on page 18, near the bottom of the page:
If future surveys improve on this (see text)... then precision measurements of galaxy clustering will shed light on the most puzzling component of the Universe: the elusive dark energy field.



But as I pointed out in the OP there is another mainstream model, based on magnetic fields and gravity, which also makes predictions about large-scale structures.

http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1997A%26A...326...13B&db_key=AST

Shaula
2012-May-13, 09:37 AM
But the code algorithm (the program) is the key: it is creating the large-scale structures... via a pre-established pattern.
This is the statement I am trying to get to the bottom of. You seem to be implying that there is something going on, in the case of the paper you clearly do not favour, that makes its predictions less valuable than at first glance. Please can you elaborate?

parejkoj
2012-May-13, 02:38 PM
Not exactly... The TREE code will still produce filamentary structures, but obviously not similar to what we observe today. But I wonder what would happen if you used 50% less dark matter and 50% more baryonic matter without changing the other values?

If you do that, you'll get the matter correlation function wrong (among other things). Read some of the references in Springel et al. for examples of this.

The filamentary structure comes almost exclusively from the gravitational collapse of Gaussian-random perturbations of the initial density field. The specific code they use to perform the simulation doesn't really matter: it's just one choice to speed up calculations.
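If anyone wants to see what such an initial field looks like, here is a tiny Python sketch (illustrative only, with a made-up power-law spectrum rather than the actual ΛCDM power spectrum used for the Millennium initial conditions) that generates a Gaussian random overdensity field of the kind these simulations start from; gravitational collapse of fields like this is what grows into the filaments:

import numpy as np

# Toy 2-D Gaussian random density field with an assumed power-law
# spectrum P(k) ~ k^-2.5 (NOT the real LCDM power spectrum).
n = 256
k1d = np.fft.fftfreq(n)
kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
k = np.sqrt(kx ** 2 + ky ** 2)
k[0, 0] = 1.0                        # avoid division by zero at the DC mode

power = k ** -2.5
power[0, 0] = 0.0                    # no perturbation of the mean density

rng = np.random.default_rng(1)
noise_k = np.fft.fft2(rng.normal(size=(n, n)))
delta = np.fft.ifft2(noise_k * np.sqrt(power)).real   # overdensity field

print(delta.shape, delta.std())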

parejkoj
2012-May-13, 02:42 PM
But as I pointed out in the OP there is another mainstream model, based on magnetic fields and gravity, which also makes predictions about large-scale structures.

http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1997A%26A...326...13B&db_key=AST

For the Nth time, this is not a mainstream model, and its predictions cannot be easily compared with observations, nor does that paper attempt to do so. If you want to suggest that that model is correct, you have to either find a paper by that group that does compare their results with modern observations, or make that comparison yourself.

Don J
2012-May-13, 06:11 PM
For the Nth time, this is not a mainstream model, and its predictions cannot be easily compared with observations, nor does that paper attempt to do so. If you want to suggest that that model is correct, you have to either find a paper by that group that does compare their results with modern observations, or make that comparison yourself.

I see, only models based on LCDM, with gravity as the only driving force, are considered mainstream. Why does the mainstream choose to ignore the EM force in its models?



For the Nth time, this is not a mainstream model,

But the model works nonetheless... right?
http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1997A%26A...326...13B&db_key=AST

Don J
2012-May-13, 06:20 PM
But I wonder what would happen if you used 50% less dark matter and 50% more baryonic matter without changing the other values?
If you do that, you'll get the matter correlation function wrong (among other things). Read some of the references in Springel et al. for examples of this.

That is what I had in mind. So the model works only under the hypothetical assumption of the cold dark matter ratio and dark energy ratio...



The filamentary structure comes almost exclusively from the gravitational collapse of Gaussian-random perturbations of the initial density field. The specific code they use to perform the simulation doesn't really matter: it's just one choice to speed up calculations.
I understand that...

parejkoj
2012-May-13, 07:24 PM
But the model works nonetheless... right?
http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1997A%26A...326...13B&db_key=AST

Battaner's model? I have no idea. As I said, there are no comparisons with observations in that paper, so there's no way to know.

Don J
2012-May-13, 07:32 PM
Battaner's model? I have no idea. As I said, there are no comparisons with observations in that paper, so there's no way to know.
There is a comparison with observations here:

http://adsabs.harvard.edu/abs/1998A%26A...338..383B

parejkoj
2012-May-13, 07:54 PM
There is a comparison with observations here:

http://adsabs.harvard.edu/abs/1998A%26A...338..383B

Wow... Seriously? Some sketches of "octahedral structure" and they call it good? Besides the fact that in the past decade we've gotten several orders of magnitude more data that maps the large scale structure in a large volume out to z~0.7, that paper is a joke! There's absolutely no statistical analysis, no comparison with the galaxy correlation function or power spectrum (as was found in Springel et al.), and no mention of how they convert redshifts to distances, which is kind of important for this sort of thing.

Anyone who points at a contour map, draws some straight lines on it, and says "look at these obvious structures" is trying to sell you a bill of goods.

I particularly like the line on the first page where they say "... the recognition of the octahedral network was noticeably easy, rendering a full statistical analysis unnecessary." 'nuff said.

Tensor
2012-May-13, 08:32 PM
Right... however, are you sure this model takes EM effects into account?

Yeah, they talk about reionization and radio mode cooling. Of course, you may want more....


No, what I said is that the simulation program used ..

snip....

formed by all vertices adjacent to v.

What is the problem here? You're complaining about how computers do calculations, how operations are done, or how the computer makes decisions on which operations to do. It still appears that you think that the programmers of the model are somehow specifically programming the model to produce what they want it to produce. You haven't provided any specific example. Everything you've complained about is nothing more than standard procedure for running computer models. They use the same methods in modeling nuclear weapons, supernova explosions, hydrodynamical models, etc.

Can you show exactly how this would lead the code to "know" about galaxies, quasars, large scale structure, and connections? If you can't, it would seem to be nothing more than a case of you don't like it, but really can't show anything wrong with it.



Taking into account the Universe's expansion factor.
Multipole expansion
http://en.wikipedia.org/wiki/Multipole_expansion

You do know that the multipole expansion is a mathematical operation (under most circumstances it is the Laplace expansion (http://en.wikipedia.org/wiki/Laplace_expansion_(potential))), and that the Universe expansion factor is nothing more than the Hubble parameter?

Tensor
2012-May-13, 08:51 PM
I see, only models based on LCDM, with gravity as the only driving force, are considered mainstream. Why does the mainstream choose to ignore the EM force in its models?

But the model works nonetheless... right?
http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1997A%26A...326...13B&db_key=AST

Define "works". There is the following nugget in that paper (part 1, page 2, top of the second column):

"We have not included either protons or electrons in the system of equations."

Really? How accurate do you think this will be? At least Springel et al.'s model uses matter. Not to mention, if there were as much of a magnetic field as Battaner claims, there would be effects seen in the CMB. We don't see those effects, so there couldn't have been that much of a magnetic field. In Battaner's defense, his paper was before the precision WMAP data. Could be that EM is ignored in most models because we don't see its effects in any large way.

Don J
2012-May-13, 09:00 PM
What is the problem here? You're complaining about how computers do calculations, how operations are done, or how the computer makes decisions on which operations to do. It still appears that you think that the programmers of the model are somehow specifically programming the model to produce what they want it to produce. You haven't provided any specific example. Everything you've complained about is nothing more than standard procedure for running computer models. They use the same methods in modeling nuclear weapons, supernova explosions, hydrodynamical models, etc.

Can you use that program to create spiral galaxies, for example, in the sense of generating them ab initio from the gravitational collapse of gas?



Can you show exactly how this would lead the code to "know" about galaxies, quasars, large scale structure, and connections? If you can't, it would seem to be nothing more than a case of you don't like it, but really can't show anything wrong with it.

Well, I think the term "know" about galaxies was rather inappropriate... as parejkoj pointed out:

"it rather assigns galaxies to dark matter halos following some prescription."

So what are those prescriptions?



You do know that the multipole expansion is a mathematical operation (under most circumstances it is the Laplace expansion (http://en.wikipedia.org/wiki/Laplace_expansion_(potential))), and that the Universe expansion factor is nothing more than the Hubble parameter?
No problem with that! Does the Hubble parameter determine the value of the dark energy?
h = 0.73,
ΩΛ = 0.75,

Shaula
2012-May-14, 05:07 AM
Can you use that program to create spiral galaxies, for example, in the sense of generating them ab initio from the gravitational collapse of gas?
The main model doesn't even try to produce galaxies - it focuses on very large scale structure, the distribution of matter over vast distances. The parameters from this are then fed into a different, as you have said, semi-analytical model to be populated with galaxies. The changes in the parameters and this model are used to evolve the objects embedded in the halos. The model does not attempt to deal with galactic formation - it is about the interactions between the large scale structure and the objects in it.

Don J
2012-May-14, 05:40 AM
Can you use that program to create spiral galaxies, for example, in the sense of generating them ab initio from the gravitational collapse of gas?

The main model doesn't even try to produce galaxies - it focuses on very large scale structure, the distribution of matter over vast distances. The parameters from this are then fed into a different, as you have said, semi-analytical model to be populated with galaxies. The changes in the parameters and this model are used to evolve the objects embedded in the halos. The model does not attempt to deal with galactic formation - it is about the interactions between the large scale structure and the objects in it.
My reply was in response to Tensor, who said... (see post 57 for context.)


They use the same methods in modeling nuclear weapons, supernova explosions, hydrodynamical models, etc.


I wanted to know if all the computing power was used for that specific task only, using the specific code algorithm of the Millennium Simulation.

Can you use that program to create spiral galaxies, for example, in the sense of generating them ab initio from the gravitational collapse of gas?

Shaula
2012-May-14, 03:08 PM
What do you mean by 'that program'? Do you mean the code? No, it is a large scale structure simulation. If you mean the algorithm - you could, but you would need to include other effects to get anything realistic. Galaxies are complex beasts. Galactic formation models have to take into account the evolution of the objects in them; they are inherently more complex than dark matter. The bulk of the computing power was used to generate larger than ever data sets, high resolution data over huge volumes. It was not used to generate spiral galaxies from scratch.

Don J
2012-May-31, 07:06 PM
Not to mention, if there were as much of a magnetic field as Battaner claims, there would be effects seen in the CMB. We don't see those effects, so there couldn't have been that much of a magnetic field. In Battaner's defense, his paper was before the precision WMAP data. Could be that EM is ignored in most models because we don't see its effects in any large way.
Newly discovered:
Universal, primordial magnetic fields discovered in deep space
http://phys.org/news204298215.html

Scientists from the California Institute of Technology and UCLA have discovered evidence of "universal ubiquitous magnetic fields" that have permeated deep space between galaxies since the time of the Big Bang.

About filaments
Herschel reveals galaxy-packed filament
http://phys.org/news/2012-05-herschel-reveals-galaxy-packed-filament.html

The Herschel Space Observatory has discovered a giant, galaxy-packed filament ablaze with billions of new stars. The filament connects two clusters of galaxies that, along with a third cluster, will smash together in several billion years and give rise to one of the largest galaxy superclusters in the universe.

Cougar
2012-May-31, 07:52 PM
Newly discovered:
Universal, primordial magnetic fields discovered in deep space....

September 21, 2010 is "newly discovered"?

Don J
2012-May-31, 11:01 PM
September 21, 2010 is "newly discovered"?
All is relative (not even a year and a half old). Newly presented here, at least.

Tensor
2012-Jun-01, 05:38 AM
Newly discovered:
Universal, primordial magnetic fields discovered in deep space
http://phys.org/news204298215.html

Scientists from the California Institute of Technology and UCLA have discovered evidence of "universal ubiquitous magnetic fields" that have permeated deep space between galaxies since the time of the Big Bang.

Heheheheh, yeah, femto-Gauss strength fields, which are orders of magnitude less than the fields Battaner claims to need. And besides, Don, the conclusions of the actual paper that article is based on depend on objects from catalogs whose distances are based on the ΛCDM universe. Are you now saying you accept the ΛCDM universe? Could you please explain your stand here? Are you agreeing with the ΛCDM universe or still not agreeing with it? If you are agreeing, then this discussion is rather moot, don't you think? If you're not agreeing, then obviously the paper you present here has no bearing on seeing effects of magnetic fields on the CMB. Which is it?


About filaments
Herschel reveals galaxy-packed filament
http://phys.org/news/2012-05-herschel-reveals-galaxy-packed-filament.html

The Herschel Space Observatory has discovered a giant, galaxy-packed filament ablaze with billions of new stars. The filament connects two clusters of galaxies that, along with a third cluster, will smash together in several billion years and give rise to one of the largest galaxy superclusters in the universe.

And, so? What is your point? You do realize the actual paper talks about the filaments being dust, detectable because the stars and galaxies forming in the filaments heat the dust so that it glows in the IR, which is why the filaments are visible. You have only mentioned filaments in association with EU/PU universe models, and in this paper the filaments have nothing to do with EU/PU ideas. Or are you claiming these filaments have something to do with the EU/PU universe?

How about including a bit more context with your posts, instead of just links to press releases (not even the papers themselves)? I'm not a mind reader as far as what those press releases are supposed to mean.

Tensor
2012-Jun-01, 05:46 AM
Can you use that program to create spiral galaxies, for example, in the sense of generating them ab initio from the gravitational collapse of gas?

Nope. The program doesn't do that.* See below.


Well, I think the term "know" about galaxies was rather inappropriate... as parejkoj pointed out:

Inappropriate? How about completely wrong?


"it rather assigns galaxies to dark matter halos following some prescription."

Assigns? It still looks like you think the whole galaxy thing is specifically coded into the program. It's not. Just the physics is coded. If you think it assigns galaxies to dark matter halos, by all means, show us, IN THE CODE or IN THE ASSUMPTIONS, where that happens.

What don't you get about letting the simulation follow the physics and seeing what comes out of the run (or comes out of several runs)? Why do you think that there is some sort of code in the program going "Oh, here's some dark matter, we have to put a galaxy here"?


So what are those prescriptions?

Yeah, the interaction of energy with matter. Sorta like, there's two particles over here. Those two attracted this other one. These three attracted this other one. Those four attracted these other three.
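
To make that concrete, here is a toy gravity-only update of the sort an N-body code performs, written in Python. The masses, units, softening length and time step are all invented for illustration; the actual Millennium code uses a far more efficient TreePM algorithm, not direct summation.

import numpy as np

G = 1.0        # gravitational constant in arbitrary toy units
SOFT = 0.01    # softening length, avoids singular forces at tiny separations

def accelerations(pos, mass):
    """Direct-summation gravity: every particle attracts every other one."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        dr = pos - pos[i]                              # separation vectors to all particles
        r2 = (dr ** 2).sum(axis=1) + SOFT ** 2
        r2[i] = np.inf                                 # exclude self-force
        acc[i] = (G * mass[:, None] * dr / r2[:, None] ** 1.5).sum(axis=0)
    return acc

def leapfrog_step(pos, vel, mass, dt):
    """Kick-drift-kick update; repeated many times, small overdensities
    pull in their neighbours and grow into clumps on their own."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    return pos, vel

Nothing in that loop says where a clump has to form; the clustering falls out of the force law and the initial perturbations.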

If you think differently, then by all means show us another model that matches the physics rules and also matches observations quantitatively. All you have done, and continue to do, is show us models with no quantitative or numerical comparison with actual observations.

But you somehow think that two-dimensional pictures or images from models that look vaguely like pictures or images from observations (but don't come close in four-dimensional spacetime) count as a match.


problem with that! Does the Hubble parameter determine the value of the Dark Energy?

You mean you don't know? And you're arguing against Big Bang cosmology? How exactly do you argue against Big Bang Cosmology, if you don't know the mechanics of it?


h = 0.73,
ΩΛ = 0.75,

So you think the Hubble parameter determines the value of Dark Energy because the actual numeric values are very similar? Did you even do a dimensional analysis on the two values to compare them? My guess would be no. And anyhow, what exactly is your point of providing the values of H and ΩΛ here?

*As far as the following in Post #59



My reply was in response to Tensor, who said... (see post 57 for context): They use the same methods in modeling nuclear weapons, supernova explosions, hydrodynamical models, etc.

Please read what I said. I said they use the same METHODS in modeling the other things. All of those things have specific physical rules that have to be followed (EM coupling, gravitational coupling, strong force coupling, weak force coupling, pressure, etc.) that specifically determine how the code has to be written. You can't just make it up to satisfy what you want the result to be.


I wanted to know if all the computing power was used for that specific task only, and whether it used the specific code and algorithm of the Millennium Simulation.

Each task has its own rules, own algorithm, own code, and own run time. But the method of putting the code together is pretty much the same. If you want to simulate a supernova explosion, you run the computer with the supernova code. If you want to simulate a nuclear weapons explosion, you run the nuclear weapons code, etc. But the computer time is so extensive for any of them that the specific code for the specific problem is the only code running.


Can you use that program to create spiral galaxies, for example, in the sense of generating them ab initio from the gravitational collapse of gas?

As Shaula pointed out, no, you can't. There is too much going on to simulate the formation of a galaxy. The Millennium Simulation is so complicated as it is, you can't even fudge the code to produce what you want. You write the rules into the code, as specified by the physics, enter the initial values, hit run, and watch it run. Then you take the output and compare it to the actual observations.

parejkoj
2012-Jun-01, 04:06 PM
Just to make a correction to what Tensor posted (as he was quoting posts where Don J had quoted me):



Assigns? It still looks like you think the whole galaxy thing is specifically coded into the program. It's not. Just the physics is coded. If you think it assigns galaxies to dark matter halos, by all means, show us, IN THE CODE or IN THE ASSUMPTIONS, where that happens.

Actually, in the case of the Millennium Simulation, that's exactly what happens. The primary gravitational simulation is entirely dark matter + dark energy, with the baryon physics applied afterward via a semi-analytic model. Such a model includes a prescription for how the baryons will evolve in a particular dark matter halo, while not directly computing all of the baryon physics. Here's a paper describing some of the details (http://arxiv.org/abs/astro-ph/0608019) of this particular calculation, including a description of how to access the database of results. For the Millennium Run, the dark matter particle masses (~10^9 Msun) are comparable with those of an individual dwarf galaxy, so they are not really appropriate for modeling the internals of a single galaxy.

Though the Millennium Run didn't do the full gas+dust+stars+gravity simulation of galaxy formation, there have been many smaller-scale simulations that do. Guedes et al. (2011) (http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1103.6030) is a nice recent example, directly modeling the formation and evolution of a Milky Way analog from z=90 to the present. The Millennium Run was created to test predictions on a cosmological scale, while producing some information about how galaxies and quasars evolve. Simulations that directly model the baryon "gastrophysics" are run within a single dark matter halo, with much smaller particle masses (~10^5 Msun), to test the physics of galaxy formation and evolution directly, while having the cosmology applied "externally".
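
As a cartoon of what such a prescription looks like (the mass threshold, efficiency and spin criterion below are placeholders I made up, not the actual De Lucia & Blaizot or Bower et al. recipes), the idea is simply a rule that maps halo properties to a galaxy population:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Halo:
    mass: float    # halo mass in solar masses (from the N-body output)
    spin: float    # dimensionless spin parameter

def assign_galaxy(halo: Halo) -> Optional[dict]:
    """Toy semi-analytic rule: decide what galaxy, if any, this halo hosts."""
    if halo.mass < 1e10:              # placeholder: too small to cool gas efficiently
        return None                   # halo stays dark, no galaxy assigned
    stellar_mass = 0.02 * halo.mass   # placeholder baryon-conversion efficiency
    morphology = "disk" if halo.spin > 0.04 else "spheroid"
    return {"stellar_mass": stellar_mass, "morphology": morphology}

The real models track gas cooling, star formation, feedback and mergers along each halo's merger tree, but the structure is the same: halo in, galaxy population out, with no galaxy positions put in by hand.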

I hope this clears things up. For the record, there have been plenty of comparison tests between semi-analytic models and full baryon models, and there is general agreement between their predictions. Semi-analytic models tend to break down at smaller scales and masses, but that's not what they were designed to do.

Don J
2012-Jun-02, 03:37 AM
Nope. The program doesn't do that.* See below.




Inappropriate? How about completely wrong?



Assigns? It still looks like you think the whole galaxy thing is specifically coded into the program. It's not. Just the physics is coded. If you think it assigns galaxies to dark matter halos, by all means, show us, IN THE CODE or IN THE ASSUMPTIONS, where that happens.

What don't you get about letting the simulation follow the physics and seeing what comes out of the run (or comes out of several runs)? Why do you think that there is some sort of code in the program going "Oh, here's some dark matter (halo), we have to put a galaxy here"?

See parejkoj's reply in post 66.




Yeah, the interaction of energy with matter. Sorta like, there's two particles over here. Those two attracted this other one. These three attracted this other one. Those four attracted these other three.

How can the program tell the difference between cold dark matter and baryonic matter in the simulation? Remember that both are mixed together as particle points.


If you think differently, then by all means show us another model that matches the physics rules and also matches observations quantitatively. All you have done, and continue to do, is show us models with no quantitative or numerical comparison with actual observations.
For now there are still some aspects that remain in the "dark", for example the above question.


But you somehow think that two-dimensional pictures or images from models that look vaguely like pictures or images from observations (but don't come close in four-dimensional spacetime) count as a match.
It would be interesting to see a real simulation based on Battaner's model using the advanced tools that were at the disposal of the Millennium Simulation.




You mean you don't know? And you're arguing against Big Bang cosmology? How exactly do you argue against Big Bang Cosmology, if you don't know the mechanics of it?
But the simulation is not about the Big Bang; it is about ΛCDM and Dark Energy, two essential "elements" that were later added to explain observations which were not predicted by the original model.




So you think the Hubble parameter determines the value of Dark Energy because the actual numeric values are very similar? Did you even do a dimensional analysis on the two values to compare them? My guess would be no. And anyhow, what exactly is your point of providing the values of H and ΩΛ here?
Not really. I was sure that Dark Energy depends on other parameters; I was hoping that you would say which ones exactly, and also whether that value has changed over the years.


Nope. The program doesn't do that.* See below.

*As far as the following in Post #59



Please read what I said. I said they use the same METHODS in modeling the other things. All of those things have specific physical rules that have to be followed (EM coupling, gravitational coupling, strong force coupling, weak force coupling, pressure, etc.) that specifically determine how the code has to be written. You can't just make it up to satisfy what you want the result to be.



Each task has its own rules, own algorithm, own code, and own run time. But the method of putting the code together is pretty much the same. If you want to simulate a supernova explosion, you run the computer with the supernova code. If you want to simulate a nuclear weapons explosion, you run the nuclear weapons code, etc. But the computer time is so extensive for any of them that the specific code for the specific problem is the only code running.



As Shaula pointed out, no, you can't. There is too much going on to simulate the formation of a galaxy. The Millennium Simulation is so complicated as it is, you can't even fudge the code to produce what you want. You write the rules into the code, as specified by the physics, enter the initial values, hit run, and watch it run. Then you take the output and compare it to the actual observations.
That is what I said in a previous post, "The program does what it is supposed to do"... i.e. it was programmed to create filaments and large-scale structures.

Shaula
2012-Jun-02, 06:38 AM
How can the program tell the difference between cold dark matter and baryonic matter in the simulation? Remember that both are mixed together as particle points.
One interacts electromagnetically, the other does not. Simple. If the matter were baryonic, they would have to take into account things like radiative cooling, just as they have to with galaxy formation.
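
Schematically the difference looks something like this (the update functions and the cooling term are illustrative placeholders, not code from any real simulation): dark matter particles respond to gravity alone, while a baryonic gas element would also need pressure forces and radiative losses.

def update_dark_matter(velocity, grav_accel, dt):
    # Collisionless dark matter: gravity is the only interaction modelled.
    return velocity + grav_accel * dt

def update_baryonic_gas(velocity, thermal_energy, grav_accel, pressure_accel,
                        cooling_rate, dt):
    # Baryons feel gravity AND electromagnetic physics: pressure gradients,
    # radiative cooling, etc. (all lumped into placeholder terms here).
    velocity = velocity + (grav_accel + pressure_accel) * dt
    thermal_energy = max(0.0, thermal_energy - cooling_rate * dt)
    return velocity, thermal_energy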


That is what I said in a previous post, "The program does what it is supposed to do"... i.e. it was programmed to create filaments and large-scale structures.
It is what you keep asserting. So far you have provided no proof at all. The simulation models a collisionless fluid, which only interacts gravitationally, evolving from a state with weak density perturbations to a cosmic web. It is not constrained to produce filaments by anything other than basic physics. If you are complaining that the model fits observations and this somehow makes it less valid, then I suggest you rethink that stance.

Don J
2012-Jun-02, 06:37 PM
One interacts electromagnetically, the other does not. Simple. If the matter were baryonic, they would have to take into account things like radiative cooling, just as they have to with galaxy formation.

Show me where they have taken the electromagnetic force into consideration?





That is what I said in a previous post, "The program does what it is supposed to do"... i.e. it was programmed to create filaments and large-scale structures.

It is what you keep asserting. So far you have provided no proof at all. The simulation models a collisionless fluid, which only interacts gravitationally, evolving from a state with weak density perturbations to a cosmic web. It is not constrained to produce filaments by anything other than basic physics. If you are complaining that the model fits observations and this somehow makes it less valid, then I suggest you rethink that stance.
Yep, I am rethinking that stance... replace the word "create" with "simulate".

Shaula
2012-Jun-02, 08:34 PM
Show me where they have taken the electromagnetic force into consideration?
The large scale simulation does not have to, that is the point. It simulates non-baryonic dark matter because that is the dominant form of matter. Then it uses the distribution of that tracer with the semi-analytical model to replicate what we should see. If they modelled it as baryonic matter they would have to take into account EM interactions.

Tensor
2012-Jun-03, 02:01 AM
See parejkoj's reply in post 66.

I don't think what parejkoj means by "assigns" and what you mean by "assigns" are the same. parejkoj is not saying the simulations have the locations of the galaxies and filaments programmed into the code. From your comments, it appears that you think that the simulations are programmed to assign galaxies to specific locations, so the simulations match observations.

parejkoj (and the paper he linked to) talks about galaxy populations, whose evolution is based on dark matter halos/sub-halos initially, and then allowed to evolve using models based on the papers of De Lucia & Blaizot (2006) and Bower et al. (2006). These galaxy populations are then stored. The simulation is then run, and when the simulation gets to the chosen output time, it looks at the different halo/sub-halo locations, looks up the stored galaxy populations those halos/sub-halos produce, and assigns the galaxy population based on the halo/sub-halo at that location. Filaments are not stored, and their locations are based on gravitational interaction. The location of galaxies is based only on the interaction of gravity (which includes all energy). That simulation is then compared to observations.


How can the program tell the difference between cold dark matter and baryonic matter in the simulation? Remember that both are mixed together as particle points.

It doesn't have to, they both interact through gravity. What you don't seem to realize is that EM also contributes to gravity. The stress-energy tensor includes all energy, and energy, not mass, is the cause of gravity.
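
In the standard textbook notation of general relativity (nothing specific to the Millennium papers), the point is that the source of gravity is the full stress-energy tensor, which already contains the electromagnetic field's energy and momentum:

G_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\qquad
T_{\mu\nu} = T^{\mathrm{matter}}_{\mu\nu} + T^{\mathrm{EM}}_{\mu\nu},
\qquad
T^{\mathrm{EM}}_{\mu\nu} = \frac{1}{\mu_0}\left( F_{\mu\alpha} F_{\nu}{}^{\alpha} - \tfrac{1}{4}\, g_{\mu\nu}\, F_{\alpha\beta} F^{\alpha\beta} \right) \quad \text{(SI units)}.

So electromagnetic field energy already contributes to the gravitational source term.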


For now there are still some aspects that remain in the "dark", for example the above question.

Well, you haven't shown any examples, except your incredulity.

Which is still no answer to my point that you have yet to provide any kind of numerical or quantitative examples.


It would be interesting to see a real simulation based on Battaner's model using the advanced tools that were at the disposal of the Millennium Simulation.

If you had read the paper parejkoj linked to, you would have found that the data is freely available to the public. That paper was written in 2006, so the only reason that Battaner's model hasn't been run by his group is that he knows his model is useless. And yes, it would be really interesting to see a real simulation based on Battaner's work. After all, Battaner himself specifically states in his paper that his model does not use protons or electrons. I can see why he wouldn't want to run it.


But the simulation is not about the Big Bang; it is about ΛCDM and Dark Energy, two essential "elements" that were later added to explain observations which were not predicted by the original model.

You're being pedantic. The ΛCDM universe with dark energy is the Big Bang universe. As for them not being part of the original, so what? The model was modified as more information was gathered. What, do you think that models and theories shouldn't be modified when new information is found? Or do you think that we shouldn't actually go looking for new information? The tauon and the bottom and top quarks were not part of the standard model when it was first proposed either. None of that changes my point that you have been railing against the standard cosmological model and don't seem to know the mechanics of it. Do you regularly argue against your doctor because you don't like their diagnoses?



Not really. I was sure that Dark Energy depends on other parameters; I was hoping that you would say which ones exactly, and also whether that value has changed over the years.

Well, if you can explain proper distance and co-moving coordinates to me, I'll be glad to explain the terms. I only ask that because I don't want you to misunderstand the parameters, as they can depend on the two terms I asked you to explain. If you don't understand the two terms I asked about, it will be difficult to have a full understanding of the other two. But, again, you argue against mainstream theory, but don't know the basics of it? Isn't that a bit silly? And again, why did you provide the values of those parameters specifically?


That is what I said in a previous post, "The program does what it is supposed to do"... i.e. it was programmed to create filaments and large-scale structures.

This is a flat-out ATM interpretation of the program. The simulation is programmed to follow the rules of physics. If, in the course of running the physics, all the particles spread out into a bunch of particles flying apart, so be it. If the physics causes everything to collapse into a dense point, so be it. If you only get 20 groups of mass, so be it. It's not programmed to do anything but follow the rules of physics (and even when getting the stored galaxies, the galaxies evolve using the rules of physics). I'd like to know why you think following the rules of physics is considered being programmed to create filaments and large-scale structure.

Don J
2012-Jun-03, 03:42 AM
I don't think what parejkoj means by "assigns" and what you mean by "assigns" are the same. parejkoj is not saying the simulations have the locations of the galaxies and filaments programmed into the code. From your comments, it appears that you think that the simulations are programmed to assign galaxies to specific locations, so the simulations match observations.

No, what I said was that they needed to stop running the program at a certain point and add the semi-analytic model "to replicate what we should see" (as Shaula explains, with better terms than mine, in post 70). That is different from saying that the program was doing all the work in a single run (or several runs) from scratch without interruption, which is what you seem to allude to:


What don't you get about letting the simulation follow the physics and seeing what comes out of the run (or comes out of several runs)?



parejkoj (and the paper he linked to) talks about galaxy populations, whose evolution is based on dark matter halos/sub-halos initially, and then allowed to evolve using models based on the papers of De Lucia & Blaizot (2006) and Bower et al. (2006). These galaxy populations are then stored. The simulation is then run, and when the simulation gets to the chosen output time, it looks at the different halo/sub-halo locations, looks up the stored galaxy populations those halos/sub-halos produce, and assigns the galaxy population based on the halo/sub-halo at that location. Filaments are not stored, and their locations are based on gravitational interaction. The location of galaxies is based only on the interaction of gravity (which includes all energy). That simulation is then compared to observations.



It doesn't have to, they both interact through gravity. What you don't seem to realize is that EM also contributes to gravity. The stress-energy tensor includes all energy, and energy, not mass, is the cause of gravity.

OK, thanks for the stress-energy tensor lesson; I didn't know that it included all the energy... that clears up some aspects.


You're being pedantic. The ΛCDM universe with dark energy is the Big Bang universe. As for them not being part of the original, so what? The model was modified as more information was gathered. What, do you think that models and theories shouldn't be modified when new information is found? Or do you think that we shouldn't actually go looking for new information? The tauon and the bottom and top quarks were not part of the standard model when it was first proposed either. None of that changes my point that you have been railing against the standard cosmological model and don't seem to know the mechanics of it. Do you regularly argue against your doctor because you don't like their diagnoses?

Well I think the best reply to this was made by Jerry in post 34.


Models always assume round cows. No one has ever made a model based upon first principles that creates the universe as we know it. As more knowledge of what the universe is has rolled into our path, the more complex the models must be to handle the details.

The danger in this approach is that by adding more assumptions and parameters, more confidence ends up going into the model than is warranted. A good example is the University of Colorado 'hurricane predictions' posted in late March or early April. Over the last five years, these models demonstrated zero predictive power and had to be scrapped. There was nothing wrong with the physics in the models - the problem was the inability of the model to adapt to rapid climate change. Cause-and-effect assumptions were just plain wrong.

We should enjoy the fact that we have toy models that roll out a universe similar to what we see. But these models are toys, not hard physical solutions, and the underlying danger is that more confidence is placed in 'established scientific principles' than should be. In any and all cases, we are infants sharing ideas about something we are trying to wrap our collective arms around. We should welcome as many models as possible, and scrutinize the assumptions that seem to have the best predictive power, both into the past and into the future. There is much to learn.

Meanwhile, back at this thread: it seems to me that a model based upon both gravitation and electrodynamics has a much better chance of surviving into the future than a dark matter model. NONE of the predicted attributes of dark matter have shown their signature. Dark matter is a vacuous, unsubstantiated physical assumption, and the sooner we find a way to discard it, the better.




This is a flat-out ATM interpretation of the program. The simulation is programmed to follow the rules of physics. If, in the course of running the physics, all the particles spread out into a bunch of particles flying apart, so be it. If the physics causes everything to collapse into a dense point, so be it. If you only get 20 groups of mass, so be it. It's not programmed to do anything but follow the rules of physics (and even when getting the stored galaxies, the galaxies evolve using the rules of physics). I'd like to know why you think following the rules of physics is considered being programmed to create filaments and large-scale structure.
The point I tried to make was not that it follows the laws of physics to create the filaments and large-scale structures. My point is that, contrary to your allusion that the program was doing all the work in a single run (or several runs) from scratch without interruption... in reality the program was interrupted at some point to introduce the semi-analytic model "to replicate what we should see", as Shaula explained in post 70.

Tensor
2012-Jun-03, 05:39 AM
No, what I said was that they needed to stop running the program at a certain point and add the semi-analytic model "to replicate what we should see"

You never said any such thing.

The closest thing I can find is the following from post #28:


See page 7 of the paper about the specific semi-analytic model used by the model to account for galaxy formation, and the adjustments by trial and error they were forced to make in their CDM model to match observations.

So much misunderstanding in this one sentence. The trial and error they talk about has nothing to do with the CDM portion of the simulation. What they are doing is running the simplified galaxy models, with inputs on halos/sub-halos along with angular momentum, and seeing if the result matches what we see among nearby galaxies. If it doesn't, the input parameters are adjusted, the model is run again, and it is compared to observations. When they get the galaxy model to match observations, they can add it to their database. And when the simulation gets to the designated output time, it can look at the halos/sub-halos, and if one matches those in the database, that galaxy is put into the simulation where the halo/sub-halo is matched. If there isn't a match for the halo/sub-halo, there isn't a galaxy put into the simulation. Why don't you go back and read the paper and then use specific quotes from the paper for us, Don?
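
As a toy illustration of that kind of calibration loop (the single "efficiency" parameter, the fake halo catalogue and the target statistic below are all invented; real calibrations fit luminosity functions, colours and more, with many parameters):

import numpy as np

rng = np.random.default_rng(42)
halo_masses = rng.lognormal(mean=27.0, sigma=1.0, size=1000)   # fake halo catalogue

def mock_galaxies(efficiency):
    """Toy galaxy model: stellar mass is a fixed fraction of halo mass."""
    return efficiency * halo_masses

def calibrate(observed_mean_stellar_mass, efficiency=0.1, tol=1e-6, max_iter=100):
    """Adjust the model parameter until the mock statistic matches the observation,
    then freeze it; that frozen recipe is what gets stored and reused."""
    for _ in range(max_iter):
        model_mean = mock_galaxies(efficiency).mean()
        if abs(model_mean - observed_mean_stellar_mass) < tol * observed_mean_stellar_mass:
            break
        efficiency *= observed_mean_stellar_mass / model_mean   # simple rescaling update
    return efficiency

target = 0.02 * halo_masses.mean()   # pretend "observed" mean stellar mass
print(calibrate(target))             # converges to ~0.02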


(as Shaula explains, with better terms than mine, in post 70). That is different from saying that the program was doing all the work in a single run (or several runs) from scratch without interruption, which is what you seem to allude to:

Where is that coming from? That's a tangent way out in left field. Whether or not the program runs in one fell swoop or stops to add other models has nothing to do with whether the simulation uses physics to determine how large-scale structure and filaments evolve, as we've been telling you. Or, on the other hand, whether the programming code somehow specifically places large-scale structure and filaments as we see them, as you've been telling us.


Well I think the best reply to this was made by Jerry in post 34.

Yeah, Jerry, like you, likes to complain about current mainstream models without, it appears, knowing what the actual mechanics of the mainstream models are. You still haven't answered: how can you complain about mainstream cosmology without, it appears, knowing the mechanics of mainstream cosmology? It appears to be because of a personal dislike of the theory, not because of any actual science that you can show to be wrong.


The point i tried to make was not about that it follow the laws of physics to create the filaments and large scale structures.

Do you really want to go with this? It took me all of 5 seconds to find two examples where you claimed the code for the simulation was specifically written to produce filaments and large scale structure.


My point is that, contrary to your allusion that the program was doing all the work in a single run (or several runs) from scratch without interruption... in reality the program was interrupted at some point to introduce the semi-analytic model "to replicate what we should see", as Shaula explained in post 70.

LOL. This is nothing more than sleight of hand. You haven't said anything before about stopping the program to introduce other models being a problem. And even if it does stop, it has nothing to do with your insistence that the programming code is somehow placing structure into the output without letting the physics dictate the output. The actual inclusion of the semi-analytic model depends on the locations of halos/sub-halos, which depend on the gravitational interaction, which runs without interruption. The semi-analytic models take inputs from the simulation, which then determine which of the stored galactic populations is put back into the simulation.

Well, you haven't shown any examples, except your incredulity.
Which is still no answer to my point that you have yet to provide any kind of numerical or quantitative examples that match observations.
You gonna provide any of these? Or just admit there isn't any?

If you had read the paper parejkoj linked to, you would have found that the data is freely available to the public. That paper was written in 2006, so the only reason that Battaner's model hasn't been run by his group is that he knows his model is useless. And yes, it would be really interesting to see a real simulation based on Battaner's work. After all, Battaner himself specifically states in his paper that his model does not use protons or electrons.
You still think it would be interesting to see a run of Battaner's model?

Speaking of Battaner's model, how are those primordial femto-Gauss magnetic fields working with his model?

Well, if you can explain proper distance and co-moving coordinates to me, I'll be glad to explain the terms. I only ask that because I don't want you to misunderstand the parameters, as they can depend on the two terms I asked you to explain. If you don't understand the two terms I asked about, it will be difficult to have a full understanding of the other two. And again, why did you provide the values of those parameters specifically?
You gonna come back to these?

Shaula
2012-Jun-03, 06:22 AM
in reality the program was interrupted at some point to introduce -the semi-analytical model to replicate what we should see
Not really. The output of the model is a map of the dark matter density at a time T. That time can be any time; in fact, they took snapshots of the results at several times during the run. It didn't have to stop, and the results were perfectly valid. They were descriptions of how the dark matter was distributed; it would be what we could see if we could see dark matter. Trouble is, we generally cannot. So then they took those outputs and used the semi-analytic model to put galaxies in them according to a set of rules that basically say "in dark matter haloes of this type you will normally get galaxies of this type". These rules were derived from other models and observations.

When I said replicate 'what we should see', I meant that in the most basic sense: what our eyes should see. The emphasis in that phrase is on the word SEE, not SHOULD. It is not a fix, a fiddle or a fudge. What they are doing is using a pre-defined set of rules to populate the dark matter regions with baryonic matter. This is possible because the dark matter dominates and baryonic matter only really has local effects. On the scales they dealt with, it is hardly noticeable.

So in essence there are several steps (sketched schematically after this list):
1. Run a galaxy evolution model and compare to observations until you derive a set of rules relating galactic type to the environment it formed in, a way to predict "if I know what the neighbourhood is like, then I can tell you the types of galaxies I expect to find there". Output from this: the semi-analytic model, a statistical description of galactic populations based on environment. No fixes, no fiddles; this is based on physics.
2. Run the Millennium model to evolve a chunk of dark matter with small-scale inhomogeneities in it. Output from this: a map of dark matter on huge scales. Based entirely on physics, and predicting filaments and voids without them being added or adjusted.
3. Take the dark matter map, which describes the environment for each area of space, and populate it with galaxies using the results from the first piece of work. This produces the familiar view of the universe, strings of galaxies and so on. This is combining two physics-based models: no fixes, no fiddles, no cheating.
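
A skeletal sketch of how those three steps plug together (every number and rule here is a made-up placeholder; nothing is taken from the actual codes):

def semi_analytic_recipe(halo_mass):
    # Step 1 output: a rule saying which halos host which galaxy populations.
    return None if halo_mass < 1e10 else {"stellar_mass": 0.02 * halo_mass}

def dark_matter_snapshot():
    # Step 2 output: stands in for the gravity-only N-body run, which would
    # return the halo catalogue it produced at some chosen snapshot time.
    return [5e9, 3e10, 8e11, 2e12]          # toy halo masses in solar masses

def populate_with_galaxies(halo_catalogue):
    # Step 3: galaxies appear only where the gravity-only run made a suitable
    # halo; nothing places them by hand.
    return [(mass, semi_analytic_recipe(mass)) for mass in halo_catalogue]

print(populate_with_galaxies(dark_matter_snapshot()))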

You rather leapt on a poor choice of words by me there; I should have been less ambiguous in what I said. I certainly do not support your apparent view that they in some way cheated to get the answers because all mainstream ΛCDM theories are wrong.

The approach taken is a common one in many fields, actually. There is nothing unusual or uncommon about it. It is simply a reflection of computing power.

Don J
2012-Jun-03, 06:23 AM
You never said any such thing.

The closest thing I can find is the following from post #28:

So much misunderstanding in this one sentence.

One way or another, that sentence means that the simulation was stopped at some point.


LOL. This is nothing more than sleight of hand. You haven't said anything before about stopping the program to introduce other models being a problem. And even if it does stop, it has nothing to do with your insistence that the programming code is somehow placing structure into the output without letting the physics dictate the output. The actual inclusion of the semi-analytic model depends on the locations of halos/sub-halos, which depend on the gravitational interaction, which runs without interruption. The semi-analytic models take inputs from the simulation, which then determine which of the stored galactic populations is put back into the simulation.

Well, you haven't shown any examples, except your incredulity.
Which is still no answer to my point that you have yet to provide any kind of numerical or quantitative examples that match observations.
You gonna provide any of these? Or just admit there isn't any?

Where did I say that the simulation does not show something that matches observations? You even confirmed the reason for how they finally achieved that.


....What they are doing is running the simplified galaxy models, with inputs on halos/sub-halos along with angular momentum, and seeing if the result matches what we see among nearby galaxies. If it doesn't, the input parameters are adjusted, the model is run again, and it is compared to observations. When they get the galaxy model to match observations, they can add it to their database. And when the simulation gets to the designated output time, it can look at the halos/sub-halos, and if one matches those in the database, that galaxy is put into the simulation where the halo/sub-halo is matched. If there isn't a match for the halo/sub-halo, there isn't a galaxy put into the simulation.



You gonna come back to these?

Yep... but later... each thing in its time.