
Thread: Solving the Fermi paradox without assumptions

  1. #1
    Join Date
    Oct 2018
    Posts
    24

    Solving the Fermi paradox without assumptions

    This one actually doesn't go against any established scientific theories, but rather challenges our common-sense assumptions and reaches a conclusion too bizarre to be taken seriously by the scientific mainstream.
    In this article, I am trying to show (ideally, prove) that any technology which allows a civilization to leave a noticeable signature on a stellar scale inherently causes the destruction of said civilization. If true, this means we can solve the Fermi paradox without invoking the anthropic principle or making any assumptions about alien technologies or lifeforms beyond pure definitions.
    All counterarguments expressed so far are collected in the "Discussion" section. I am looking forward to adding more.

  2. #2
    Join Date
    Sep 2003
    Location
    The beautiful north coast (Ohio)
    Posts
    49,254
    FunBotan

    First, welcome to CQ.

    Second, I hope you have reviewed our rules and suggestions (see top of the ATM sub-forum), particularly regarding the Against The Mainstream (ATM) section.

    Third, you need to understand that ideas presented in ATM must be discussed in the ATM section. You need to give a fuller explanation of your idea here (you may reference your arxiv article) and all discussion must occur here. If you are just using CQ to promote your arxiv article and draw traffic there, you will be infracted, this thread will be closed, and your link will be removed.

    Have fun.
    At night the stars put on a show for free (Carole King)

    All moderation in purple - The rules

  3. #3
    Join Date
    Oct 2018
    Posts
    24
    Alright, I understand why that would be required. I'm just not sure how to fulfill this requirement without copypasting the entire article.
    Let's try to compress it down to the basic structure and see if that format is understandable.

    Premise: A system is considered intelligent if its actions are aimed at maximizing its future freedom of action (this definition had been proposed by Alexander Wissner-Gross).
    Proposition 1: Maximizing future freedom of action is equivalent to hoarding the greatest amount of resources.
    <This is because any action requires either energy, fuel or material, which are equivalent in the limit (as technology becomes more advanced).>
    Proposition 2: Given long enough time, all available resources will be controlled by a single individual.
    <This can be proven strictly in the context of space, since moving there requires fuel and more fuel can be harvested by whoever can move the furthest, creating a positive feedback loop.>
    Proposition 3: That individual will be incentivized to secure storage of their resources by centralizing it in one spherical structure.
    <This follows from the law of scaling: the useful volume of a given structure grows faster than its surface area, so scaling that structure up and shaping it into a sphere allows one to spend the least amount of resources on defense.>
    Proposition 4: This structure will, in the limit, collapse into a black hole.
    Conclusion: Infinite growth is, therefore, impossible, which solves the Dyson dilemma.

    The article still has the arguments in more detailed forms, as well as additional propositions that clarify them.
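
    For what it's worth, Propositions 3 and 4 reduce to a scaling fact: for any fixed average density, mass grows as the cube of the radius while the Schwarzschild radius grows linearly with mass, so at some finite size the structure fits inside its own horizon. A minimal sketch (Python; the constants are standard, the densities are my own illustrative choices):

    Code:
        import math

        G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
        c = 2.998e8     # speed of light, m/s
        AU = 1.496e11   # astronomical unit, m

        def critical_radius(density):
            """Radius at which a uniform sphere of the given density (kg/m^3)
            equals its own Schwarzschild radius: r = 2GM/c^2 with M = (4/3)*pi*rho*r^3."""
            return c * math.sqrt(3.0 / (8.0 * math.pi * G * density))

        for rho in (1000.0, 5500.0):   # roughly water and Earth's mean density
            r = critical_radius(rho)
            print(f"rho = {rho:6.0f} kg/m^3 -> r_crit = {r:.2e} m ({r/AU:.1f} AU)")

    At the density of water the crossover is only a few AU, which is the sense in which "in the limit" is meant here.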

  4. #4
    Join Date
    Jun 2006
    Posts
    4,793

    Premises = Assumptions

    Verbiage is useless here. Do you have any observations to support your assumptions?
    I'm not a hardnosed mainstreamer; I just like the observations, theories, predictions, and results to match.

    "Mainstream isnít a faith system. It is a verified body of work that must be taken into account if you wish to add to that body of work, or if you want to change the conclusions of that body of work." - korjik

  5. #5
    Join Date
    Oct 2018
    Posts
    24
    Quote Originally Posted by John Mendenhall View Post
    Verbiage is useless here. Do you have any observations to support your assumptions?
    When studying biology, one has to start with the definition of an organism. When studying chemistry, one has to start with the definition of a substance. When studying the Fermi paradox, one has to define what exactly they are looking for, be it life, intelligence or civilization. It is philosophically impossible to make a verifiable statement about the real world without defining every term used in that statement, without that "Premise 1" in the beginning.
    Could you consider these definitions to be assumptions? Maybe, but it hardly matters. In that case, please read the title as "Solving the Fermi paradox without additional assumptions". In either case, I am not obligated to provide evidence for a definition. You can read the original paper "Causal Entropic Forces" by Alexander Wissner-Gross to see why such a definition was picked. In my article, I've also reviewed alternative definitions and shown that taking any one of them eventually leads to the same result.
    You may discover some implicit assumptions in my reasoning, and for that, I would be thankful. But the fact of the matter is that my solution does not explicitly use any assumptions that would have to be supported by evidence on my part.

  6. #6
    Join Date
    Oct 2009
    Location
    a long way away
    Posts
    10,775
    Quote Originally Posted by FunBotan View Post
    Premise: A system is considered intelligent if its actions are aimed at maximizing its future freedom of action (this definition had been proposed by Alexander Wissner-Gross).
    I haven't come across that definition before, but I find it hard to believe it is a complete definition of intelligence. I like to think I am reasonably intelligent, but that has never crossed my mind as a goal in life.

    Proposition 1: Maximizing future freedom of action is equivalent to hoarding the greatest amount of resources.
    I think that assumption is a bit unrealistic. There are many things other than just material resources that can contribute to freedom of action. For example, cooperation with others, gaining knowledge and (for a culture) expanding to a larger area of land (or volume of space).

    Proposition 2: Given long enough time, all available resources will be controlled by a single individual.
    This assumption is even more implausible. I imagine that any intelligent species would evolve to that point by being cooperative.

    Proposition 3: That individual will be incentivized to secure storage of their resources by centralizing it in one spherical structure.
    This assumes that the resources are just being stored, rather than being used.

    Proposition 4: This structure will, in the limit, collapse into a black hole.
    And this assumes that your "intelligent" agents are not intelligent enough to recognise this and avoid it.

    I am utterly unconvinced by every one of your assumptions.

  7. #7
    Join Date
    Jun 2015
    Location
    Houston
    Posts
    1,216
    Quote Originally Posted by FunBotan View Post
    Alright, I understand why that would be required. I'm just not sure how to fulfill this requirement without copypasting the entire article.
    Let's try to compress it down to the basic structure and see if that format is understandable.

    Premise: A system is considered intelligent if its actions are aimed at maximizing its future freedom of action (this definition had been proposed by Alexander Wissner-Gross).
    Proposition 1: Maximizing future freedom of action is equivalent to hoarding the greatest amount of resources.
    <This is because any action requires either energy, fuel or material, which are equivalent in the limit (as technology becomes more advanced).>
    Proposition 2: Given long enough time, all available resources will be controlled by a single individual.
    <This can be proven strictly in the context of space, since moving there requires fuel and more fuel can be harvested by whoever can move the furthest, creating a positive feedback loop.>
    Proposition 3: That individual will be incentivized to secure storage of their resources by centralizing it in one spherical structure.
    <This follows from the law of scaling: the useful volume of a given structure grows faster than its surface area, so scaling that structure up and shaping it into a sphere allows one to spend the least amount of resources on defense.>
    Proposition 4: This structure will, in the limit, collapse into a black hole.
    Conclusion: Infinite growth is, therefore, impossible, which solves the Dyson dilemma.

    The article still has the arguments in more detailed forms, as well as additional propositions that clarify them.
    Proposition 2 contains an assumption in itself, although it isn't explicitly stated. What if new sources of fuel are not found by whoever moves the furthest (at worst they might simply be unlucky; at best they might find too little fuel to return)?

  8. #8
    Join Date
    Jun 2005
    Posts
    13,859
    I have problems with the very first assumption. The clearest, most obvious intelligent system we know of is humanity, and it does not seem that we behave in a way that maximizes our future actions. Given that, the assumption seems wrong.


    As above, so below

  9. #9
    Join Date
    Aug 2008
    Location
    Wellington, New Zealand
    Posts
    4,332
    Quote Originally Posted by FunBotan View Post
    Premise: A system is considered intelligent if its actions are aimed at maximizing its future freedom of action (this definition had been proposed by Alexander Wissner-Gross).
    Proposition 1....
    This is more a proposed definition followed by a list of assumptions, FunBotan.
    The list gets close to assuming non-intelligence. We know about black holes. A more advanced civilization will know more about black holes. They would not be so unintelligent as to create a structure so dense that it will collapse into a black hole. That "one spherical structure" will more likely be a Dyson sphere or swarm.

    Alexander Wissner-Gross is a computer scientist. He wrote a "Causal Entropic Forces" paper in 2013 with mathematician Cameron Freer. What the paper essentially shows is that systems of disks and strings subject to non-standard entropy produce an adaptive configuration. They associate the disks with animals and the strings with tools to set up tool-use and social-cooperation puzzles. There is no intelligence involved.
    A Grand Unified Theory of Everything criticizes the paper.

    There is no explicit "A system is considered intelligent if its actions are aimed at maximizing its future freedom of action" in the paper. The paper's claim is that adaptive behavior emerges in systems that maximize diversity of future paths using their non-standard definition of entropy:
    Namely, adaptive behavior might emerge more generally in open thermodynamic systems as a result of physical agents acting with some or all of the systems’ degrees of freedom so as to maximize the overall diversity of accessible future paths of their worlds (causal entropic forcing).
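    For reference, the quantity the paper actually maximizes is a path entropy, not "intelligence". As far as I can read it, the causal entropic force it defines is (in LaTeX notation, my transcription):

    F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau)\big|_{X_0}, \qquad
    S_c(X, \tau) = -k_B \int_{x(t)} \Pr\!\big(x(t) \mid x(0)\big)\, \ln \Pr\!\big(x(t) \mid x(0)\big)\, \mathcal{D}x(t),

    where T_c is a free "causal path temperature" parameter and the path integral runs over all trajectories of duration \tau starting from X_0.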
    Last edited by Reality Check; 2019-Apr-09 at 12:01 AM.

  10. #10
    Join Date
    Jun 2006
    Posts
    4,793

    Premise and Propositions = Assumptions

    Quote Originally Posted by John Mendenhall View Post
    Verbiage is useless here. Do you have any observations to support your assumptions?
    You have no observations or data to support your ideas. They are speculation. ATM is a place to present and defend, here, your ideas with data, predictions, and results. Until you can do this, you have only words.

    Good luck. Your Proposition 4 assumption is particularly weak, based on your assumptions in Props 1 to 3. If they have gotten through the first three, they will know better than to allow 4 to happen.
    Last edited by John Mendenhall; 2019-Apr-09 at 07:59 PM. Reason: typos
    I'm not a hardnosed mainstreamer; I just like the observations, theories, predictions, and results to match.

    "Mainstream isnít a faith system. It is a verified body of work that must be taken into account if you wish to add to that body of work, or if you want to change the conclusions of that body of work." - korjik

  11. #11
    Join Date
    Jun 2005
    Posts
    13,859
    Quote Originally Posted by FunBotan View Post
    Proposition 1: Maximizing future freedom of action is equivalent to hoarding the greatest amount of resources.
    <This is because any action requires either energy, fuel or material, which are equivalent in the limit (as technology becomes more advanced).>
    And to be honest, this seems wrong to me as well. We are intelligent, and it seems that like many animals, we take what we need and don't hoard beyond that, because it requires extra energy.
    As above, so below

  12. #12
    Join Date
    Mar 2010
    Location
    United Kingdom
    Posts
    7,163
    Your premise is incompatible with your third proposition. Turning yourself into a black hole doesn't maximise your potential future action. Your definition of intelligence requires that the entity is able to predict the future, but proposition three requires that they are oblivious to the future - so your argument collapses.

    Aside from that, I think your definition of intelligence is far too simplistic and I think all of your premises are gross oversimplifications. I don't think that this framework holds true for any system other than an exceedingly poorly optimised cellular automaton.

  13. #13
    Join Date
    Oct 2018
    Posts
    24
    Quote Originally Posted by bknight View Post
    Proposition 2 contains an assumption in itself although it isn't implicitly stated. What if new sources of fuel are not found by whoever moves the furthest(might be unlucky at a maximum, at a minimum insufficient to return.
    Luck isn't an issue when we're talking in the limit. Your counterexample therefore implies that all matter is located in one place, which is either not the case or would lead to the same collapse into a black hole.

    This assumes that the resources are just being stored, rather than being used.
    No. Only that it is collected faster than used.

    I have problems with the very first assumption. The clearest most obvious intelligent system we know of is humanity, and it does not seem that we behave in a way that maximizes our future actions. Given that, the assumption seems wrong..
    No. This is the single most important observation in favor of my idea.
    I've never said that civilization as a whole behaves intelligently. I'm actually trying to prove the exact opposite.

    The list gets close to assuming non-intelligence. We know about black holes. A more advanced civilization will know more about black holes. They would not be so unintelligent as to create a structure so dense that it will collapse into a black hole.
    Your premise is incompatible with your third proposition. Turning yourself into a black hole doesn't maximise your potential future action. Your definition of intelligence requires that the entity is able to predict the future, but premise three requires that they are oblivious to the future - so your argument collapses.
    At this point, I have no choice but to refer you to a point in my article where this question is discussed. This would be section V, subsection F.

    That "one spherical structure" will more likely be a Dyson sphere or swarm.
    On that, I agree.

    A Grand Unified Theory of Everything criticizes the paper.
    If you have a better definition, please provide it. I would truly be thankful.

    You have no observations or data to support your ideas. They are speculation. ATM is a place to present and defend, here, your ideas with data, predictions, and results. Until you can do this, you have only words.
    Please refer to section IV, where the testability of my proposal is discussed.

    We are intelligent, and it seems that like many animals, we take what we need and don't hoard beyond that, because it requires extra energy.
    This is an inherently political statement, so I cannot argue against it here.

    Aside from that I think your definition of intelligence is far too simplistic and I think all of your premises are gross oversimplification.
    They absolutely are, because I couldn't fit my entire article, where everything is explained in detail, in a forum post.

  14. #14
    Join Date
    Jul 2006
    Location
    Peters Creek, Alaska
    Posts
    12,884
    Quote Originally Posted by FunBotan View Post
    At this point, I have no choice but to refer you to a point in my article where this question is discussed. This would be section V, subsection F.

    As Swift warned you, discussion must be conducted here. You may reference off-site material to support discussion here but arguments taking the form of "Go read X" are not allowed.


    Man is a tool-using animal. Nowhere do you find him without tools; without tools he is nothing, with tools he is all. - Thomas Carlyle (1795-1881)

  15. #15
    Join Date
    Jun 2015
    Location
    Houston
    Posts
    1,216
    Quote Originally Posted by FunBotan View Post
    Luck isn't an issue when we're talking in the limit. Your counterexample, therefore, implies all matter being located in one place, which is either not the case or would lead to the same collapse into a black hole.

    And no, my reply does not indicate that all matter is located in one place. If a treasure is located north of you but you only look south, you will never find the treasure, but it still exists. Now I have made another assumption.


    <snip>
    My point was that your proposition contains an implied assumption. Perhaps I misworded my reply, but nonetheless it has an assumption, and that therefore defeats your attempt.

  16. #16
    Join Date
    Oct 2018
    Posts
    24
    Quote Originally Posted by PetersCreek View Post
    As Swift warned you, discussion must be conducted here. You may reference off-site material to support discussion here but arguments taking the form of "Go read X" are not allowed.
    Well,
    Quote Originally Posted by Swift View Post
    you may reference your arxiv article
    Other than that, would copypasting entire sections of the article be more legit?
    It's not like I can shrink the explanation further; the article is pretty minimalistic the way it already is, and the answers provided there have been refined to be more understandable. When people ask questions that are already answered in the article, the only two options I have are either to reference it or to paste it.

    Quote Originally Posted by bknight View Post
    And no my reply does not indicate that all matter is located in one place. If a treasure is located north of you but you only look south, you will never find the treasure, but it still exists. Now I have made another assumption.
    But there's no north or south in space, and you pretty much always see where the treasure is located from lightyears away. So the only variable remaining is getting there.

  17. #17
    Join Date
    Oct 2009
    Location
    a long way away
    Posts
    10,775
    Quote Originally Posted by FunBotan View Post
    This is an inherently political statement, so I cannot argue against it here.
    I don't think this is political. Systems, especially intelligent ones, tend to find optimal solutions (which is why I think your "an individual would grab all resources and destroy themselves" is extremely implausible).

  18. #18
    Join Date
    Oct 2009
    Location
    a long way away
    Posts
    10,775
    Quote Originally Posted by FunBotan View Post
    At this point, I have no choice but to refer you to a point in my article where this question is discussed. This would be section V, subsection F.
    Looking at D first:
    D. But if the collapse is so easy to predict, surely most individuals should be able to escape it?
    Problem is, escaping the collapse requires escaping the gravity of the collapsing structure
    No. Avoiding this does not mean escaping the gravity of the collapsing structure. It means being intelligent enough to find an optimal storage structure that does not collapse. Not terribly difficult. (Well, maybe finding the optimal solution is NP-complete, but finding a near optimal one is simple.)

    F. If agents are meant to maximize their future freedom of action (2.2), why wouldn’t they factor in the probability of a collapse?
    Even if they do, this should not affect their behavior. If they choose to suspend growth for safety, another agent will exploit that, as explained in 3.5.
    If it leads to the competitor destroying themselves, then that fails the same "intelligence" test that you are trying to defend.

    Honestly, this all looks hopelessly simplistic and built on a towering house-of-cards of fragile assumptions. (The lack of cooperation is a fatal, and totally unrealistic, assumption by itself, in my opinion.)

  19. #19
    Join Date
    Oct 2009
    Location
    a long way away
    Posts
    10,775
    Also, your "what predictions can it make concerning the future of humanity?" paragraph appears to be a complete non sequitur. Surely the only prediction your model can make is that intelligent humans will destroy themselves in home-made black holes. Otherwise you are saying your model is wrong? Or humans aren't intelligent?

  20. #20
    Join Date
    Jun 2015
    Location
    Houston
    Posts
    1,216
    Quote Originally Posted by FunBotan View Post
    <snip>

    But there's no north or south in space, and you pretty much always see where the treasure is located from lightyears away. So the only variable remaining is getting there.
    Of course there isn't a north or south, but there is a direction, and you can't see a treasure if you're not looking at it or sensing it somehow. So no, the treasure isn't always "visible". And how do you perceive ("see", "observe", "sense") something light-years away? And if you cannot sense it, how do you travel toward it? So no, the only variable isn't just getting there. It also assumes there is sufficient fuel to reach it, which is another assumption. Your solution has too many assumptions, and fails.

  21. #21
    Join Date
    Aug 2008
    Location
    Wellington, New Zealand
    Posts
    4,332
    Quote Originally Posted by FunBotan View Post
    ...On that, I agree.
    This seems to be a reply to my post, FunBotan.
    So we agree that an intelligent civilization will not be ignorant enough to create a black hole. Rather they will likely create mega-structures such as Dyson spheres or swarms or something else. That makes your solution to the Fermi paradox wrong.

    A Grand Unified Theory of Everything criticizes the paper; it is not about a definition of intelligence, which in any case does not seem to come from Alexander Wissner-Gross.

    The main flaw is easy to understand - the paper is not about this universe!
    "For example, a particle inside a rectangular box will move to the center rather than to the side, because once it is at the center it has the option of moving in any direction." does not happen in this universe.
    "The first problem is that Wissner-Gross’s physics is make-believe." (in this universe).
    Statistical mechanics is about collections of objects and their statistics. A single object has no defined entropy in statistical mechanics and no "causal entropic forces". A particle in a box bouncing around will not magically become centered in the box. What statistical mechanics states is that a collection of particles in a box, e.g. a gas in a container, will pass through every possible configuration over time. Over timescales enormously larger than the age of the universe, the gas can gather at one location, such as the upper left corner of the box or the center of the box.
    There is speculative (not explicitly referenced) physics on causal entropic processes at cosmological scales.

    There is: "Of course, even if the “causal entropic” laws weren’t really drawn from the laws of physics, they could still provide a useful framework for artificial intelligence or for modeling human behavior. But there is very little evidence that they do.".
    But that is not what you are doing. You are not proposing a framework for AI or modeling human or alien behavior.

    There is: "But here, Wissner-Gross relies too much on handpicked cases that happen to work well with the maxim of maximizing your future options".
    These handpicked cases ignore much of what intelligence does. Intelligence does more than maximize future options. The article gives the example of apes preferring grapes over cucumbers, thus ignoring future options by eating the grapes before the cucumbers. Humans make similar choices that do not maximize their future options.

    Lastly, there is a tiny bit of "crankiness" around the paper. "A start-up called Entropica aims to capitalize on the discovery" (it seems to have vanished since 2013), and the authors of the 2013 paper seem not to have published anything else on the subject in the last six years.
    Last edited by Reality Check; 2019-Apr-09 at 09:40 PM.

  22. #22
    Join Date
    Oct 2018
    Posts
    24
    Quote Originally Posted by Strange View Post
    I don't think this is political. Systems, especially intelligent ones, tend to find optimal solutions (which is why I think your "an individual would grab all resources and destroy themselves" is extremely implausible).
    Have you heard of the Prisoners' dilemma? It's the easiest example of a system consisting of two perfectly intelligent individuals behaving extremely unintelligently as a whole. The same logic easily scales up to the level of civilization.
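
    To make that concrete, here is a minimal sketch (Python; the payoff numbers are the usual textbook ones, chosen by me only for illustration). Each player's best response is to defect no matter what the other does, even though mutual cooperation is better for the pair:

    Code:
        # One-shot prisoner's dilemma; payoffs are to the row player, higher is better.
        PAYOFF = {                       # (my move, their move) -> my payoff
            ("C", "C"): 3, ("C", "D"): 0,
            ("D", "C"): 5, ("D", "D"): 1,
        }

        def best_response(their_move):
            """My individually rational move, given the other player's move."""
            return max("CD", key=lambda my_move: PAYOFF[(my_move, their_move)])

        for theirs in "CD":
            print(f"If they play {theirs}, my best response is {best_response(theirs)}")
        # Defection dominates for both players, yet (D, D) yields a joint payoff
        # of 2 while (C, C) would yield 6: individually rational, jointly poor.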

    Quote Originally Posted by Strange View Post
    No. Avoiding this does not mean escaping the gravity of the collapsing structure. It means being intelligent enough to find an optimal storage structure that does not collapse.
    This subsection relates to what a randomly sampled individual can do without having control over the majority of resources. Thanks for reading deeper than you had to, though.

    Quote Originally Posted by Strange View Post
    The lack of cooperation is a fatal, and totally unrealistic, assumption by itself, in my opinion.
    I truly wish this was true.

    Quote Originally Posted by bknight View Post
    Of course there isn't a north or south, but there is a direction, and you can't see a treasure if you're not looking at it or sensing it somehow. So no, the treasure isn't always "visible". And how do you perceive ("see", "observe", "sense") something light-years away? And if you cannot sense it, how do you travel toward it? So no, the only variable isn't just getting there. It also assumes there is sufficient fuel to reach it, which is another assumption. Your solution has too many assumptions, and fails.
    I think you should take a look at Dyson's dilemma.

    Quote Originally Posted by Reality Check View Post
    So we agree that an intelligent civilization will not be ignorant enough to create a black hole. Rather they will likely create mega-structures such as Dyson spheres or swarms or something else. That makes your solution to the Fermi paradox wrong.
    Wait, why do you think a Dyson sphere cannot collapse into a black hole?

    Quote Originally Posted by Reality Check View Post
    A Grand Unified Theory of Everything criticizes the paper; it is not about a definition of intelligence, which in any case does not seem to come from Alexander Wissner-Gross.
    AWG literally calls it "The equation of intelligence".
    I'm not saying it's perfect. If you look at section II of my article, you'll see that I use multiple independent definitions. Again, if you have anything to add, please do. But we have to use some definition.

  23. #23
    Join Date
    Oct 2009
    Location
    a long way away
    Posts
    10,775
    Quote Originally Posted by FunBotan View Post
    Have you heard of the Prisoners' dilemma? It's the easiest example of a system consisting of two perfectly intelligent individuals behaving extremely unintelligently as a whole. The same logic easily scales up to the level of civilization.
    I thought you were talking about individuals, not civilisations. But that doesn't really matter. The Prisoner's Dilemma is an artificial situation designed to make cooperation impossible. If it weren't for that, the optimal solution would be for the prisoners to cooperate.

    I truly wish this was true.
    It is. You only have to look at the real world.

    Wait, why do you think a Dyson sphere cannot collapse into a black hole?
    Because it is larger than its Schwarzschild radius (and larger than the Chandrasekhar limit).

    You might object that it will not be strong enough to support itself as it gets bigger. But then we come back to the supposedly intelligent actors knowing that and building a more optimal solution.
    Last edited by Strange; 2019-Apr-10 at 01:40 PM.

  24. #24
    Join Date
    Oct 2018
    Posts
    24
    Quote Originally Posted by Strange View Post
    I thought you were talking about individuals, not civilisations. But that doesn't really matter. The Prisoner's Dilemma is an artificial situation designed to make cooperation impossible. If it weren't for that, the optimal solution would be for the prisoners to cooperate.
    Yes, I made this distinction multiple times.
    No, it matters more than anything else in this discussion.
    Cooperation in the context of TPD can only occur when there is another agent that controls both prisoners (like a mafia boss in real life). But it is impossible when several "apex agents" compete for the top position.

    Quote Originally Posted by Strange View Post
    It is. You only have to look at the real world.
    That real world where we've been evading a nuclear apocalypse by a hair's breadth? That real world where CO2 emissions are still rising despite every toddler knowing it's going to kill us? That real world where a person can have more money than a country?
    I'm not sure we're living in the same world.

    Quote Originally Posted by Strange View Post
    Because it is larger than its Schwarzschild radius (and larger than the Chandrasekhar limit).
    You might object that it will not be strong enough to support itself as it gets bigger. But then we come back to the supposedly intelligent actors knowing that and building a more optimal solution.
    It doesn't matter what kind of structure you build; you can always scale it up enough to fit inside its own Schwarzschild radius.
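
    As a sketch of what I mean (Python; the areal densities are my own illustrative numbers): even a thin shell kept at fixed surface density has mass growing as R^2, so its Schwarzschild radius, which is proportional to mass, eventually overtakes R. The crossover does come out at intergalactic scales for realistic shells, which is exactly the "in the limit" caveat:

    Code:
        import math

        G = 6.674e-11   # m^3 kg^-1 s^-2
        c = 2.998e8     # m/s
        LY = 9.461e15   # light year, m

        def critical_shell_radius(sigma):
            """Radius at which a thin shell of surface density sigma (kg/m^2)
            equals its own Schwarzschild radius: 2G(4*pi*R^2*sigma)/c^2 = R."""
            return c**2 / (8.0 * math.pi * G * sigma)

        for sigma in (100.0, 1000.0):   # illustrative areal densities
            R = critical_shell_radius(sigma)
            print(f"sigma = {sigma:6.0f} kg/m^2 -> R_crit = {R:.2e} m ({R/LY:.1e} ly)")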

  25. #25
    Join Date
    Oct 2009
    Location
    a long way away
    Posts
    10,775
    Quote Originally Posted by FunBotan View Post
    Cooperation in the context of TPD can only occur when there is another agent that controls both prisoners
    The setup is specifically defined to prevent cooperation. (I don't know whether it has any relevance to how people actually behave, anyway.)

    I'm not sure we're living in the same world.
    You don't see people working together to help each other and to solve problems?

    It doesn't matter what kind of structure you build, you can always scale it up enough to fit in its own Schwarzschild radius.
    Obviously not true.

  26. #26
    Join Date
    Oct 2018
    Posts
    24
    Quote Originally Posted by Strange View Post
    The setup is specifically defined to prevent cooperation. (I don't know whether it has any relevance to how people actually behave, anyway.)
    And the double slit experiment is specifically defined to demonstrate the wave-like behavior of light. So?

    Quote Originally Posted by Strange View Post
    You don't see people working together help each other and to solve problems?
    It depends on two factors: class of situation and scale. Cooperation is to be expected when the situation comes down to a positive sum game, but otherwise, it becomes increasingly rare as the scale increases.

    Quote Originally Posted by Strange View Post
    Obviously not true.
    Would you provide a counterexample then?

  27. #27
    Join Date
    Aug 2008
    Location
    Wellington, New Zealand
    Posts
    4,332
    Quote Originally Posted by FunBotan View Post
    Wait, why do you think a Dyson sphere cannot collapse into a black hole?
    Because they are in orbit around their star and inhabited by an intelligent civilization. You are back to your intelligence = ignorance mistake. No intelligent civilization will build any structure that could accidentally collapse into a black hole. No too-big spherical ball. No mega-structure that could collapse.

    We can speculate that an intelligent civilization might purposely build a black hole, e.g. as a communication device. The collapse would be energetic and easily detected over vast distances (an SF cause of gamma ray bursts?).

    You should actually read Grand Unified Theory of Everything. The paper you are relying on does not describe this universe. At best, it describes some toy models for AI, not human or other intelligence.

    Replies to a post should be relevant to the post. I wrote: The main flaw is easy to understand - the paper is not about this universe! etc.. A TED video where Alexander Wissner-Gross describes the contents of the paper is not a reply, especially since you seem to ignore that "new equation for intelligence"!

    There is some idiocy in that video from 2014.
    A silly story about aliens observing that asteroids are suddenly deflected from Earth and concluding that it is a physical effect not related to intelligence.
    Cosmology and a universe "finely tuned for intelligence" fantasy. Fine-tuned Universe is a proposition that the physical constants in this universe are fine-tuned for the "establishment and development of matter, astronomical structures, elemental diversity, or life as it is understood.". The fine tuning would be the same even if only bacteria existed on Earth.
    Cosmology and a probable "universal states that maximize the diversity of possible futures" fantasy.
    Game playing, chess, Go and AI with a possible "maximize future options" fantasy. Ditto for robotic motion planning?
    A video inside a video mainly touting a software engine!
    That video has fantasies about entropy. Entropy is not maximized by processes. The second law of thermodynamics is that entropy does not decrease over time in an isolated system.
    That video repeats the non-physical, not-this-universe scenarios in his paper.
    That video touts software that seems to no longer exist! I tried to find it but I get a quantum computing lab and Grand Unified Theory of Everything, where the "Entropica" link redirects to AWG's personal web site.
    Dr. Alexander D. Wissner-Gross
    Dr. Alexander D. Wissner-Gross has no publications related to Entropica.
    Dr. Alexander D. Wissner-Gross has no patents related to Entropica.
    Dr. Alexander D. Wissner-Gross has no companies related to Entropica.
    Dr. Alexander D. Wissner-Gross has no talks related to Entropica.
    Dr. Alexander D. Wissner-Gross has no press related to Entropica.
    Dr. Alexander D. Wissner-Gross has been ignoring his paper for 6 years. That is a sign of an abandoned or invalid idea.
    Last edited by Reality Check; 2019-Apr-11 at 02:18 AM.

  28. #28
    Join Date
    Jul 2018
    Posts
    128
    Hello FunBotan! I love talking about the Fermi Paradox, I find it fascinating. But before I comment on the details of what you propose, my biggest objection is that even if true this won't solve the Fermi Paradox. Your proposal concludes with "advanced enough civilizations will eventually collapse into a black hole". Is that about right?

    Well, in order to get to that point, you first have to become incredibly advanced. As in, probably at least a Kardashev II civilization that is building Dyson Swarms, right? Sorry to say it, but even if you did collapse the home system into a black hole, you have not killed the civilization. They would have many habitats in the outer parts of the solar system which would survive. They would have ships en route to other colonies with people on board. They would have some very well established colonies around many of the nearby star systems, each with its own Dyson Swarm. They would be in the process of establishing new colonies around more distant star systems. And all of this kind of activity would be both:

    a) Visible to anyone with a telescope

    b) Sufficient to prevent the entire civilization from dying just because their home star system was destroyed in a terrible accident.

    So before we even get into your proposal, I don't see it as a viable solution to the Fermi Paradox.

    Quote Originally Posted by FunBotan View Post
    Premise: A system is considered intelligent if its actions are aimed at maximizing its future freedom of action (this definition had been proposed by Alexander Wissner-Gross).
    While I would agree that a system that is maximizing its future actions is probably an intelligent system, I am with the others who are complaining that they can also think of intelligent systems (i.e. humans) NOT maximizing their future freedom of action.

    Proposition 1: Maximizing future freedom of action is equivalent to hoarding the greatest amount of resources.
    I agree with this, though perhaps "utilizing" would work better than "hoarding".

    Proposition 2: Given long enough time, all available resources will be controlled by a single individual.
    I take objection to the word individual, but if we replace it with "group", or "entity", or "civilization" then I'd be more inclined to agree. Basically whatever group is in charge of an area will eventually come to control the resources of that area, more or less exclusively.

    Proposition 3: That individual will be incentivized to secure storage of their resources by centralizing it in one spherical structure.
    Why only one spherical volume and not many? If you have colonies in star systems that span the galaxy, would it really make sense to force the colonies on the outer edges to travel 50,000 light years to the center of the galaxy to get their resources, rather than keeping it in a smaller storage structure much closer to where they need it?

    Proposition 4: This structure will, in the limit, collapse into a black hole.
    Why? This is the only proposition that I flat out disagree with, as I really don't understand why you would conclude this. Dyson Swarms, for instance, are not in danger of collapsing into a black hole; the habitats would all be in stable orbits FAR outside the Schwarzschild radius of the solar system. How could a Dyson Swarm collapse into a black hole?
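
    As a rough sanity check on that last point (Python; standard constants): the Schwarzschild radius corresponding to the Sun's entire mass is about 3 km, so habitats orbiting at roughly 1 AU sit tens of millions of times farther out than any horizon the system's mass could form.

    Code:
        G = 6.674e-11     # m^3 kg^-1 s^-2
        c = 2.998e8       # m/s
        M_SUN = 1.989e30  # kg
        AU = 1.496e11     # m

        r_s = 2.0 * G * M_SUN / c**2   # Schwarzschild radius of one solar mass
        print(f"R_s(1 M_sun) = {r_s/1000:.1f} km; a 1 AU orbit = {AU/1000:.3e} km; ratio ~ {AU/r_s:.1e}")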

  29. #29
    Join Date
    Aug 2008
    Location
    Wellington, New Zealand
    Posts
    4,332
    Quote Originally Posted by FunBotan View Post
    And the double slit experiment is specifically defined to demonstrate the wave-like behavior of light. So?
    The double slit experiment is an experiment that shows that anything that we might consider to be a particle also has wave-like behavior. Photons (particles of light) give an interference pattern. Electrons give an interference pattern. C60 molecules give an interference pattern. Molecules with 810 atoms give an interference pattern. This is a physics experiment.

    The Prisoner's dilemma is a game theory scenario "that shows why two completely rational individuals might not cooperate, even if it appears that it is in their best interests to do so".

  30. #30
    Join Date
    Oct 2009
    Location
    a long way away
    Posts
    10,775
    Quote Originally Posted by FunBotan View Post
    And the double slit experiment is specifically defined to demonstrate the wave-like behavior of light. So?
    You can't use a thought experiment where cooperation is specifically prevented as evidence that people will not cooperate under normal circumstances.


    It depends on two factors: class of situation and scale. Cooperation is to be expected when the situation comes down to a positive sum game, but otherwise, it becomes increasingly rare as the scale increases.
    Evidence?


    Would you provide a counterexample then?
    How big a counterexample: the solar system, the entire universe? Clearly structures of any mass can exist and be larger than their Schwarzschild radius. If it were not the case, the universe would not exist.

    Maybe you didn't mean what you wrote, though.
