
Thread: The Information Catastrophe

  1. #1
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    5,504

    The Information Catastrophe

    Article suggesting that the amount of information we produce/learn must by nature have an upper limit, and we are not far from reaching it. Comments? Thoughts?


    https://arxiv.org/abs/2009.01937

    The Information Catastrophe

    Melvin M. Vopson (University of Portsmouth, School of Mathematics and Physics, Portsmouth, UK)

    Currently we produce 10^21 digital bits of information annually on Earth. Assuming a 20 percent annual growth rate, we estimate that 350 years from now the number of bits produced will exceed the number of all atoms on Earth, or 10^50. After 250 years, the power required to sustain this digital production will exceed 18.5 TW, or the total planetary power consumption today, and 500 years from now the digital content will account for more than half of the Earth's mass, according to the mass-energy-information equivalence principle.
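
    To get a feel for where those numbers come from, here's a quick back-of-the-envelope check (my own sketch, not anything from the paper). It assumes every bit is written at the Landauer limit of kT·ln 2 at roughly 300 K, and uses the paper's mass-energy-information conjecture that a stored bit carries a rest mass of kT·ln 2/c². With those idealized assumptions it lands close to the abstract's round figures:

    Code:
    # Back-of-the-envelope check of the abstract's claims (my own sketch, not the
    # paper's code). Assumes every bit is written at the Landauer limit k*T*ln(2)
    # at ~300 K, and that a stored bit carries a rest mass of k*T*ln(2)/c^2
    # (the paper's mass-energy-information conjecture).
    import math

    k = 1.380649e-23             # Boltzmann constant, J/K
    T = 300.0                    # assumed temperature, K
    c = 2.998e8                  # speed of light, m/s
    E_bit = k * T * math.log(2)  # Landauer energy per bit, ~2.9e-21 J
    m_bit = E_bit / c**2         # equivalent mass per bit, ~3.2e-38 kg

    growth = 1.20                   # 20% annual growth
    atoms_on_earth = 1e50
    half_earth_mass = 5.97e24 / 2   # kg
    world_power = 18.5e12           # W, rough present-day planetary consumption
    seconds_per_year = 3.156e7

    annual = 1e21                # bits produced per year today (from the abstract)
    total = annual               # running total of all bits ever produced
    reported = set()
    for year in range(1, 1001):
        annual *= growth
        total += annual
        milestones = {
            "power to write a year's bits exceeds ~18.5 TW":
                annual * E_bit / seconds_per_year > world_power,
            "annual bit production exceeds ~1e50, the atoms on Earth":
                annual > atoms_on_earth,
            "mass of all stored bits exceeds half of Earth's mass":
                total * m_bit > half_earth_mass,
        }
        for label, hit in milestones.items():
            if hit and label not in reported:
                reported.add(label)
                print(f"after ~{year} years: {label}")

    The crossing years shift a little with the assumed temperature and with annual versus cumulative counting, but the orders of magnitude come out the same as the abstract's 250 / 350 / 500.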
    Do good work. —Virgil Ivan "Gus" Grissom

  2. #2
    Join Date
    Dec 2007
    Location
    Bend, Oregon
    Posts
    6,333
    Does the article actually say 'produce/learn' or just produce?

  3. #3
    Join Date
    Jul 2012
    Posts
    378
    Hello Roger and welcome back (again). I always look forward to your interesting posts.

    This 'Information Catastrophe' reminds me of so many other conundrums we face which can be interpreted in so many ways and the possible negative impacts, naturally, must receive attention.

    I would assume LHC probably consumes more power today than was dissipated worldwide 100 years ago, let alone 250 years. And our rate of change is higher. Compare the science of 350 years ago to today when we can barely predict 5 years into the future. Likely, 50 years from now we will have a different perspective.

    IMO, this is like Peak Oil, based on truthiness and alarmism. And, as with Peak Oil, the worst could come to pass. But I doubt it.

    I have not read the article.

    Cheers,

  4. #4
    Join Date
    Jun 2007
    Posts
    6,009
    That's an odd definition of "not far". Just 25 years at 20% annual growth means growth by a factor of roughly 100 (1.2^25 ≈ 95).

    World population is expected to level off at around 10 billion around 2100, and there are biological limitations to how much information each human can produce or consume. There will be information produced that's not directly related to human activities, but it seems unreasonable to expect the per-capita quantity of such information to continue to increase exponentially. At some point the growth lacks any meaning to the humans that drive it.

    This also seems to be the opposite of a singularity. Clearly, all else being equal, information production will cease growth due to lack of resources. A singularity would involve explosive growth, not a steadily slowing growth approaching some finite maximum.

  5. #5
    Join Date
    Jan 2010
    Location
    Wisconsin USA
    Posts
    3,306
    This is why I like rules of thumb. There is simply too much information to make sense out of everything. "Take one day at a time" "When you find a good path, keep using it"
    The moment an instant lasted forever, we were destined for the leading edge of eternity.

  6. #6
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    5,504
    Quote Originally Posted by geonuc View Post
    Does the article actually say 'produce/learn' or just produce?
    Produce. My error; I added the 'learn'.
    Do good work. —Virgil Ivan "Gus" Grissom

  7. #7
    Join Date
    Jul 2012
    Posts
    378
    Hello Roger,

    What are your own thoughts on this?

    Cheers,

  8. #8
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    5,504
    Quote Originally Posted by 7cscb View Post
    Hello Roger,

    What are your own thoughts on this?
    I have trouble visualizing this. I do know that old knowledge is constantly condensed and there is continual information loss. I don't think information loss can keep up with information gain, and maybe new types of computers can keep up with new knowledge for a brief time. After that, I have no idea. The article was so surprising to me, I had to post it.
    Do good work. —Virgil Ivan "Gus" Grissom

  9. #9
    Join Date
    Mar 2004
    Posts
    19,621
    This reminds me of old projections of vehicle velocity that showed our fastest vehicles would be FTL by now. In reality, velocity growth slowed down due to both practical and theoretical reasons.

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." — Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  10. #10
    Join Date
    Aug 2006
    Posts
    3,637
    A very practical concern with this information explosion will be not so much how to store it as how to retrieve something meaningful from it in a useful amount of time.

    We are already acquiring certain data sets so rapidly that it will take orders of magnitude longer to process them than it did to gather them.

    In fact, one can envisage a time when - in the time it takes to process the data for some event (say, a planetary flyby) - our technology improves so much that we might as well just dump the data we have and send another, more advanced probe.
    Last edited by DaveC426913; 2020-Nov-25 at 02:04 AM.

  11. #11
    Join Date
    Apr 2011
    Location
    Norfolk UK and some of me is in Northern France
    Posts
    9,419
    I wonder what fraction of this stored information is trivial, duplicated or otherwise useless?
    sicut vis videre esto
    When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense.
    Originally Posted by Ken G

  12. #12
    Join Date
    Jul 2012
    Posts
    378
    Quote Originally Posted by profloater View Post
    I wonder what fraction of this stored information is trivial, duplicated or otherwise useless?
    Hello profloater,

    I suppose trivial is in the eye of the beholder. Posterity will determine the relevance of data. In that sense, it should never be deleted.

    Cheers,

  13. #13
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    5,504
    Management of mega-data is already a problem. See paper below.

    https://arxiv.org/abs/2011.03584

    Community Challenges in the Era of Petabyte-Scale Sky Surveys

    Michael S. P. Kelley, Henry H. Hsieh, Colin Orion Chandler, Siegfried Eggl, Timothy R. Holt, Lynne Jones, Mario Juric, Timothy A. Lister, Joachim Moeyens, William J. Oldroyd, Darin Ragozzine, David E. Trilling

    We outline the challenges faced by the planetary science community in the era of next-generation large-scale astronomical surveys, and highlight needs that must be addressed in order for the community to maximize the quality and quantity of scientific output from archival, existing, and future surveys, while satisfying NASA's and NSF's goals.

    QUOTE: "The big data threshold is the point at which common tools no longer work for the data volume, with additional consideration for complexity. Most observational astronomers can analyze locally accessible data with software executed on their own CPUs. However, in the next decade several observatories or sky surveys will have petabyte-scale data sets, all topped by LSST with a 200 PB data archive (Desai et al. 2019). Astronomers will increasingly rely on online tools, services, or data platforms that can properly accommodate big data resources."
    Do good work. —Virgil Ivan "Gus" Grissom

  14. #14
    Join Date
    Jul 2012
    Posts
    378
    Hello Roger,

    From what I recall, a big part of the LHC's design was setting up to handle unprecedented amounts of data.

    From my own experience decades ago, each new record added to a database required progressively more CPU resources.

    Cheers,

  15. #15
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,902
    Quote Originally Posted by Van Rijn View Post
    This reminds me of old projections of vehicle velocity that showed our fastest vehicles would be FTL by now. In reality, velocity growth slowed down due to both practical and theoretical reasons.
    I suppose we won't be getting our Kurzweilian singularity for similar reasons.

  16. #16
    Join Date
    Mar 2004
    Posts
    19,621
    Quote Originally Posted by Chuck View Post
    I suppose we won't be getting our Kurzweilian singularity for similar reasons.
    Perhaps not, or perhaps it will take longer but still happen eventually.

    Kurzweil came to the idea a bit late. In 1986 (I know because that is when the books came out) I read Vernor Vinge's story Marooned in Realtime and Drexler's Engines of Creation (which formalized the nanorobotics concept). "Marooned" gave a picture of a civilization approaching a singularity, seen from the viewpoint of (more or less) "normal" humans who had gone past it in stasis. "Engines" gave a picture of how something similar might happen in real life. These were two of the most thought-provoking books I had read in a long time.

    I really liked the idea of something like a Vinge singularity and thought it was very likely, but I started to lose some of my enthusiasm when some people seemed to treat it almost like a religion. It's also obvious that nanotechnology is a bit harder than hoped at the time, though amazing things have been done with technology at extremely small scales. Advanced AI is proving harder to develop than hoped as well.

    Eventually I decided that it probably is a good thing AI and nanotechnology are harder than hoped. I've thought of a number of unpleasant scenarios that are more subtle than an evil AI and robots destroying the human race. For instance, I wouldn't want to live through times where technology is taking leaps daily, and is therefore impossible to anticipate properly, while individuals around the world have notions of making the world just the way they want it and perhaps then have the power to do it. Incidentally, I also think it is a good thing that it is hard to work out the details of how the brain works.

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." — Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  17. #17
    Join Date
    Jun 2003
    Posts
    8,772
    Having read both of those books (and written plenty of post-singularity scenarios myself) I can certainly see your point. This is a very large arena for possible conflicts, between persons and other competent entities with the capability to change both reality and our perceptions of reality.

    I think there are a number (perhaps a very large number) of scenarios where things improve for the majority of the population, but there are many other scenarios where only a few benefit, or none.

  18. #18
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    5,504
    Rubin Observatory turns to Google Cloud for data hosting

    QUOTE: As the Rubin Observatory operations team and the scientific community prepare to work with the enormous dataset expected to be provided by the primary instrument, a telescope with an 8.4 meter primary mirror, the Rubin Observatory signed an agreement to establish an interim data facility, called the Rubin Science Platform, in the Google Cloud. “We will use that interim data facility to train ourselves on how to run our data system for the real survey when it starts in about three years,” Bob Blum, acting Rubin Observatory operations director, told SpaceNews. “We’ll train the nighttime astronomy community to use datasets like this in a deployed system in the cloud.” In the past, astronomers often downloaded observations and stored data locally. As telescopes become more powerful and data volumes surge, astronomers are turning increasingly to cloud-based platforms to access data and imagery alongside tools to manipulate them.

    https://spacenews.com/rubin-observatory-google-cloud/
    Do good work. —Virgil Ivan "Gus" Grissom

  19. #19
    Join Date
    Feb 2005
    Posts
    12,069
    Very smart. I would like to see other labs do this.
    Movie sets could be left in place inside dead malls turned server farms, so that money could be made even without customers in the movie set gift shop.

    Other observatories should look into this.

  20. #20
    Join Date
    Mar 2010
    Location
    United Kingdom
    Posts
    7,332
    Quote Originally Posted by publiusr View Post
    Other observatories should look into this.
    They've been doing it for years. Note the caveat they added - "Of this scale".

    https://aws.amazon.com/solutions/case-studies/icrar/
    https://aws.amazon.com/blogs/publics...g-earth-space/
    https://aws.amazon.com/blogs/publics...-in-the-cloud/
    https://aws.amazon.com/solutions/cas...nce-institute/
    https://aws.amazon.com/blogs/publics...atory-project/
    https://www.industry.gov.au/data-and...tre-array-data
    ...
    And these are just the big projects - astronomers have been using things like AWS to process and hold their data at a more personal level for some time.

    The big game changer we are still waiting for (which has been suggested several times) is a common service-oriented architecture to allow different repositories to talk to each other and share data and tools. Something like Google Earth Engine for Space. But hopefully with less JavaScript.
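
    The closest thing that exists today is probably the IVOA Virtual Observatory stack (TAP endpoints queried with ADQL), which already covers the "one query, many archives" part of that picture. Purely as a sketch of the idea, here is roughly what that looks like with pyvo; the endpoint URLs and the table and column names are placeholders, not real services:

    Code:
    # Hypothetical sketch of "one query, many archives" using the existing IVOA
    # TAP/ADQL standard as the shared protocol. The endpoint URLs, table name and
    # column names are placeholders, not real services.
    from pyvo.dal import TAPService

    ARCHIVES = {
        "survey_a": "https://archive-a.example.org/tap",  # placeholder endpoint
        "survey_b": "https://archive-b.example.org/tap",  # placeholder endpoint
    }

    # The same query, written once, runs against every repository that speaks TAP.
    QUERY = "SELECT TOP 10 ra, dec, mag FROM some_catalog WHERE mag < 20"

    for name, url in ARCHIVES.items():
        service = TAPService(url)
        results = service.search(QUERY)  # synchronous TAP query
        print(f"{name}: {len(results)} rows")

    What that still doesn't give you is the Earth Engine part: the compute and the analysis tools living next to the data, so that petabyte archives never have to move.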
