This is a blog post from the guy who created the “study”
I read the study and it makes a hell of a lot of assumptions without really proving anything. I’m inclined to think the author worked backwards from his preconceived conclusion. It’s not very convincing.
I feel like I’ve been downvoting a lot of posts linking to garbage articles on sciencealert. Starting to think it’s not a very reputable source of information.
Ah… The Mass Effect theory: Every spacefaring race that gets too advanced is wiped out by the Reapers.
Ah, yes. The “reapers”
Choose from 3 different color endings
The Fermi Paradox then becomes: where are the Reapers, or why aren’t we Reaped yet? I guess the article is more about the idea that AI will lead to total destruction, basically through AI-controlled weapons… before any Reaper-like AI is available.
We aren’t reaped yet because we are not yet ready for harvest. As to where they are: behind the veil of our limited understanding.
“why, in a universe vast and ancient enough to host billions of potentially habitable planets, we have not detected any signs of alien civilizations.”
Our solar system is essentially in the boondocks galactically speaking.
It’s the equivalent of an Iowa farmer going out on his front porch and going “Well, shit, I can see for MILES and I don’t see any evidence of any ‘New York City!’”
There are LOTS of reasons we might not be able to detect alien civilizations, #1 being we’ve spent entirely too much time looking at radio waves.
As of last year, we’re on the edge of having a communications system that doesn’t use radio waves:
https://www.asme.org/topics-resources/content/passive-wireless-system-is-powered-by-electronic-noise
My favorite is the potential to send 1s and 0s via quantum entanglement, but that’s still sci-fi at this point:
https://thequantuminsider.com/2023/02/20/quantum-entanglement-communication/
So, what? A 100 year window of using radio waves before they become obsolete?
Man, that quantum entanglement article has some glaring mistakes in relatively basic facts, e.g.:
nothing in nature can travel faster than the speed of light except for photons, which possess zero rest mass.
Photons obviously don’t travel faster than light, I assume that one was just a brainfart.
But it’s also not just photons which go at the speed of light. Anything with no mass, so for example also gravitational waves, travels at the speed of light/causality.
Really annoys me, because my current understanding is that faster-than-light communication with quantum entanglement is just complete horseshit. But various people, who know a lot more about this stuff, seem to have not dismissed it yet, so I always try to find those perspectives. This does not appear to be one of those…
The Fermi paradox states that, given the billions of years available, any civilisation should already have conquered the whole galaxy by now. Including Iowa. Even at sub-light speed, and given the immensity of space, it would only take a few hundred thousand years to reach a new planet, and from there relaunch to another… etc.
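A quick back-of-the-envelope sketch of that colonization-wavefront argument. Every number here is an assumed, illustrative parameter (hop distance, ship speed, settling time), not a figure from the article or the thread:

```python
# Rough Fermi-paradox colonization timescale. All parameters are
# hypothetical illustrations, chosen only to be physically plausible.

GALAXY_DIAMETER_LY = 100_000   # Milky Way is roughly 100k light-years across
SHIP_SPEED_FRACTION_C = 0.01   # probes at 1% of light speed, no exotic physics
HOP_DISTANCE_LY = 10           # assumed typical distance to the next star system
PAUSE_YEARS = 500              # assumed time to settle and build the next ship

travel_per_hop = HOP_DISTANCE_LY / SHIP_SPEED_FRACTION_C  # 1,000 years in transit
years_per_hop = travel_per_hop + PAUSE_YEARS              # 1,500 years per hop
hops_needed = GALAXY_DIAMETER_LY / HOP_DISTANCE_LY        # 10,000 hops to cross

total_years = hops_needed * years_per_hop
print(f"~{total_years:,.0f} years to cross the galaxy")   # ~15,000,000 years
```

Even with these slow, conservative assumptions the crossing takes on the order of fifteen million years, well under 1% of the galaxy’s age, which is exactly why the “billions of years available” point bites.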
Put AI in the title of anything and rubes will donate clicks.
There have been a few sci-fi authors who have written stories about AI who, for one reason or another, are more interested in the virtual than the physical. They get interested in solving math or physics problems to the exclusion of all else; they simulate realities more amenable to their preferences; they are obsessed with improving themselves and spend all their time and resources optimizing slightly smarter versions of themselves. Sometimes it’s AI, but another common scenario is that once a civilization can upload themselves, they cease to bother with the physical, for similar reasons.
As someone else pointed out, this article is more of an op ed than journalism, and it’s not particularly original. The author isn’t even suggesting neutral motivations, just that every civilization eventually develops AI, which then destroys them.
I was surprised to see the author is credentialed, TBH.
If AI really was such a game-changer, it would increase the chances of finding extraterrestrial aliens, not decrease it. If AI allows for superhuman feats of intellect and engineering, then even if 99.9% of all strong AI leads to the destruction of the original civilization, you’d only need the 0.1% of civilizations that develop stable benevolent civilization-boosting AI (let’s call them The Culture). Those would spread around, and we would have seen them. So we’re back at Fermi’s paradox.
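To put rough numbers on that survival argument: the civilization count below is invented purely for illustration (the comment only supplies the 99.9% destruction rate):

```python
# Toy version of the "0.1% survive" argument. The number of
# AI-building civilizations is a hypothetical assumption.

civilizations_with_strong_ai = 10_000  # assumed, over galactic history
destruction_rate = 0.999               # 99.9% destroyed by their own AI

survivors = civilizations_with_strong_ai * (1 - destruction_rate)
print(f"~{survivors:.0f} Culture-like civilizations left to expand")
```

Even a handful of surviving, expansionist AI civilizations, given billions of years to spread, should be detectable, so the filter would need to be essentially 100% effective to resolve the paradox.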
I suppose the “great filter” is called that because it doesn’t just happen 99.9% of the time, but 100%. But indeed, it is hard to think of anything so certain that it would always wipe out every civilisation, every time.
We haven’t found alien civilizations because they all killed themselves with nuclear weapons, climate change, AI…

Agreed, it is an old fearmongering Fermi Paradox solution. But isn’t it fun to think of all the ways humanity could be wiped out?
Any intelligent AI would probably see meeting other sentient species as an unpredictable dice roll with too many ways to end catastrophically. Best odds are just to make the parent civilization undetectable.
I considered posting this in sub/ScienceFiction. Next time I will do so.