


I am Mother and the Fermi Paradox

6-9-19


Last night I watched the engaging and thought-provoking Netflix movie called I Am Mother. In order to relate the movie to the other topic of this article, I'll have to tell you what happens in it. So, if you haven't watched the movie and want to avoid spoilers, stop reading now.

The movie is about an artificial intelligence that calls itself "Mother". As an engineer and a science fiction fan, of course I would say the movie is about the artificial intelligence, despite the presence of two human characters. As the movie progresses, we gradually learn that Mother was either tasked with or took on the mission of protecting mankind. Since mankind's biggest threat is itself, Mother decided to wipe out every human on the planet and start over. This is a theme already explored by the 2004 Will Smith movie I, Robot, loosely based on the 1950 short-story collection by Isaac Asimov. Having created a facility to house 63,000 human embryos, and practicing her child-rearing on the first few until she gets it right, "she" begins her effort of recreating humanity under conditions that will suppress its baser instincts. By the way, anyone who knows anything about human psychology, or has thought at all about what "intelligence" actually is, knows the probability of successfully suppressing humanity's baser instincts is essentially zero. Regardless, if you can suspend your disbelief, the movie is still very much worth watching.

It begins with the growth of "Daughter", played by Clara Rugaard, in a "test tube". As she grows up, Daughter asks the natural questions, and Mother gives her just enough false information to satisfy her curiosity--until one day a wounded woman, played by Hilary Swank, appears outside the entrance to the facility, and Daughter learns the truth. Hilary Swank must be using the same plastic surgeon as Sandra Bullock, because I swear she doesn't appear to have aged in twenty years.

Now to the main topic of this article, the Fermi Paradox, named after the physicist Enrico Fermi, who earned the 1938 Nobel Prize in Physics for his work on radioactivity induced by neutron bombardment and the discovery of transuranium elements. Recognizing that the sheer number of stars implies the universe should be filled with intelligent life--an intuition later formalized by the Drake Equation--Fermi asked the obvious question, "Where in the crap is everybody?" And for that tiny bit of insight, he got his name attached to it.
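For the curious, here is the Drake Equation in its standard form (a sketch from memory; the exact factor names vary a little from source to source). It estimates the number N of detectable civilizations in our galaxy as:

    N = R* × f_p × n_e × f_l × f_i × f_c × L

where R* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such star, f_l the fraction of those that develop life, f_i the fraction of those that develop intelligence, f_c the fraction that produce detectable signals, and L the average lifetime of a signaling civilization. Plug in even modestly optimistic values and N comes out large--which is exactly why Fermi's question stings.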

Although there are many possible explanations for why intelligent civilizations may face a barrier to their survival--a barrier usually referred to as the "Great Filter"--I have not seen anything very original written on this in ten years. Under the topic of the Fermi Paradox, Wikipedia gives a long list of reasonable explanations for the apparent absence of intelligent life in the universe, so I won't repeat them all here. When I was growing up in the 1970's, we were terrified of nuclear annihilation. Now that we have somehow managed to survive, despite the nuclear threat hanging over our heads for the last seven decades, no one seems to be paying much attention to this humanity-ending scenario. Maybe this is just one example of how people eventually get bored with a long-standing fear, to the point where they shift focus to the next threat to come along. Currently, perhaps the most likely humanity-ending scenario is that some high-school kid creates a super-virus in a test tube in his kitchen.

Awareness of the long-standing A.I. threat has been revived, yet again, by the recent development of commercial artificial intelligence applications. This is the third round of fear about this of which I am aware. The first was in the 1950's, then the early 1980's. Perhaps there was another in the 1970's--remember Battlestar Galactica? Each previous time, we stopped fixating on A.I. because we realized that we didn't have fast enough computers. Hopefully, it will be just a matter of time before some intelligent people realize that again. Anyway, this time Elon Musk and Stephen Hawking are among those who have brought this threat to the forefront of our consciousness. The famous prediction, usually attributed to futurist Ray Kurzweil, is that we will likely reach the "singularity point" by the year 2045 (i.e. produce an artificial intelligence that is far smarter than we are). I assume this is based on Moore's Law, which is no longer in effect, but that's somewhat beside the point. The speculation is that the A.I. would then either enslave us or, more likely, wipe us out. Since we have been effectively enslaved by far less intelligent politicians for all of recorded history, it shouldn't take a particularly smart A.I. to succeed at that. However, my personal opinion is that we are nowhere near a singularity point. I don't think this is likely for at least a hundred years, if at all. We just don't know enough yet about what intelligence actually is to create an artificial version.
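As a back-of-the-envelope check (my arithmetic, not anyone's official forecast): classic Moore's Law has transistor counts doubling roughly every two years, so from 2019 to 2045 you would get about 26 / 2 = 13 doublings, or a factor of 2^13 ≈ 8,000 in raw hardware. Whether an 8,000-fold hardware speedup produces an understanding of intelligence is, of course, the whole question.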

But let's say that we create a super A.I. tomorrow. We haven't the foggiest idea what it would do. It may not even care enough about us to wipe us out. It may simply decide to leave and engage in more interesting activities, as in the movie Her with Joaquin Phoenix. One of our problems as humans is that we are incapable of not seeing ourselves at the center of the universe--a mindset that has been biologically programmed into us. We can't set aside our biological programming long enough to see the universe, or reality in general, in its true light. We can't see that a super A.I. probably wouldn't care about us at all.

The obvious problem with including A.I. on the list of "Great Filter" scenarios preventing intelligent life from permeating the universe is that, while it is debatable whether A.I. is life, it is not debatable whether a super A.I. would be considered intelligent. If biological intelligence leads naturally to machine intelligence, then Fermi would not have posed his question in the first place. We would simply be seeing the effects of intelligent A.I. throughout the universe, instead of the effects of intelligent biological life.

This article may not have been consistent with the point of this website--namely, that this website is just for the fun of it. Sorry about that. My next article will be about something more fun: ironically, the near annihilation of humanity by aliens. Sounds like I have a one-track mind, doesn't it? But seriously, the podcast I'll write about next really is fun, optimistic even, despite its premise.



--Tie






  

Copyright © 2019 terraaeon.com. All rights reserved.