Existential risk from artificial ignorance
Existential risk from artificial ignorance is the continuing threat that unforeseen interactions of control systems could someday result in human extinction. Control systems can cause, and have caused, problems ranging in scale from minor inconvenience to catastrophic damage. There has been a worldwide debate over existential risk from artificial general intelligence, but artificial ignorance poses risks already, without waiting for AI systems to become smarter than people. Wherever multiple systems interact, there is an increased chance of ignorance causing errors, whether through non-communication, over-reliance on another system, or lack of information about risks.
Artificial Ignorance in Art and Literature
The existential risk from artificial ignorance is depicted in the webcomic Artificial Ignorance, as indicated by its synopsis: "Artificial Ignorance was a weekly webcomic that follows two robots as they come to terms with their existence which takes place both in a physical dystopian future free of humans as well as cloudspace where their programing is limited to their imagination".[1]
Nithyananda Sangha explained how artificial ignorance plays a major role in everyday life, saying "Artificial Ignorance has started playing the leaders role".[2]
Current Trends in Artificial Ignorance
The development of "an ignorant (or un-aware) iteratively self-improving machine" is being carried out on systems that lack ethics, sense, an understanding of life, self-awareness, or a world view, as explained in Steve Moraco's article on artificial ignorance.[3] He also writes: "For the first time in human history there are no more technological or conceptual barriers between the current state of the art and a potentially self-designing machine. It's merely a matter of implementation."
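A minimal sketch of such an unaware, iteratively self-improving loop, assuming a toy metric (the `throughput` function and `speed` parameter are invented for illustration and do not come from Moraco's article):

```python
# Toy "ignorant" self-improving loop: it mutates its own parameter to
# maximize a single metric and is blind to everything else.
import random

def throughput(params):
    # Hypothetical performance metric; peaks around speed = 5.
    return params["speed"] - 0.1 * params["speed"] ** 2

params = {"speed": 1.0}
best = throughput(params)

for _ in range(1000):
    candidate = {"speed": params["speed"] + random.gauss(0, 0.5)}
    score = throughput(candidate)
    # Accept any change that improves the metric. There is no ethics
    # check, no world model, and no notion of side effects: anything
    # not measured by `throughput` is simply ignored.
    if score > best:
        params, best = candidate, score

print(f"converged to speed={params['speed']:.2f}, score={best:.2f}")
```

The point of the sketch is that nothing in the loop can even represent a risk that `throughput` does not measure.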
The concept of "Trees for the Forest – Ignore and Optimize" (described in the article "Artificial Ignorance – not normal is an opportunity") shows that filtering data for specific traits, while ignoring the rest of the records, is a necessary and common practice. While the practice is quite useful, it leaves users open to the risk of assuming that everything is being checked when, in actuality, only a small fraction of the data is.
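A short sketch with hypothetical log records shows how easily "check what matters, ignore the rest" turns into "assume everything was checked":

```python
# Sketch of "Ignore and Optimize": filter records for one trait of
# interest and silently discard the rest. All record fields are invented.
records = [
    {"source": "sensor-a", "level": "ERROR", "msg": "valve stuck"},
    {"source": "sensor-b", "level": "INFO",  "msg": "pressure nominal"},
    {"source": "sensor-c", "level": "DEBUG", "msg": "slow drift on gauge 7"},
    {"source": "sensor-d", "level": "ERROR", "msg": "timeout"},
]

# Optimization: inspect only the trait we care about...
checked = [r for r in records if r["level"] == "ERROR"]

# ...while everything else is ignored, including the DEBUG record
# hinting at a real problem ("slow drift on gauge 7").
fraction = len(checked) / len(records)
print(f"Inspected {fraction:.0%} of records; "
      f"{len(records) - len(checked)} never examined")
```

The filtering itself is sound practice; the risk described above arises when users forget that the ignored fraction exists at all.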
In her article "AI Ethics: Artificial Intelligence, Robots, and Society", Joanna Bryson wrote: "In fact, AI is here now, and even without AI, our hyperconnected socio-technical culture already creates radically new dynamics and challenges for both human society and our environment."
In an interview with Nick Bostrom, director of the Future of Humanity Institute at Oxford, Ross Andersen asks: "In one of your papers on this topic you note that experts have estimated our total existential risk for this century to be somewhere around 10–20%. I know I can't be alone in thinking that is high. What's driving that?"[4]
Software bots created using simple algorithms are competing with humans in the financial markets and on social media. In both places, including inside Wikipedia, different bots (whose developers may be unaware of the other bots' existence) wage battles that go on for years, adding increased volatility and major disruptions due to unintended bot interactions, as explained in the article "Even good bots fight: The case of Wikipedia" by Milena Tsvetkova, Ruth García-Gavilanes, Luciano Floridi, and Taha Yasseri:[5] "We have classified high-frequency trading algorithms as malevolent because they exploit markets in ways that increase volatility and precipitate flash crashes. ... Wikipedia is an ecosystem of bots. ... Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. ... Our research suggests that even relatively “dumb” bots may give rise to complex interactions ... a system of simple bots may produce complex dynamics and unintended consequences."
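The dynamic the authors describe can be reproduced in a toy simulation, assuming two hypothetical rule-based bots that each enforce a different "correct" spelling and neither of which knows the other exists:

```python
# Toy model of two well-intentioned bots reverting each other, in the
# spirit of Tsvetkova et al.; the rules and page content are invented.
page = "colour"

def style_bot(text):
    # Enforces British spelling; unaware any other bot exists.
    return text.replace("color", "colour")

def spell_bot(text):
    # Enforces American spelling; equally unaware.
    return text.replace("colour", "color")

reverts = 0
for day in range(1000):
    for bot in (style_bot, spell_bot):
        new = bot(page)
        if new != page:
            page, reverts = new, reverts + 1

print(f"{reverts} reverts and still no stable version")
```

Each bot behaves exactly as designed; the instability exists only in their interaction.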
"The growing interconnections between people, markets and networks together with the development of new technologies have increased the frequency and impact of large-scale disasters around the globe." "This paper takes a governance perspective by assuming that policy actions should be designed to cope with ignorance and large-scale losses, being the primary features characterising such emerging catastrophic risks."[6]
Jonathan Yarden wrote about the risks of ignoring security and explained how the increasing complexity of computer systems raises the risk level: "I'm convinced that the more feature-rich Internet software is, the more bugs it's going to have".[7]
Types of Artificial Ignorance
- Risk that none of the systems was designed to detect.
- Risk that a system was designed to detect, but failed to.
- Safe condition that is falsely noted as a risk.
- Risk that is known, but that each system relies on the other to handle.
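As a sketch only, the four failure modes above can be written as a small classifier; the enum names and scenario fields are invented for illustration, not an established taxonomy:

```python
from enum import Enum, auto

class IgnoranceType(Enum):
    UNDETECTABLE = auto()  # no system was designed to detect the risk
    MISSED = auto()        # a system was designed to detect it, but failed
    FALSE_ALARM = auto()   # a safe condition falsely noted as a risk
    DIFFUSED = auto()      # known risk; each system relies on another to handle it

def classify(designed_to_detect, detected, risk_is_real, relied_on_other):
    """Map a scenario onto one of the four types; None means no failure."""
    if detected and not risk_is_real:
        return IgnoranceType.FALSE_ALARM
    if not designed_to_detect:
        return IgnoranceType.UNDETECTABLE
    if relied_on_other:
        return IgnoranceType.DIFFUSED
    if not detected:
        return IgnoranceType.MISSED
    return None  # the risk was real, designed for, and actually caught

# e.g. a NORAD-style false alert: a safe condition reported as a risk
print(classify(designed_to_detect=True, detected=True,
               risk_is_real=False, relied_on_other=False))
```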
Past examples of Artificial Ignorance
A home security system called a couple back from vacation early, only for them to find the police at their home and no sign of a break-in. The error? The security system had been triggered by their robot vacuum. In this example, one system was ignorant of the other system's existence.
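A toy sketch of that interaction, with invented class and event names: a motion-based alarm with no model of other household devices, and a vacuum with no idea the house has an alarm:

```python
class SecuritySystem:
    """Fires on any motion while armed; has no model of other devices."""
    def __init__(self):
        self.armed = True

    def on_motion(self, source):
        if self.armed:
            print(f"ALARM: motion from {source}; notifying owners and police")

class RobotVacuum:
    """Runs on a schedule; unaware the house has an alarm at all."""
    def __init__(self, motion_sink):
        self.motion_sink = motion_sink

    def run_scheduled_clean(self):
        # To the alarm, the vacuum's movement is indistinguishable
        # from an intruder's.
        self.motion_sink.on_motion(source="robot vacuum")

alarm = SecuritySystem()
RobotVacuum(alarm).run_scheduled_clean()  # false break-in report
```

Neither component is buggy in isolation; the false alarm exists only in their combination.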
False alarms from NORAD have led to alert actions for U.S. strategic forces. In this example, the system was ignorant that it had mistaken a safe condition for a risk. How many times have the people of the world been brought to the brink of extinction by this type of artificial ignorance?
The Chernobyl disaster of 1986 is another example: the reactor test that led to the explosion was run with key safety systems disabled, leaving the plant's automated protections ignorant of the developing danger.
Catastrophes like the Exxon Valdez oil spill could be prevented in the future with smarter control systems. In this example, there was ignorance of imminent risks in the immediate environment.
On December 2, 1984, the Union Carbide pesticide plant in Bhopal, India, began leaking methyl isocyanate gas and other poisons into the air. More than half a million people were exposed to the toxins, eventually resulting in more than 35 thousand deaths. In the Bhopal disaster, there was over-reliance among the systems in place at the time: the mechanical, computerized, and administrative systems designed to prevent gas leaks each relied on the others to maintain safe conditions.
References
- ↑ Valenzuela, Kyle. "Artificial Ignorance". Behance. Retrieved 2 February 2017.
- ↑ Sangha, Nithyananda. "Are You In the Matrix of Artificial Ignorance or in Existential Reality?". Retrieved 2 February 2017.
- ↑ "Artificial Ignorance - Steve Moraco". Steve Moraco. 2016-02-16. Retrieved 2017-01-14.
- ↑ "We're Underestimating the Risk of Human Extinction - Ross Andersen". The Atlantic. 2012-03-06. Retrieved 2017-01-21.
- ↑ Tsvetkova, Milena; García-Gavilanes, Ruth; Floridi, Luciano; Yasseri, Taha. "Even good bots fight: The case of Wikipedia". PLOS ONE. Retrieved 4 March 2017.
- ↑ Castellano, Giuliano G. "Governing Ignorance: Emerging Catastrophic Risks – Industry Responses and Policy Frictions". University of Warwick – Law School; i3-CRG, École polytechnique, CNRS. SSRN 1650565.
- ↑ Yarden, Jonathan. "Is computing getting too complicated?". TechRepublic. Retrieved 2 February 2017.