Existential risk from artificial ignorance

Existential risk from artificial ignorance is the continuing threat that unforeseen interactions of control systems could someday result in human extinction. Control systems can cause, and have caused, problems ranging in scale from minor inconvenience to catastrophic damage. There has been a worldwide debate over existential risk from artificial general intelligence, but why wait for AI systems to become smarter than people when artificial ignorance already poses risks? Wherever multiple systems interact, there is an increased chance of ignorance causing errors, whether through non-communication, over-reliance on the other system, or lack of information about risks.
Artificial Ignorance in Art and Literature
The existential risk from artificial ignorance is depicted in the Artificial Ignorance webcomic, whose synopsis reads: "Artificial Ignorance was a weekly webcomic that follows two robots as they come to terms with their existence, which takes place both in a physical dystopian future free of humans as well as cloudspace where their programming is limited to their imagination."
Nithyananda Sangha explained how artificial ignorance plays a major role in everyday life: "Artificial Ignorance has started playing the leader's role."
Current Trends in Artificial Ignorance
In his article on artificial ignorance, Steve Moraco explains that "an ignorant (or un-aware) iteratively self-improving machine" is being developed on systems lacking ethics, sense, understanding of life, self-awareness, and a world view. He also writes, "For the first time in human history there are no more technological or conceptual barriers between the current state of the art and a potentially self-designing machine. It’s merely a matter of implementation."
The concept of "Trees for the Forest - Ignore and Optimize" (described in "Artificial Ignorance - not normal is an opportunity") shows that filtering data for specific traits, and ignoring the rest of the records, is a necessary and common practice. While quite useful, the practice leaves users open to the risk of assuming that everything is being checked, when in actuality only a small fraction of the data is.
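A minimal sketch of this filter-and-ignore pattern is shown below; the record strings, the alert pattern, and the triage function are illustrative assumptions, not taken from the article.

 # Hypothetical filter-and-ignore triage: only records matching known-bad
 # traits are inspected; everything else is silently dropped.
 import re
 
 ALERT_PATTERN = re.compile(r"ERROR|FAIL|INTRUSION")  # the traits we look for
 
 def triage(records):
     """Flag records with known-bad traits; silently ignore the rest."""
     flagged, ignored = [], 0
     for record in records:
         if ALERT_PATTERN.search(record):
             flagged.append(record)
         else:
             ignored += 1  # never inspected again -- the blind spot
     return flagged, ignored
 
 records = [
     "INFO  heartbeat ok",
     "ERROR pump pressure out of range",
     "INFO  valve cycled",  # a novel fault here would pass unnoticed
 ]
 flagged, ignored = triage(records)
 print(f"flagged {len(flagged)} of {len(records)} records; ignored {ignored}")

The point of the sketch is the else branch: the ignored records are exactly where an unanticipated risk would hide.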
In her article [http://www.cs.bath.ac.uk/~jjb/web/ai.html AI Ethics: Artificial Intelligence, Robots, and Society], Joanna Bryson wrote "In fact, AI is here now, and even without AI, our hyperconnected socio-technical culture already creates radically new dynamics and challenges for both human society and our environment."
In an interview with Nick Bostrom, director of the Future of Humanity Institute at Oxford, Ross Andersen asks, "In one of your papers on this topic you note that experts have estimated our total existential risk for this century to be somewhere around 10-20%. I know I can't be alone in thinking that is high. What's driving that?"
Software bots built from simple algorithms compete with humans in the financial markets and on social media. In both arenas, and notably inside Wikipedia, different bots (whose developers may be unaware of the other bots' existence) wage battles that go on for years, adding volatility and causing major disruptions through unintended bot interactions, as explained by Milena Tsvetkova, Ruth García-Gavilanes, Luciano Floridi, and Taha Yasseri in their article "Even good bots fight: The case of Wikipedia": "We have classified high-frequency trading algorithms as malevolent because they exploit markets in ways that increase volatility and precipitate flash crashes. ... Wikipedia is an ecosystem of bots. ... Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. ... Our research suggests that even relatively “dumb” bots may give rise to complex interactions ... a system of simple bots may produce complex dynamics and unintended consequences."
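The paper's point that simple bots can interact in unintended ways can be illustrated with a minimal sketch; the two spelling-enforcement bots below are hypothetical, not the bots studied by Tsvetkova et al.

 # Two hypothetical cleanup bots with conflicting rules: each is individually
 # simple and predictable, yet together they revert each other indefinitely.
 def bot_a(text):
     return text.replace("colour", "color")  # enforces US spelling
 
 def bot_b(text):
     return text.replace("color", "colour")  # enforces UK spelling
 
 article, reverts = "colour", 0
 for _ in range(10):  # on each pass, both bots patrol the same page
     for bot in (bot_a, bot_b):
         new = bot(article)
         if new != article:
             article, reverts = new, reverts + 1
 
 print(f"{reverts} reverts and still no stable state")  # prints: 20 reverts ...

Neither developer needs to know the other bot exists for the conflict to run forever; the dynamics emerge purely from the interaction.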
"The growing interconnections between people, markets and networks together with the development of new technologies have increased the frequency and impact of large-scale disasters around the globe." "This paper takes a governance perspective by assuming that policy actions should be designed to cope with ignorance and large-scale losses, being the primary features characterising such emerging catastrophic risks."
Jonathan Yarden wrote about the risks of ignoring security, explaining how the increasing complexity of computer systems raises the risk level: "I'm convinced that the more feature-rich Internet software is, the more bugs it's going to have."
Types of Artificial Ignorance
*Risk that no system was designed to detect.
*Risk that a system was designed to detect, but failed to.
*A safe condition that is falsely noted as a risk.
*Risk that is known, but that each system relies on the other to handle.
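These four types can be written down as a small taxonomy; a minimal sketch follows, in which the enum names are labels chosen here for illustration rather than established terminology.

 # A hypothetical taxonomy of the four types of artificial ignorance listed above.
 from enum import Enum
 
 class ArtificialIgnorance(Enum):
     UNDESIGNED = "no system was designed to detect the risk"
     MISSED = "a system was designed to detect the risk, but failed to"
     FALSE_ALARM = "a safe condition falsely noted as a risk"
     RELIANCE_GAP = "risk is known, but each system relies on the other to handle it"
 
 for kind in ArtificialIgnorance:
     print(f"{kind.name}: {kind.value}")

Each of the past examples below maps onto one of these labels.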
Past examples of Artificial Ignorance
A home security system called a couple back from vacation early, only for them to find the police at their home with no sign of a break-in. What was the error? Their home security system had been triggered by their robot vacuum. In this example, one system was ignorant of the other system's existence.

False alarms from NORAD led to alert actions for U.S. strategic forces. In this example, the system was ignorant that it had mistaken a safe condition for a risk. How many times have the people of the world been brought to the brink of extinction by this type of artificial ignorance?

In the Chernobyl disaster, operators ran a test with key safety systems disabled, ignorant of the reactor's unstable behaviour at low power. In this example, the people and systems in place lacked information about the risks.

Catastrophes like the Exxon Valdez oil spill could be prevented in the future with smarter control systems. In this example, there was ignorance of imminent risks in the immediate environment.

On December 2, 1984, the Union Carbide pesticide plant in Bhopal, India began leaking methyl isocyanate gas and other poisons into the air. More than half a million people were exposed to the toxins, eventually resulting in more than 35 thousand deaths. In this example, the Bhopal disaster, there was over-reliance among the systems in place at the time: the mechanical, computerized, and administrative systems designed to prevent gas leaks each relied on the others to maintain safe conditions.
 