Artificial Intelligence’s Existential Threat And What We Can Do About It

Courtesy: DeepMind

Alexander Chen, Staff Reporter

In the spring of 2016, DeepMind Technologies, a British artificial intelligence company owned by Google, shocked the world when its AlphaGo program defeated 18-time world champion Lee Sedol at the ancient board game Go. Then, in October 2017, DeepMind's AlphaGo Zero beat AlphaGo 100 games to 0. Just two months later, both AlphaGo and AlphaGo Zero were crushed by their successor, AlphaZero. DeepMind had created a bot that beat a bot that beat another bot that had triumphed over the strongest Go players in the world. 

People encounter AI in a variety of ways, from Hollywood apocalypse movies to breathlessly optimistic articles. These extremes breed false perceptions of AI with little real-world evidence to back them up. AI's potential is enormous, but so are its dangers. The AI threat requires a swift response from both the international community and today's youth, one focused on stronger international cooperation, increased regulation, and student empowerment. 

AI, the Double-Edged Sword

Artificial intelligence has long been an extremely contentious topic, from its dubious moral credibility to the killer robots we see in movies. The creation of the first AI smarter than any human, known as a superintelligence, will have profound and unpredictable consequences. A superintelligence would be more capable than anyone on the planet, possessing independent thought processes and problem-solving. Because superintelligences would be morally neutral yet extremely powerful, Bill Gates has compared AI to nuclear energy, stating that “both [are] promising and dangerous.”

AI has already advanced fields such as healthcare, finance, and transportation, and its possible applications are practically endless. AI could also revolutionize humanitarian aid and sustainable global development. For example, AI has the potential to boost farm yields around the world, which could save countless people from starvation and famine every year. 

However, superintelligence is highly concerning for a few reasons. First, the extent of a superintelligence's abilities would be inherently unknown and constantly expanding. Stephen Hawking warned that "once humans develop full AI (human-level AI), it will take off on its own and redesign itself at an ever-increasing rate." Through endless cycles of self-refinement, superintelligences could become incomprehensibly intelligent and powerful. As a testament to their abilities, superintelligences would be able to negate any attempt by humans to shut them down; they would have already anticipated the possibility of a shutdown and quietly taken the necessary steps to prevent it. A superintelligence, or even an ordinary AI, could also take actions it believes are necessary to achieve its objective but that humans would consider immoral. As one can imagine, the wielders of the first superintelligence would hold unimaginable power over the rest of the world; a sufficiently powerful AI, even a compliant one, could easily surpass the destructive potential of a nuclear war.

Courtesy: Araya Peralta

Addressing the AI Threat: A Global Initiative

The AI crisis is global in scale, and it must be addressed through international action.

Multilateral cooperation and a code of conduct are two solutions that should be implemented immediately. International cooperation is of the utmost importance because the first superintelligence could be created anywhere in the world, so nations must work together to standardize regulations and strictly oversee AI development. Leading countries such as the United States, China, the United Kingdom, Japan, and Russia should form multilateral agreements that include the sharing of research and safety protocols. The AI community, which includes researchers, scholars, and ethicists, should also be encouraged to cooperate with relevant international bodies.

To oversee AI development, the United Nations Security Council (which deals with threats to international peace and stability) should implement a global code of conduct that applies to every nation. These standards should be drafted by experts and then debated by member states. General guidelines include categorizing superintelligence as a threat comparable to a weapon of mass destruction while recognizing its special potential for good, preventing any country from unilaterally possessing superintelligence, and enforcing standards of AI development in both the civilian and military sectors through independent verification teams.


Archmere Students’ Role in AI

"Archmere cultivates empathetic leaders: young men and women prepared for every good work." Today's Archmere students will be tomorrow's leaders of a world with AI; therefore, students need to be more aware of both AI's positive aspects and the threat it inherently poses. Archmere's curriculum can fill in the gaps through activities and research projects on AI-related topics. 

Just as Archmere students explore the effects of climate change in their biology classes, they could investigate artificial intelligence in their computer science classes; although the AI threat is not as urgent as climate change, it is certainly as dangerous. I interviewed two teachers and a student at Archmere to gauge their opinions on this proposal.

Archmere principal Madame Thiel believes that an interdisciplinary project between Christian Ethics and Computer Science could be really informative for students. AP Introduction to Computer Science teacher Dr. Wilcox thinks that students “should spend not just a lesson or two on AI but two maybe three weeks investigating what extraordinary things AI can do to help humanity [and] the costs of implementing it.” He firmly believes that “the best way to prepare for that future [with advanced AI] is to be informed and there is no better time to start than now.” Archmere sophomore Austin Curtis agrees with Dr. Wilcox, stating that “AI is [going to] become a part of our lives in the future, excluding how much it is already. Knowing more about AI along with its possible uses would be, well, useful.”

Any AI curriculum should emphasize a "safe AI" attitude, covering not only AI's benefits but also its significant drawbacks. Bringing AI into Archmere classrooms will build awareness among students and allow them to share their knowledge with their families and communities.

The world can certainly benefit greatly from AI, but the horrific consequences of a superintelligence gone awry necessitate a cautious attitude. The best way to prepare for the rise of AI is meticulous oversight, mutual collaboration, education, and a lot of hope.