Opinion: Looking at AI Past, Present, and Future

Alexander Bogey, Staff Reporter

During the 1980s, when artificial intelligence was viewed with fear and awe, John Searle produced the Chinese Room argument. Decades before our time, Searle put forth one of the biggest arguments against AI tools such as Dall-E and ChatGPT. To start, imagine you are in a sealed room, and outside is a man who speaks only Chinese (or, if you know Chinese, replace it with any foreign language of your choosing). The stranger slides a question under the door in Chinese, and you are given the task of answering it. Since you are illiterate in the language, you are given two books: one that translates Chinese to English, and one that does the reverse. You translate the text to English, think of an answer, then write it in Chinese and send it off.

You may have noticed that during this experiment, you are simply accepting an input (the question), interpreting it, then releasing an output (the answer). Looking at it this way, this process is very similar to the way computer programs work, from calculators to the software I am writing this article on. This concept can even be related to the most complex AI, such as ChatGPT. You give it a question, it runs it through its system, then it gives you an answer. But the fatal flaw lies in the fact that ChatGPT does not understand your language; rather, it interprets it through its own binary language. It is here we come to Searle's conclusion: no matter how complex or human-like these systems become, they will never have a true sense of consciousness or understanding of the information. This leads to the question: how can we trust something that does not understand anything beyond its own code? This concern is supported by chatbots presenting false information, as in the case of Google's Bard, which drove the company's stock down when it did just that. And how can these systems be creative? Dall-E can make beautiful pieces, but it could never understand and appreciate what it is creating. It is important to consider this when working with such systems.

If we look back on history, we can see that people have always been resistant to change. Socrates was against books, carriage drivers were against cars, older musicians hated electronic music; the list goes on. We are at a point in history where we are heading into an AI world, and I do not see a point in resisting something that the world will inevitably accept. Yes, this new wave will present new problems, as seen in the hysteria caused by deepfake videos and the ChatGPT homework controversy. However, AI will also solve some old problems, from increasing farming productivity to sending robots to aid people in natural disasters. AI is a double-edged sword, and we need to weigh the consequences against the benefits. In order to make the most of our situation, we have to use AI responsibly and keep proper control over it. Sometimes the greatest of intentions leads to the greatest evil. For example, Alfred Nobel founded the Nobel Prizes to make up for his biggest regret: inventing dynamite. Nobel envisioned his invention as a fast solution for clearing land for mining and canals, but he was left horrified when countries started using it in warfare. In order to prevent AI from becoming a threat, we have to make sure it is used for the right purposes and kept in the right hands. It is for this reason that ChatGPT was released to the public. The investors and creators behind OpenAI, such as Elon Musk, Sam Altman, Peter Thiel, and Reid Hoffman, made it their mission to make sure AI tools were not left in the hands of the elite, and to keep AI "open" to the public. By making the decision to release, they set a standard for how AI advancements should be handled.

But this gesture does not dismantle all the potential problems that are now becoming issues, and people are starting to grow skeptical. How will I know a robot will not replace my job? What is stopping someone from hacking a self-driving car and crashing it? Are we heading toward a Blade Runner-like world where robots are indistinguishable from humans?

It is understandable that with such rapid innovation, we need to take precautions in order to avoid falling like Icarus. The AI revolution is not stopping, and in order to reach an ideal future, we should learn how to maximize the potential of AI while implementing careful measures and understanding AI's limits. In the future, we may see the adoption of AI systems to help education in schools like Archmere. The future holds things we can only dream of, and we have to make sure that these systems benefit students when there are so many options to cut corners. With such steep advancement at our doorstep, it is important we tread lightly to avoid a self-inflicted disaster.