The Thinking Machine: A philosophical analysis of the Singularity
If you’ve ever seen The Matrix, 2001: A Space Odyssey, or even Short Circuit, you’re familiar with the idea that machines—robots and computers—can gain consciousness, become individually aware, and determine their own destinies.
Welcome to the rise of Artificial Intelligence.
Most questions about AI inevitably lead to the “Turing Test,” which examines a machine’s ability to exhibit intelligent behavior. As English mathematician and seminal computer scientist Alan Turing put it, a machine would pass his test if a person talking to it couldn’t tell it apart from another human being.
So far, no machine has passed Turing’s eponymous challenge.
But all that is going to change; it’s only a matter of time.
This eventuality has often been called the “Singularity,” and if popular culture is to be believed, it would spell the end of humanity as we know it.
Or will it?
Recently, the De La Salle University (DLSU) Department of Philosophy celebrated Turing’s birth centenary with a conference in his honor, devoted to this and other questions that his work on AI has raised for the generations of computer scientists who followed him.
In a plenary lecture entitled “The Singularity: A Philosophical Analysis,” visiting professor David Chalmers examined the likelihood of AI being achieved, the hypothetical timeframe of this event, and how it would affect humanity.
In his talk, he identified three levels of AI: AI (human-level intelligence); AI+ (greater than human intelligence); and AI++ (far greater than human intelligence).
Many great thinkers have been trying to predict when AI would emerge: the years 2000, 2021 and 2035 have been suggested. American researcher Eliezer Yudkowsky said in 1996, “Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again. Six months, three months – 1.5 months… Singularity.”
Chalmers, however, suggests that it will take a few more centuries; but once human-level AI does arrive, it will not take much longer for AI++ to emerge.
But even before that time comes, there are many other questions to ask about the nature of AI. One of the most important: Can humans still fit in a post-singularity world? If we cannot, Chalmers suggests, our only options would be extinction, inferiority, or segregation.
The possibility of integration with AI may be the only way to save humanity: upload the contents of human brains to the computers hosting the AI. At that point, the questions will involve not only the identity of the AI, but the identity of the human whose brain has been uploaded to the computer.
In theory, the AI could also reconstruct your consciousness based on the uploaded information. “Will it be me? Philosophical questions will become practical questions, essential for our survival,” Chalmers says.
These are certainly worrisome questions to ponder, but one might just as well ask: Why not just pull the plug?
Would that it were so easy.
In fact, right now, many of the gadgets you use already have the capacity to learn; they “remember” your preferences to serve you better.
An easy example is the predictive text on your phone: it suggests words as you type in letters, and you select the appropriate one. More sophisticated software notes the words you use most often, and suggests them before other words. The software also remembers your preferences for spelling or capitalization.
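The frequency-based suggestion described above can be sketched in a few lines of Python. This is a toy illustration only, not how any phone actually implements it; the class and method names are hypothetical:

```python
from collections import Counter

class PredictiveText:
    """Toy frequency-based word predictor (illustrative sketch, not a real phone's algorithm)."""

    def __init__(self):
        self.counts = Counter()  # how often the user has chosen each word

    def record(self, word):
        """Remember that the user typed or selected this word."""
        self.counts[word.lower()] += 1

    def suggest(self, prefix, n=3):
        """Return up to n known words starting with prefix, most-used first."""
        prefix = prefix.lower()
        matches = [w for w in self.counts if w.startswith(prefix)]
        return sorted(matches, key=lambda w: -self.counts[w])[:n]

# After the user picks "centre" more often than "center",
# the predictor learns to rank it first.
pt = PredictiveText()
for w in ["centre", "centre", "center", "central"]:
    pt.record(w)
print(pt.suggest("cen"))  # ['centre', 'center', 'central']
```

The key idea, as in the article, is simply that the software counts your past choices and lets those counts reorder its suggestions.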
It seems harmless enough when it’s just your phone figuring out whether you spell it “centre” or “center,” but here’s a more advanced version: computers that can play chess.
Due to its subtlety and complexity, chess is widely thought to exemplify the excellence of human intellect. Yet, in 1989, International Master David Levy was defeated by IBM’s computer, Deep Thought. And then came the coup de grâce: Deep Thought’s successor, Deep Blue, defeated reigning World Champion Garry Kasparov in their 1997 rematch.
Such powerful computers bring us ever closer to the Singularity. It all sounds like science fiction but, as Chalmers points out, these philosophical questions, even if still wanting for answers, may be essential to our future survival.
And because these issues affect us all, everyone needs to take part in the discussion, not just scientists and philosophers.
“It’s all about asking questions, and being sincerely interested in finding the answers to those questions. Everything else will follow,” concluded DLSU’s Mark Anthony Dacela, one of the organizers of the event.
No doubt, Turing would have agreed. — TJD, GMA News