I, for one, do not welcome our new robot overlords.
Let me elaborate.
Writing about Artificial Intelligence is a challenge. By and large, there are two directions to take when discussing the subject: focus on the truly remarkable achievements of the technology, or dwell on the dangers of what could happen if we reach Sentient AI, the point at which self-aware machines attain human-level intelligence.
This dichotomy irritates me. I don't want to have to choose sides. As a technologist, I embrace the positive aspects of AI, when it helps advance medical or other technologies. As an individual, I reserve the right to be scared poop-less that by 2023 we might achieve AGI (Artificial General Intelligence), or Strong AI: machines that can successfully perform any intellectual task a person can.
Not to shock you with my mad math skills, but 2023 is 10 years away. Forget that robots are stealing our jobs, will be taking care of us when we're older, and will be asking us to turn and cough in the medical arena.
In all of my research, I cannot find a definitive answer to the following question: How can we ensure humans will be able to control AI once it achieves human-level intelligence?
So, yes, I have control issues. I would prefer humans maintain autonomy over technologies that could achieve sentience, largely because I don't see why machines would need to keep us around in the long run.
It's not that robots are evil, per se. (Although Ken Jennings, the Jeopardy champion who lost to IBM's Watson, might feel differently.) It's more that machines and robots are, for the moment, predominantly programmed by humans, and humans always carry biases.
In a report published by Human Rights Watch and Harvard Law School's International Human Rights Clinic, "Losing Humanity: The Case Against Killer Robots," the authors write: "In its Unmanned Systems Integrated Roadmap FY2011-2036, the U.S. Department of Defense wrote that it 'envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure.'"
The "unmanned systems" refer to fully autonomous weapons that can select and engage targets without human intervention.
Who is deciding when a target should be engaged? Come to think of it, who's deciding who is a target? Do we really want to surrender control of weaponized AI to machines, in the wake of situations like the cultural morass of the Trayvon Martin shooting? How would Florida's Stand Your Ground Law operate if controlled by weaponized AI police enforcement hooked into a city's smart grid?
Short answer: choose Disneyland.
FUD Versus FAB
Image: Steve Mann
The term FUD stands for "Fear, Uncertainty and Doubt." It's a pejorative phrase with origins in the tech industry, where companies use disinformation tactics to undermine competitors.
FUD has evolved, however, into a tedious phrase leveled at anyone questioning certain aspects of emerging technology, often followed by accusations of Luddism.
But I think people have the wrong picture of Luddites. Paul Krugman recently wrote on this idea in the New York Times, noting the original Luddite movement was largely economically motivated, a response to the Industrial Revolution. The original Luddites weren't ignorant of the technology of the day, and they certainly grasped its ramifications (the loss of their work). They took up arms to slay the machines they felt were slaying them.
It's not too far a stretch to say we're in a similar environment, although the stakes are higher: strong AI arguably poses a wider swath of technological issues than threshing machines.
So, as a fan of acronym creation, I'd like to posit the following phrase to counter FUD, especially as it relates to potentially human-ending technology developing without standards to govern its growth:
FAB: Fear, Awareness and Bias
The acronym distinguishes the blind, reactionary fear used to proactively spread false information from a warranted, human fear grounded in the bias that it's okay to say we don't want to be dominated, ruled, out-jobbed or simply ignored by sentient machines.
Does that mean I embrace relinquishment, or abandoning AI-related research? Not altogether. The same Watson that won on Jeopardy is now being utilized in pioneering oncological studies. A kneejerk reaction to stop all work in the AI space doesn't make sense (and would be impossible to enforce anyway).
But the moral implications of AI get murky when thinking about things like probabilistic reasoning, which helps computers move beyond Boolean decisions (yes/no) to make decisions in the midst of uncertainty: for instance, whether or not to give a loan to an applicant based on his or her credit score.
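To make that concrete, here is a minimal sketch in Python of the difference between a Boolean cutoff and a probabilistic decision. Everything in it (the 650-point midpoint, the curve's steepness, the 0.7 risk tolerance) is a hypothetical number chosen for illustration, not a real lending rule:

    import math

    def approval_probability(credit_score: int) -> float:
        # Map a credit score to a probability of repayment using a
        # logistic curve. The midpoint (650) and steepness (0.02) are
        # hypothetical values for illustration, not lending criteria.
        return 1.0 / (1.0 + math.exp(-0.02 * (credit_score - 650)))

    def boolean_decision(credit_score: int) -> bool:
        # The Boolean version: a hard yes/no cutoff, no uncertainty.
        return credit_score >= 650

    def probabilistic_decision(credit_score: int, risk_tolerance: float = 0.7) -> bool:
        # The probabilistic version: the lender chooses how much
        # uncertainty to accept by setting a risk tolerance.
        return approval_probability(credit_score) >= risk_tolerance

    for score in (580, 650, 720):
        print(score, boolean_decision(score),
              round(approval_probability(score), 2),
              probabilistic_decision(score))

The moral murk lives in that risk_tolerance parameter: someone, or some machine, has to decide how much doubt is acceptable before a person is denied.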
It is tempting to wonder what would happen if we spent more time focusing on helping each other directly, versus relying on machines to essentially grow brains for us.
FAB Ideas
Image: Clyde DeSouza
"Nuclear fission was announced to the world at Hiroshima." James Barrat is the author of Our Final Invention: Artificial Intelligence and the End of the Human Era, which offers a thorough description of the chief players in the larger AI space, along with an arresting sense of where we're headed with machine learning: a world we can't define.
For our interview, he cited the Manhattan Project and the development of nuclear fission as a precedent for how we should consider the present state of AI research:
We need to develop a science for understanding advanced Artificial Intelligence before we develop it further. It's just common sense. Nuclear fission is used as an energy source and can be reliable. In the 1930s the focus of that technology was on energy production, initially, but an outcome of the research led directly to Hiroshima. We're at a similar turning point in history, especially regarding weaponized machine learning. But with AI we can't survive a fully realized human-level intelligence that arrives as abruptly as Hiroshima.
Barrat also pointed out the difficulty regarding AI and anthropomorphism. It's easy to imbue machines with human values, but by definition they're silicon, not carbon.
"Intelligent machines won't love you any more than your toaster does," he says. "As for enhancing human intelligence, a percentage of our population is also psychopathic. Giving people a device that enhances intelligence may not be a terrific idea."
A recent article in The Boston Globe by Leon Neyfakh provides another angle on the concern over autonomous machines. Take Google's self-driving car: what happens when a machine breaks the law?
Gabriel Hallevy, a professor of criminal law at Ono Academic College in Israel and author of the upcoming book When Robots Kill: Artificial Intelligence Under Criminal Law, adds to Barrat's assessment: machines need not be evil to cause concern (or, in Hallevy's estimation, to be criminally liable).
The issue isnât morality, but awareness.
Hallevy notes in "Should We Put Robots on Trial," "An offender (a human, a corporation or a robot) is not required to be evil. He is only required to be aware of what he's doing ... [which] involves nothing more than absorbing factual information about the world and accurately processing it."
Options for AI
The nature of FAB, as I'm proposing it, is to move beyond the dichotomy of only two ways of thinking about AI and elevate the work of unique thinkers in the space. Use our Fears about the nature of potential scenarios to help create Awareness of positive possibilities that will Bias us to action regarding AI, versus succumbing to complacency or tacit acceptance of inevitable overlord rule.
In that regard, I appreciated when James Barrat told me about the work of Steve Omohundro, who holds degrees in physics and mathematics from Stanford and a Ph.D. in physics from U.C. Berkeley, and is president of Self-Aware Systems, a think tank he created to "bring positive human values to new intelligent technologies."
He provides a refreshing voice in the AI community, acknowledging that "these systems are likely to be so powerful that we need to think carefully about ensuring they promote the good and prevent the bad."
In terms of using AI for positive means, it's worth watching two of his videos: his TEDx talk in Estonia on "Smart Technology for the Greater Good" (above) and a keynote talk at Oxford on "Autonomous Technology for the Greater Human Good."
Steve Mann, a pioneer in the field of wearable computing, has a theory of Humanistic Intelligence (HI) that adds another unique layer to the discussion surrounding Artificial Intelligence. The theory came from his Ph.D. work at MIT, where Marvin Minsky (whom many call the father of AI) was on his thesis committee.
Mann explains in the opening of his thesis, "Rather than trying to emulate human intelligence, HI recognizes that the human brain is perhaps the best neural network of its kind, and that there are many new signal processing applications, within the domain of personal technologies, that can make use of this excellent but often overlooked processor." By leveraging tools like Google's Glass or other intelligent wearable camera systems, we can enhance our lives as aided by technology, versus having our consciousness supplanted by it. He described his theory for our interview:
HI is intelligence that arises by having the human being in the feedback loop of the computational process. AI is not immediately a reality, whereas HI is here and now and viable. HI is a revolution in communications, not mere computation. It's really a matter of people caring about people, not machines caring about people.
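As a rough illustration of what "the human in the feedback loop" can mean, here is a minimal, hypothetical Python sketch (my own, not Mann's actual system): the machine handles what it is confident about and defers everything else to a person, whose answer feeds back into its memory.

    def classify(item, known_labels):
        # Toy classifier: full confidence for anything it has been
        # taught, a low-confidence guess for everything else.
        if item in known_labels:
            return known_labels[item], 1.0
        return "unknown", 0.2

    def humanistic_loop(items, known_labels, confidence_threshold=0.8):
        # Keep the human in the feedback loop: the machine decides
        # only when sure, asks the person otherwise, and learns from
        # each answer. A sketch of the idea, not Mann's HI framework.
        for item in items:
            label, confidence = classify(item, known_labels)
            if confidence < confidence_threshold:
                label = input(f"How would you label '{item}'? ")
                known_labels[item] = label  # human judgment feeds back in
            print(f"{item} -> {label}")

    humanistic_loop(["sunrise", "sunset"], {"sunrise": "morning"})

The point of the design is that the person is not an afterthought bolted onto the machine; the system is incomplete without human judgment inside the loop.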
Compared to the notion of the Singularity as described by Ray Kurzweil (the moment in time when machines gain true sentience), Mann's description of Humanistic Intelligence in full fruition is the Sensularity. It's an appealing concept: that technology assisting humanity towards greater innovation can feature compassion over computation as its primary goal.
HI features elements that ring of transhumanism, or H+, the idea that we could transform the human condition by merging technology with our bodies.
Image: Steve Mann
While many of us get anxious about the idea of ingesting sensors or replacing an eye with a camera, we don't think twice about prosthetic limbs (even ones embedded with a smartphone).
In Clyde DeSouza's science fiction novel Memories with Maya, however, AI and Augmented Reality add to the transhuman mix (in the form of haptic interfaces) by imagining how we'll interact with the reanimated avatars of our loved ones. The concept is fascinating and eminently credible. Think of the volume of content around a person: pictures, videos and words (sentiment expressed in texts, emails and social networking posts). It won't be long until we're able to fabricate or recreate people in virtual form.
DeSouza noted in an interview with Giulio Prisco for the Kurzweil blog, "Memories with Maya is a story that aims to seed ideas, grounded in hard science, on how AI, AR and advances in the field of deep learning and cybernetic reconstruction will eventually allow us to virtually resurrect the dead. A time will soon come when questions will need to be answered on the ethical use of such technology and its impact on intimate human relationships and society." The book imagines the repercussions of essentially keeping our loved ones alive beyond the time their bodies physically stop functioning.
When is the best time to discuss the ethical uses of these technologies? NOW.
The Depth and the Direction
"I'm hungry for depth." Peter Vander Auwera is cofounder of Innotribe, the innovation arm of SWIFT, the global provider of secure financial messaging services, and the cocreator of Corporate Rebels United, a group geared to creating actionable value practices within organizations.
Calling Corporate Rebels United a "do tank" versus a think tank, Vander Auwera learned to temper his passion for technology with a fervor for human connection. Like Mann's focus on people caring for people, Vander Auwera is calling for a revolution focused on empowering humans, which he outlined in a recent blog post, "Dystopian Futures":
We have come at a point where our only option [from dystopia] is a revolution [from being] data slaves and evolving as a new kind of species in the data ocean, trying to preserve what makes us human ... We will need a new set of practices for value creation; where data slaves dare to stand up and call for a revolution ... But it will be very difficult to turn back the wheel that has already been set in motion several decades ago.
I do not welcome our robot overlords.
I welcome aspects of accelerated learning and improvements in health, but not the full-stop acceptance of a time when AI will gain sentience, a moment many are working to expedite. I'm with Peter Vander Auwera: I love technology, but I want to be part of the revolution that dares to stand up and say, "I like being human! I want humans to retain autonomy over machines!"
My hope is that, in the way Genesis Angels (the $100 million fund created to spur AI and robotics startups) stepped up with money, someone will step up and weigh the ramifications of AI before unleashing it full-blown onto humanity.
For the robots or technology that may surpass our intelligence in the near future, observe my fleshy middle digit and hear me cry: "I wave my private parts at your aunties! Your mother was a hamster and your father smelt of elderberries!" (John Cleese, Monty Python and the Holy Grail)
It may be challenging to embrace such wonderfully crass humanity, but sometimes it's safer than a quest for the unknown.
Image: Jeff J Mitchell/Getty Images