On Being Pragmatic About Artificial Intelligence
Last week's commentary on AI, from The Economist and from Jaron Lanier in The New Yorker, is worth considering.
From "How to worry wisely about artificial intelligence" (The Economist, April 20, 2023):
"The fear that machines will steal jobs is centuries old. But so far new technology has created new jobs to replace the ones it has destroyed. Machines tend to be able to perform some tasks, not others, increasing demand for people who can do the jobs machines cannot.
"Proponents of AI argue for its potential to solve big problems by developing new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power. To others, the fact that AIs' capabilities are already outrunning their creators' understanding risks bringing to life the science-fiction disaster scenario of the machine that outsmarts its inventor, often with fatal consequences.
"This powerful technology poses new risks, but also offers extraordinary opportunities. Balancing the two means treading carefully. A measured approach today can provide the foundations on which further rules can be added in future. But the time to start building those foundations is now."
From "There is NO A.I." by Jaron Lanier (The New Yorker, April 20, 2023):
"We’re at the beginning of a new technological era—and the easiest way to mismanage a technology is to misunderstand it.
"It is widely stated, even by scientists at the very center of today’s efforts, that what A.I. researchers are doing could result in the annihilation of our species, or at least in great harm to humanity, and soon ... The arguments aren’t entirely rational: when I ask my most fearful scientist friends to spell out how an A.I. apocalypse might happen, they often seize up from the paralysis that overtakes someone trying to conceive of infinity.
"It’s easy to attribute intelligence to the new systems; they have a flexibility and unpredictability that we don’t usually associate with computer technology. But this flexibility arises from simple mathematics.
"The most pragmatic position is to think of A.I. as a tool, not a creature ... mythologizing the technology only makes it more likely that we’ll fail to operate it well—and this kind of thinking limits our imaginations, tying them to yesterday’s dreams. We can work better under the assumption that there is no such thing as A.I. The sooner we understand this, the sooner we’ll start managing our new technology intelligently."
Note: Lanier is a globally recognized computer scientist, currently at Microsoft Research, who led the development of virtual-reality technology in the 1980s.
OUR TAKE
Media coverage of this new era of technology is reaching "peak hype," and market participants will increasingly shift their focus to practical use cases and to addressing the technology's limitations.
The pace of innovation will outrun industry standards and government regulation efforts, but this "innovator challenge" is not unique to artificial intelligence.
GPT-4 and similar systems will be followed by still more powerful and capable ones. As these systems deliver significant benefits, the potential for harmful uses will grow as well.