Perhaps humanity is artificial?

Source: Pixabay

AI will bring fantastic new benefits to almost every realm – and also risks a downside that may shake us to the core

The past four decades or so have seen spectacular technological advances that have vastly disrupted industries, brought unimaginable convenience and efficiencies, and scrambled our brains in ways we may regret.

So tremendous are the changes that it is remarkable that the journey felt mostly incremental. Rare were the moments when it was clear something spectacular had been unleashed. But we are certainly experiencing such a moment with the arrival of ChatGPT, the hyper-bot cooked up by an outfit called OpenAI.

There have been other seminal moments over the years, of course. One was the arrival of the personal computer, available in the late 1970s and early 1980s in the form of machines like the Commodore VIC-20 and the TI-99/4. Then came the move from clunky DOS (“disk operating system”) command lines to user-friendly graphical user interfaces, pioneered in some obscurity in the 1970s at the legendary Xerox Palo Alto Research Center in California. The desktops, icons, drawers, and dropdown menus took a while to get to the public, finally introduced to the masses by Apple in 1984 (via the Macintosh) and then copied and popularized by Microsoft (the Windows operating system).

Mobile phones, even the arrival of the World Wide Web in the 1990s, social media—all these had huge impact, but no big bang moment to announce them (an exception was the arrival of the smartphone—again developed elsewhere, but honed by Apple—announced with much fanfare in a Steve Jobs-driven spectacle in 2007).

The advances that drove artificial intelligence also happened without fanfare, behind closed doors in university labs and corporate R&D centers. I remember studying robotics and computer vision for my master’s project in the mid-1980s—it was a challenge to get the program to identify a circle. These fields, along with machine learning (algorithms that improve as they gain experience) and natural language processing (parsing and generating human language), all saw huge advances, oddly unremarked by the ordinary person.

Something critical was churning in the background the whole time: Moore’s Law, which posits that the density of transistors in integrated circuits doubles approximately every two years. This projection, made almost 60 years ago by Intel co-founder Gordon Moore, means that computing power (and access to stored data) would constantly increase at ever greater speed. To infinity, as far as the human eye can see. And quantum computing holds the promise of even faster development.
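It is easy to underestimate how quickly that doubling compounds. A toy calculation (the starting density of 1 and the 60-year span are illustrative assumptions, not historical data) shows that a two-year doubling over six decades multiplies capacity roughly a billionfold:

```python
# Illustrative compounding of Moore's Law: transistor density doubling
# roughly every two years. The inputs are assumptions for the
# arithmetic, not real chip data.

def doublings(years, period=2):
    """Number of complete doublings in a span of `years`."""
    return years // period

def projected_density(start_density, years, period=2):
    """Density after `years`, doubling every `period` years."""
    return start_density * 2 ** doublings(years, period)

# Over 60 years, a two-year doubling compounds to 2**30,
# a bit over a billionfold.
print(projected_density(1, 60))  # 1073741824
```

Exponential growth is the whole point of the author’s argument: each step feels incremental, but the cumulative effect is staggering.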

This means that if an algorithm can figure out a certain type of thing, it will eventually be able to figure out every instance of that type of thing, instantly. If it can remember one thing, it will eventually be able to instantly remember everything.

This puts humans at a clear and growing disadvantage versus machines in every area of activity that is based on calculations and recall.

Since chess, for example, is in fact nothing more than a very extensive but finite set of possible moves and responses by both parties, a computer that can search those possibilities far deeper than any human, and instantly recall vast libraries of past games, will defeat any human player. There’s no way we can compete on calculation and recall.
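The underlying idea—that a finite game can in principle be searched exhaustively—can be sketched on a game small enough to solve completely. (This is an illustration of game-tree search in general, not of how chess engines actually work; they combine deep search with evaluation heuristics because chess’s tree is far too large to exhaust.) Here, players alternately take 1 or 2 stones and whoever takes the last stone wins:

```python
# Exhaustive game-tree search on a tiny finite game: players
# alternately take 1 or 2 stones; taking the last stone wins.
# Chess is the same idea at a vastly larger, intractable scale.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win with `stones` left."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # Win if any legal move leaves the opponent in a losing position.
    return any(not can_win(stones - take)
               for take in (1, 2) if take <= stones)

# Multiples of 3 are forced losses for the player to move.
print([n for n in range(1, 10) if not can_win(n)])  # [3, 6, 9]
```

Every position is either a forced win or a forced loss—there is nothing left to intuit once the tree has been searched, which is the author’s point about calculation and recall.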

Where can we compete? Well! Inspiration, creativity, passion, emotion, the poesy of the spirit. An algorithm cannot create the magic of the Beatles or Bach. It cannot write the poetry of Homer or Robert Frost. It cannot do Emile Zola or Somerset Maugham. It cannot live romance. It cannot love.

Anyone who has used ChatGPT will begin to see the problem.

The program is not in itself a breakthrough—not exactly, since the technology behind it has been developing for years. We all have encountered early versions of what artificial intelligence can do in Apple’s Siri, or even with a Google search phrased as a question. Also, perhaps less impressively, in the aggravating “help” chat services run by various banks.

But ChatGPT crosses a certain line, unmarked but undeniable. It is uncanny in its cleverness, and it has seized center stage in the global conversation since being suddenly made available two months ago to the general public. That happened concurrently with news that Microsoft was investing another $10 billion in OpenAI, in a deal that would leave it with a 49 percent stake.

People all around the world began to realize that the bot can answer almost every question reasonably, and sometimes intelligently. It can pass university-level law and business exams. It can write lyrics in certain styles. It can advise on politics. It certainly can recite facts in essays that are better written than what the average person would produce.

There are some downsides: the algorithm seeks to offend no one, and its inclination to hedge and balance can reek of a bothsidesism that in a human might be ridiculed or seen as cowardly. Lacking the recklessness of some humans, it won’t take a stand.

Once this wrinkle is addressed, the implications are staggering for professions like journalism and education. This level of AI can write the first draft of the first draft of history. It can answer questions from students. It can let students cheat, never learning or memorizing anything themselves. Will a generation allowed to use such tools not find its brains in atrophy?

At this very moment almost every major business, and certainly every consultancy, is holding emergency meetings to calculate how to integrate ChatGPT into its activities. It is an oversimplification to make this all about ChatGPT, but the notion that AI is ready, or nearly ready, for prime time is correct. This is not a fad.

There are great benefits to be savored here. Medical diagnoses may become speedier and more accurate. Companies may make wiser decisions. The efficiencies will multiply.

Naysayers have focused on fears that the job market will not adjust as too many existing jobs are rendered moot. Efficiency is not a net positive if 90 percent of the workforce is put out of business, and if only computer programmers (and perhaps sex workers) find any demand for their services.

They may be worrying about the wrong thing. Luddites have always expressed concerns such as these, and humanity has adjusted. There is brilliance in our species (along with a vexing dumbness, to be sure).

The real concern is that we start doubting that brilliance. If “generative” (meaning content-creating) AI starts producing art and music and even novels that we like well enough, people will start to wonder whether there ever existed such things as inspiration and romance. That’s because no matter how smart AI becomes, it will never be anything more than algorithms and recall. When it becomes good enough, people may start wondering whether that is true for them as well.

Science cannot yet explain the spark of life and the very existence of consciousness, but one day it might. There have long been those who argue that we are, in the end, nothing more than neurons. As AI spreads, these “mechanists” may gain the upper hand; most of us might conclude that talk of spirit and magic is but pitiful delusion.

Put another way: if machines built of nothing but circuitry and code can do what we do, perhaps the convergence suggests that humans are actually nothing more than biological computers.

What will that do to our state of mind? To art and culture? To our desire to procreate? Will rates of depression, already depressing, not soar to untenable heights?

In the spirit of the times, I asked ChatGPT.

“The development and use of AI technology does not necessarily lead people to conclude that everything is calculation and there is no such thing as human inspiration,” the algorithm said, ignoring my warnings about unhelpful hedging. “It is ultimately up to individuals to form their own opinions and beliefs about the role of technology and human creativity. Some may see AI as a tool to enhance human capabilities, while others may view it as a threat to human uniqueness.”

So, we know at least one thing: humans, or some of them, are still less boring than the cream of the AI crop.

(A version of this story appeared in Newsweek and Ask Questions Later.)

