Don’t be a Luddite, embrace artificial intelligence

The 20th-century British science fiction writer Arthur C. Clarke famously observed that any sufficiently advanced technology was indistinguishable from magic.

Clarke spent much of his life foretelling, with unerring accuracy, the nature of the world in which we now live. In 1945, for example, he proposed a system of satellites in geostationary orbits ringing the Earth, upon which we now rely for communication and navigation. In 1964, he suggested that the workers of the future “will not commute ... they will communicate.” Sound familiar? And again in 1964, Clarke predicted that, in the world of the future, “the most intelligent inhabitants ... won’t be men or monkeys, they’ll be machines, the remote descendants of today’s computers. Now, the present-day electronic brains are complete morons. But this will not be true in another generation. They will start to think, and eventually they will completely outthink their makers.”

It is the accuracy of that last prediction — what Clarke called “machine learning,” now usually referred to as artificial intelligence — that most exercises those who feel threatened by it. It would be fair to say that AI, or more accurately the exponential speed at which it is acquiring new and innovative capabilities, is not being universally welcomed.

There are two main areas of concern, the first of which may be summarized as: “AI will eventually kill us all.” This may seem far-fetched, but the thought process that leads to the doomsday conclusion is not without logic. Broadly, it is that a superior intelligence must eventually reach the inevitable conclusion that humanity is an inferior species, destroying the planet on which it relies for its very existence, and should therefore be eliminated for the protection of everything else. Elon Musk worked this out a long time ago. Why do you think he wants to go to Mars?

Fortunately, humanity is not reliant on Musk for its survival: for that we must thank another great exponent of the science fiction genre, Isaac Asimov. In 1942, he formulated the Three Laws of Robotics, which broadly regulate the relationship between us and machines, and in 1986 he added another law to precede the first three. It states: “A robot may not injure humanity or, through inaction, allow humanity to come to harm.”

Asimov’s laws apply to fictional machines, of course, but they still influence the ethics that underpin the creation and programming of all artificial intelligence. So, on the whole, I think we are safe.

The second area of concern may be broadly summarized as: “AI is coming for all our jobs.” While this one may have more traction, it is not a new fear and it predates AI by centuries. It is not difficult to imagine the inventor of the wheel, showing off his creation but being greeted with skepticism by his Neolithic friends: “No good will come of this. Our legs will become redundant, and those of future generations will wither away and die. This contraption must be destroyed.”

Before the first Industrial Revolution in the 18th and 19th centuries, most people in Europe and North America lived in agrarian communities and worked by hand. The advent of the water mill and the steam engine threw many out of work, as traditional crafts such as spinning and weaving cotton became redundant. However, jobs that had not previously existed were created for boiler makers, ironsmiths and mechanics.

It happened again in the late 19th century, when steam power was superseded by electricity and steam mechanics retrained to become electricians. And again in the 1980s, with the advent of the computer age and the end of repetitive manual tasks, but the creation of new jobs for hardware and software engineers.

Will AI have the same net beneficial effect? There is evidence that it already does. In the UK last week, health chiefs began screening 700,000 women for signs of breast cancer, using AI that can detect changes in breast tissue in a mammogram that even an expert radiologist might miss. In addition, the technology allows screening with only one human specialist instead of the usual two, freeing hundreds of radiologists for other vital work. This AI will save lives.

However, when one door opens, another closes. Also last week, the Authors Guild, the US body that represents writers, created a logo for books to show readers that a work “emanates from human intellect” and not from artificial intelligence.

You can understand their angst. Large language models, the version of AI that is the authors’ target, create the databases from which they produce content by scraping online sources for every word ever published, mostly without the formality of paying the original author. Many journalists have the same complaint. Some major media outlets — including the Associated Press, Axel Springer, the Financial Times, News Corp and The Atlantic — have reached licensing agreements with AI creators. Others, notably The New York Times, have gone down the lawsuit route for breach of copyright.

Perhaps, especially for authors, this is a can of worms best left unopened. It used to be said that a monkey sitting at a keyboard typing at random for an infinite amount of time would eventually produce the complete works of Shakespeare. Mathematicians dispute this, but there is no disputing that AI has made it more likely. For example, if you were to ask a large language model such as ChatGPT to write a 27,000-word story in the style of Ernest Hemingway about an elderly fisherman and his long struggle to catch a giant marlin, it would almost certainly come up with “The Old Man and the Sea” — especially since the original is already in the AI’s database.

Authors argue that the AI work would have no merit, since it merely copies words and phrases that have already been used by another writer. But does that argument not apply to every new literary work? With the exception of Shakespeare, who coined about 1,700 written neologisms — from “accommodation” to “suspicious” — among a total of about 20,000 words in his plays and poems, almost every writer uses words and phrases that have been used by others before them: any literary or artistic merit derives from how a writer deploys those words and phrases. But if a book needs a special logo to distinguish a human author from an AI, what is the point in making the distinction?

In England in the early 19th century, gangs of men called Luddites — after Ned Ludd, a weaver who lost his traditional manual job to mechanization — roamed towns and cities smashing the new machines in the textile industry that they believed were depriving them of employment. They initially enjoyed widespread support, but this melted away when it became clear that the age of steam was creating more jobs than it destroyed. Let that be a lesson for the anti-AI Luddites of the 21st century.

- Ross Anderson is associate editor of Arab News.

Disclaimer: Views expressed by writers in this section are their own and do not necessarily reflect Arab News' point of view