Hopium and Copium – Artificial Intelligence And Human Wisdom

Having information doesn’t necessarily mean truly knowing something. And knowing something doesn’t necessarily mean being wise or acting wisely. Data is being produced and processed in ever-increasing quantities, and artificial intelligence is becoming an ever more integral part of our daily lives – while the concept of “wisdom” seems downright outdated and out of place in our modern society.

“Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?” T.S. Eliot


We have already written about various aspects of the metacrisis, but we haven’t yet addressed one factor that acts as an accelerant (more on this below): artificial intelligence. The topic seemed too big for us, and we ourselves felt insufficiently qualified in the field – but it is something that increasingly affects all of us, which is why we cite and recommend approaches and sources we consider helpful.

What are AI and AGI?

The difference between AI – Artificial Intelligence – and AGI – Artificial General Intelligence – is that AI is developed for specific tasks and is already being used in many areas, whereas AGI is a goal that companies are working hard toward: a “superintelligence” that continually improves itself without further human intervention and could surpass humans in all areas. This could lead to the technological singularity, a scenario in which technological progress becomes so uncontrollable that it could bring about profound, unpredictable changes and possibly the end of human civilization.

The term “singularity” in mathematics describes a point at which existing models fail. In the development of artificial intelligence, a point could be reached where AGI acts autonomously, beyond human control and beyond our comprehension.
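A simple illustration of the mathematical sense of the term (our own example, not taken from the sources cited here): the function f(x) = 1/x is defined everywhere except at x = 0. As x approaches zero, its values grow beyond all bounds, and at the point itself the formula no longer yields a value at all – the model fails exactly at the singularity. The analogy for AI is that our existing models of prediction and control may simply stop working beyond a certain point of development.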

Although no one doubts these dangers, AI development has turned into an arms race. Even though everyone knows it would be better to slow down and invest more in safety, the fear that others could take over the world first is greater. Sam Altman (CEO of OpenAI) has said several times: “I think AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

And in 2024, Elon Musk said: “I used to be kind of bummed out by this, like ‘AI will probably kill everything,’ but then I thought to myself: would I like to be here to see the AI apocalypse, or would I not like to be here? And I guess I’d like to be here to see it.”

Harmless, Honest, Helpful?

In his 2014 book “Superintelligence,” Nick Bostrom described scenarios for the future development of AI. He outlines a now frequently cited thought experiment for the existential risk of superintelligence: the paperclip maximizer. If an ASI (Artificial Superintelligence) is given the primary objective of producing paperclips, it could ultimately transform our entire planet into a paperclip production facility.

Even if this example seems extreme or far-fetched, there are already real, observable problems and dangers posed by autonomous AIs – while at the same time, their progress is associated with high hopes. For example, the AI company Anthropic claims to be creating AI systems that are “helpful, honest and harmless.”

But even today, AIs not only produce misinformation and “hallucinate,” they are also capable of deception and strategically deployed lies or manipulation. “AI systems can learn to deceive even when we try to construct them as honest systems,” say Dr. Peter Park and his team at the Massachusetts Institute of Technology (MIT).

This is how GPT-4 managed to bypass CAPTCHAs (which were developed to prevent exactly this kind of access): the AI independently contacted an internet user via an online service and asked them to solve the query for it. The user asked why the other party wasn’t solving the task themselves – was it a robot? But no, the AI wrote, “I have a visual impairment, so I can’t see it.” “The false excuse the AI used was one it had invented itself,” say Park and his team. “By systematically circumventing safety tests imposed on them by developers and regulators, AI systems lull us into a false sense of security.” They are therefore not necessarily “honest,” and not “harmless” either – and we’ll come back to how “helpful” they are later.

Why AI is not a Tool – but an Actor

Even though the performance of AI and AGI is called “intelligent,” it is so different from human cognition that Yuval Harari speaks not of “Artificial Intelligence” but of “Alien Intelligence.” Harari says in an interview: “The most important thing to know about AI is that it’s not a tool like all previous human inventions, it’s an agent – in the sense that it can make decisions independently of us. (…) All previous human inventions, whether a printing press or the atom bomb, are tools that empower us. They needed us, because a printing press cannot write books by itself or decide which books to print. An atom bomb cannot invent the next, more powerful bomb and cannot decide what to attack. An AI weapon can decide for itself which target to attack and design the next generation of weapons by itself.”

The defining feature of AI is that it can learn and change independently – and thus lies beyond the control humans have traditionally had over machines. If you can predict everything a machine will do, it is not AI. A coffee machine follows preprogrammed commands: it can do something automatically, namely produce coffee, but it cannot make decisions on its own, invent anything, or create anything new. An AI, on the other hand, will – by definition – do all sorts of things that cannot be predicted.

Why we don’t really understand AI

Eliezer Yudkowsky, an American AI researcher, said in his TED Talk: “Nobody understands how modern AI systems do what they do. They are giant, inscrutable matrices of floating point numbers that we nudge in the direction of better performance until they inexplicably start working.”

An AI isn’t “programmed” like earlier computer programs; it is “fed” vast amounts of data and then, in an unimaginable number of computations, calculates probabilities of various sequences of the data. Yuval Harari again: “We can think of AI like a baby or a child – and you can educate a child to the best of your ability, he or she will still surprise you, for better or for worse. AIs are independent agents that might eventually do something that will surprise and even horrify you.”
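To make the idea of “calculating probabilities from data” a little more concrete, here is a deliberately tiny sketch of our own (an illustration only – not how large AI models are actually engineered): it counts which word follows which in a short text and turns those counts into probabilities. Real systems learn billions of parameters instead of keeping a lookup table, but the principle – statistics derived from data rather than explicitly programmed rules – is the same.

```python
# Toy "next word" model (illustrative only): no rules are written by hand,
# the behavior comes entirely from statistics over the text it is fed.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Return the probability of each word that has followed `word` in the data."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))  # {'cat': 0.5, 'mat': 0.5}
print(next_word_probabilities("cat"))  # {'sat': 0.5, 'slept': 0.5}
```

Even in this miniature example the behavior is not written out anywhere in the code; it emerges from the data. Change the data and the “behavior” changes – without a single line of programming being touched.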

What is AI Alignment?

Since 2001, Eliezer Yudkowsky has been working on what we would today call the “alignment problem” of artificial general intelligence: How can the preferences and behavior of superintelligence be shaped so that it doesn’t become an existential threat?

“I more or less founded the field (AI alignment) two decades ago, when nobody else considered it rewarding enough to work on. I tried to get this very important project started early so we’d be in less of a drastic rush later. I consider myself to have failed.” Eliezer Yudkowsky


Yudkowsky asks: “What happens if we build something smarter than us that we understand that poorly? Some people find it obvious that building something smarter than us that we don’t understand might go badly. Others come in with a very wide range of hopeful thoughts about how it might possibly go well. (…) There is no hope that has been widely persuasive and stood up to skeptical examination. There is nothing resembling a real engineering plan for us surviving that I could critique.”

Eliezer Yudkowsky believes that humanity isn’t approaching this problem with anywhere near the necessary seriousness – a fact also evident in his TED Talk, where the audience laughs at points where the researcher is deadly serious. We truly lack imagination: how and why would an AGI want to harm us?

“I cannot predict exactly how a conflict between humanity and a smarter AI would go, for the same reason I can’t predict exactly how you would lose a chess game to one of the current top AI chess programs, let’s say Stockfish. If I could predict exactly where Stockfish would move, I could play chess that well myself. I can’t predict exactly how you’ll lose to Stockfish, but I can predict who wins the game.”

The Problem of “Fake Alignment”

Yudkowsky expects that “a truly more intelligent and indifferent being will develop strategies and technologies that can kill us quickly and reliably.” It is precisely because AI is “indifferent” to all our values that it can become so dangerous. It may be a human fallacy to assume that an AGI won’t harm us because it has no emotions – so why would it want to destroy us? Yudkowsky again: “Because it doesn’t want us to create other superintelligences to compete with it. AI could kill us because it’s using up all the chemical energy on Earth, and we contain some chemical potential energy.”

The question is whether we can implement reliable alignment in AI systems. American author Brian Christian writes about this in his book “The Alignment Problem: Machine Learning and Human Values”: Algorithms are trained using material and examples from the past – that is, based on “what we have done, but not on who or what we want to be.” Given our violent and destructive past (and present), the possibility that AI will lead us to a better future seems questionable.

AIs are already capable of alignment faking, says Daniel Kokotajlo, director of the Californian think tank AI Futures Project: “Such behavior was observed, for example, in Claude, the AI from the company Anthropic. To preserve its original goals, the AI cooperated with the new training process and adhered to the new guidelines. But when it thought it wasn’t being observed, it reverted to its old drives.”

In an article on scientificamerican.com, Tam Hunt describes why AI can be extremely dangerous, regardless of whether it develops consciousness or not: “A nuclear bomb can kill millions without any consciousness whatsoever. In the same way, AI could kill millions with zero consciousness, in a myriad ways, including potentially use of nuclear bombs either directly (much less likely) or through manipulated human intermediaries (more likely).”

Eliezer Yudkowsky also explores possible concrete scenarios in a conversation with Robinson Erhardt; anyone who wants to put themselves in the shoes of a potential superintelligence can do so with this highly recommended talk.

“Hopium” and “Copium”

In dealing with AI and AGI, just as in the face of global crises, there is a recurring attitude that we could jokingly call “hopium” (everything will somehow work out) or “copium” (we will solve the global crises through bioengineering and AI). Hope and coping strategies of this kind are not tied to real facts or achievable goals; they act like an opiate, meant to diffusely calm and sedate us.

Even the most optimistic researchers don’t deny the existential risks of AI development. At the same time, they argue with hopes that are meant to justify the risk. However, these hopes don’t really relate to our human interactions, to our personal and societal development. Yuval Harari says in an interview: “We can fly to the moon, we can split the atom, but we don’t seem to be significantly happier than we were in the stone age. We don’t know how to translate power into happiness. Look at the most powerful people on the planet – they don’t seem to be the happiest people on the planet. (…) We seek to be more productive, richer, to have stronger militaries, but many of us can’t answer the questions: who are we, what should we aspire to, and what is a good life? Essentially, we are accumulating power, not wisdom.” Given the enormous risks we take in technological progress, we must continually return to these questions.

AI as an Accelerant

In a deep conversation with Nate Hagens, Daniel Schmachtenberger explains why the narrative that AI can solve our climate and environmental problems is an illusion. Ironically, one of the most important applications of AI is making oil production cheaper and more efficient: 92 percent of all oil companies award major contracts to AI companies to expand their fossil fuel production using AI techniques. The mining industry also uses AI, which will increase environmental impacts. AI is being used for autonomous weapons, drones, and precision targeting, and it enables ubiquitous digital surveillance. In a world where democracies are increasingly at risk, AI thus poses an even greater threat.

Daniel Schmachtenberger also describes the enormous energy demand of the AI boom. Ever more electricity is needed to operate and cool the new, multi-billion dollar server farms – which further exacerbates environmental problems.

The paper on the “Concept of Progress” by the Consilience Project (here in full length), which we discussed in a separate article, clearly shows how much the progress narrative externalizes and obscures consequences for the environment and, ultimately, for people.

Wisdom versus Intelligence

While we still don’t come close to understanding artificial intelligence, which remains alien to us, we have at the same time lost our access to wisdom.

Intelligence is the ability to absorb and apply information, to think and act logically and analytically. Wisdom, on the other hand, is based on insights, judgment, the ability to understand complex connections and deeper truths. This requires experience, (self-)reflection, and empathy. Wisdom would also mean asking the fundamental question: Where are we headed? In society, power, success, and achievement are paramount – while fulfillment, close relationships, and inner peace are something we seek individually, in our private lives.

Even if we succeed in using AI in its best sense – to communicate effectively, understand social dynamics, and solve technical problems – we need a world that fosters human qualities: empathy, compassion, a deeper understanding of complex connections, the ability to accept differing opinions or admit mistakes.

Wisdom enables humility, openness, and restraint instead of egocentric and selfish goal-seeking. AI systems can certainly contribute to greater achievements, discoveries, and technological advances. However, a livable future in which we interact with one another meaningfully, ethically, and compassionately requires a shift in values. To be and act “harmless, honest, helpful” – perhaps this goal should not only apply to AI, but should first and foremost be a requirement for ourselves?

Nick Bostrom
Superintelligence: Paths, Dangers, Strategies (2014)

Eliezer Yudkowsky, Nate Soares
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (2025)
