Tuesday, July 01, 2025

Corralling AI

It’s been a while since I’ve taken on the subject of artificial intelligence, partly because it’s such a difficult topic to gain enough perspective on to say something concrete and useful.

That’s where an interview with a top AI researcher comes in: “Max Tegmark: Can We Prevent AI Superintelligence From Controlling Us?” I’ll summarize some of MIT professor Tegmark’s main observations, but I urge anyone who is interested to watch the entire conversation, which lasts one hour.

Tegmark establishes that AI researchers have gotten much closer to creating artificial general intelligence (AGI) than almost anyone expected by this point. AGI is considered an existential risk because it is not only going to be as smart as us, it will also be capable of making itself much more intelligent (and therefore more powerful) without any further involvement from us — if we let it. Left on its own, it would rapidly evolve.

And it’s probably only a matter of a few years until this critical threshold moment for AGI is upon us.

Tegmark explains that humanity’s problem when confronted with this super-intelligence is equivalent to that of a snail in relation to human beings. There was never any doubt which species — snails or humans — would control life on Earth, as humans are more intelligent and therefore more powerful than snails. 

But when it comes to AGI, there are limits to our human brainpower that machines don’t share, so we could well end up occupying the position of the snails in relation to AGI.

Do not despair just yet. Tegmark is pushing hope, not fear.

First, by comparing AGI to nuclear bombs, he points out that humanity has for 80 years avoided extinction through a combination of regulation, deterrence and our shared survival instinct.

A similar global approach, Tegmark believes, can save us from extinction by way of super-intelligent machines. The key first step is government regulation — setting basic safety standards that prevent the release of AGI in forms that are too general and therefore too risky for humans to control.

The solution, he believes, is to confine AI products to narrower applications, such as tools. Curing cancer, self-driving cars, and creating wealth are all good goals for AI, whereas making war on our species is not.

As for the global competition with China, Tegmark believes something like the US-USSR standoff will be possible and necessary in the face of AGI, since it represents an alien species that in many ways will be more powerful than either country.

Come to think of it, maybe the scenario in the film “Independence Day” is a better reference point for all this. When it comes to the ultimate battle between ‘Us’ and ‘Them’, humans can put aside their differences long enough to overcome an alien invader. 

It looks like we’re gonna have to do that for real.

To contact your Congressional representatives about regulating AI, click on these links for your elected officials in the House or the Senate.

See also “Guardrails for AI.”

(Thanks to my friend and AI researcher John Jameson for alerting me to Tegmark’s interview.)

MUSIC VIDEO:

The Rolling Stones & Bob Dylan, “Like a Rolling Stone,” live in Rio de Janeiro
