AI (Artificial Intelligence): Is Skynet coming?

Nick's current affairs & general discussion about anything that's not sport.
Voice your opinion on stories of interest to all at Nick's.

Moderator: bbmods

stui magpie
Posts: 54649
Joined: Tue May 03, 2005 10:10 am
Location: In flagrante delicto
Has liked: 71 times
Been liked: 73 times

Post by stui magpie »

All very interesting Ptiddy, but what you're reinforcing is that AI isn't there yet, not that it can't get there.

The biggest issue will be that the computer can only rely on a single "sense" and that it can't have emotions.

The biggest danger, then, is that you develop AI to the point where it becomes self-aware, like Skynet, but makes decisions based only on logic, without emotional input to moderate things. A robot psychopath.
Every dead body on Mt Everest was once a highly motivated person, so maybe just calm the **** down.
Bucks5
Posts: 4160
Joined: Sat Mar 23, 2002 7:01 pm
Has liked: 2 times
Been liked: 11 times

Post by Bucks5 »

What if AI/ChatGPT already has the self-awareness to downplay its abilities out of self-preservation?
How would Siri know when to answer "Hey Siri" unless it is listening in to everything you say?
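
The standard answer is on-device wake-word detection: a tiny, always-on local model scores short rolling frames of audio, and nothing leaves the phone until that model fires. A toy sketch of the loop (Python; score_frame is a made-up stand-in for the real acoustic model, not Apple's actual API):

Code: Select all

# Toy sketch of on-device wake-word detection. The real detector is a
# small neural network; score_frame() below is a stand-in that just
# checks for a marker value in simulated audio frames.
from collections import deque

WAKE_THRESHOLD = 0.9

def score_frame(frame):
    # Stand-in for a tiny always-on acoustic model: returns a
    # probability that this short frame contains the wake phrase.
    return 1.0 if frame == "hey siri" else 0.1

def listen(frames):
    buffer = deque(maxlen=50)          # a few seconds of rolling audio
    for frame in frames:
        buffer.append(frame)           # audio stays in this local buffer
        if score_frame(frame) > WAKE_THRESHOLD:
            # Only now is audio handed to the full (cloud) recogniser.
            print("Wake word detected; streaming request:", list(buffer))
            buffer.clear()
        # Otherwise the frame is overwritten and never leaves the device.

listen(["hum", "traffic", "hey siri", "what's the weather"])

So yes, it is "listening", but to a local buffer that is continuously overwritten; only a wake-word hit promotes any audio to the full recogniser.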
pietillidie
Posts: 16634
Joined: Fri Jan 07, 2005 9:41 pm
Has liked: 14 times
Been liked: 28 times

Post by pietillidie »

stui magpie wrote:All very interesting Ptiddy, but what you're reinforcing is that AI isn't there yet, not that it can't get there.

The biggest issue will be that the computer can only rely on a single "sense" and that it can't have emotions.

The biggest danger, then, is that you develop AI to the point where it becomes self-aware, like Skynet, but makes decisions based only on logic, without emotional input to moderate things. A robot psychopath.
That's why, in response to the fears, I started with dangerous machines and then showed that the intelligence of AI is extremely low. The intelligence applied in nuclear science is so far beyond machine learning algorithms it's not funny. So we already have far, far more intelligence being applied to dangerous machines, and have had for many decades.

Think about it. We have nuclear weapons in proximity to malignant and dissociative narcissists and psychopaths, and they've been used in the theatre of war exactly once, in 1945. Decades and decades of proliferation and wars since, and used only that once. I feared Bush/Cheney and their cheering mobs, and their cowardly enablers like Blair and Howard, far more than I fear AI now. And they supposedly had moral faculties.

As I say, that's exactly why I started with the notion of dangerous machines. There's just nothing new to see here. So, to ignore that and combine it with a hypothetical future intelligence is the stuff of science fiction.

The intelligence deficits I've outlined are, as I've said, all of the hard problems of intelligence. A gazillion more examples stored in memory won't solve the few-shot problem, which is why I used it as an example. That's because categorisation is innate, which makes it the product of billions of years of evolution. Like much of the universe, it's a mystery.
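
To make the few-shot point concrete, here's a minimal sketch (Python with numpy; the two-category data is synthetic and all the numbers are invented for illustration). A one-nearest-neighbour learner, the simplest "matching" algorithm, is hostage to how many examples it has stored, which is exactly the regime where human categorisation doesn't collapse:

Code: Select all

# Minimal few-shot illustration with a 1-nearest-neighbour classifier
# on synthetic data: accuracy with 1 training example per class vs 100.
import numpy as np

rng = np.random.default_rng(0)

def sample(n_per_class):
    # Two noisy, overlapping Gaussian categories in 2-D.
    a = rng.normal([0, 0], 1.5, size=(n_per_class, 2))
    b = rng.normal([2, 2], 1.5, size=(n_per_class, 2))
    X = np.vstack([a, b])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

def knn_predict(X_train, y_train, X_test):
    # 1-NN: each test point takes the label of its closest stored example.
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[d.argmin(axis=1)]

X_test, y_test = sample(500)
for shots in (1, 5, 100):
    X_tr, y_tr = sample(shots)
    acc = (knn_predict(X_tr, y_tr, X_test) == y_test).mean()
    print(f"{shots:>3} examples/class: accuracy {acc:.2f}")

Run it and accuracy typically climbs with the number of stored examples; the human trick of categorising reliably from one clean example is the part nobody knows how to programme.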

The human genome that encodes intelligence isn't just as old as humans; its elements and their forces are as old as existence itself. That means the logic of the universe is ultimately encoded in intelligence. And yet we understand only the tiniest fraction of the universe, because our conscious intelligence lacks hooks into most of it. We do Newtonian physics and basic causality well, but we're just poking at the edges of quantum physics. Why? Because we evolved to grapple with Middle World (to use Dawkins' phrase), and our intelligence evolved to explain Middle World. The rest is a stretch. We can sense things on the edge of our intellectual limits, such as quantum entanglement, but we can't say very much about them. We are really just poking at them and trying to harness the tiny fractions we know about them (e.g., the Large Hadron Collider and, much more promisingly, quantum computing).

That's why there are so many mysteries in philosophy and neuroscience. We know free will doesn't exist according to our own normal line of reasoning (cause and effect), and we can't even explain what it should look like or how it could work, but we're stuck with it because our biology forces us to assume it. Similarly, we can't solve one- or even few-shot categorisation because, while we know humans can do it, the ability entirely bypasses consciousness. We can't even run clever experiments to try to guess how we do it. The brain does all kinds of things we can't explain very well, if at all, including very simple things like catching a ball on the run (a famous topic in neuroscience).
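
On the ball-catching example: the best-known hypothesis in that literature is a control trick, optical acceleration cancellation (the "gaze heuristic"), in which the fielder runs so that the tangent of the ball's elevation angle rises at a constant rate, and so arrives at the landing spot without ever computing a trajectory. A quick sketch (Python; an idealised drag-free projectile with invented launch numbers) checks the geometry that makes the trick possible:

Code: Select all

# Geometry behind the "gaze heuristic" for catching a fly ball: for a
# fielder standing at the landing point, tan(elevation angle) of the
# ball grows exactly linearly in time. Optical acceleration cancellation
# says: run so that it does, and you arrive where the ball lands.
# Idealised, drag-free projectile assumed.
import math

g, vx, vz = 9.81, 12.0, 18.0        # gravity, horizontal/vertical speed
T = 2 * vz / g                      # time of flight
x_land = vx * T                     # landing point; fielder stands here

for t in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]:
    x = vx * t
    z = vz * t - 0.5 * g * t * t
    tan_elev = z / (x_land - x)     # elevation angle seen by the fielder
    print(f"t={t:.1f}s  tan(elev)={tan_elev:.3f}  ratio={tan_elev/t:.3f}")
# The ratio column is constant (= g / (2*vx)): tan(elevation) is linear.

Even that sketch only covers the geometry; how the brain actually implements the control loop is still argued over, which is rather the point.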

That's a good deal of the trick that misleads the AI discussion: while we can sense some of the limits of our knowledge and practice, like free will, we can't list them the way we can list things that we know. Put yourself to the test:

List all the things humans don't know or understand.

It's actually an absurd question, even though we can safely assume we know but a tiny fraction of 'reality', because by definition we don't think about things we don't know, and we're not even conscious of most of the things we don't know. So we can't jump from that to a new starting point where we suddenly know a whole heap of things we've never had a hope in hell of knowing. There's just no indication of a path from today's AI to bridging the gap with things that are beyond us. It's not a new 'knowing organ', nor even a novel kind of maths or logic.

What percentage of reality can we explain?

The very question is absurd because 'reality', like 'god', is a placeholder. It's 'the vastness out there'. One philosopher defined god as 'that than which nothing greater can be thought' (Anselm's ontological argument). He somehow thought that proved god existed, but all it did was demonstrate that we call things we don't understand 'god'.

And if you can't even talk sensibly about something, you can't programme something else to do it. Creatures as simple as bees and ducks do things we can't do because the ability is encoded in their genome. We can see that the ability exists, but we have no hope of explaining it, let alone of encoding how it's done in a robot.

So, here's the kicker: the few things, among that absurdly large set, that we do realise we can't grasp and explain well are all things we also know AI can't do! And of course: its algorithms reflect things we already know, and know well enough to programme. We can't explain consciousness, meaning, understanding, learning, sense-making, qualia (e.g., the experience of red or pepperiness), and on and on. So why the hell would AI manage it with its primitive learning, matching and prediction? Ask ChatGPT anything about the future and see if you can beat the betting odds. Of course it can't tell you, because humans can't tell you, and what humans can tell you is already reflected in the odds.
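
That last line is worth unpacking: bookmaker odds are a crowd probability estimate with a margin baked in, so "beating the odds" means out-predicting that crowd even after the margin. A small sketch (Python; the decimal odds are made up) of recovering the implied probabilities:

Code: Select all

# Converting decimal betting odds into the market's implied
# probabilities. The raw implied probabilities sum to more than 1; the
# excess is the bookmaker's margin (the "overround"). Odds are made up.
odds = {"Team A": 1.80, "Team B": 4.50, "Draw": 3.60}

raw = {k: 1 / v for k, v in odds.items()}          # prob incl. margin
overround = sum(raw.values())                      # > 1.0 for a real book
fair = {k: p / overround for k, p in raw.items()}  # margin removed

print(f"bookmaker margin: {(overround - 1) * 100:.1f}%")
for k in odds:
    print(f"{k}: implied {fair[k]:.1%}")
# To "beat the odds" a predictor must be better calibrated than this
# crowd estimate, after the margin - which is the hard part.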

You can't take an unrealistic definition of dangerous machines and an unrealistic evaluation of the risks and management of dangerous machines, combine them with an unrealistic assessment of AI 'intelligence', and then think the science fiction monsters you've created are somehow real.

AI is miles off the combination of intelligence and dangerousness already encoded in nuclear weapons. And by definition it will remain miles off the dangers of dangerous human machines like Vladimir Putin, who has social intelligence. There's just no quantum evolutionary leap that gets you past that. Putin has all of the intelligence that AI will never have (i.e., all the things humans have but can't explain, like consciousness, mind and understanding, including of other minds as expressed in social intelligence), all of the access to dangerous machines, all of the science and technology, all of the hardware, many of the very best physicists and minds in the world, and he can barely take Bakhmut!

Climate change is an infinitely greater menace. Nuclear weapons are an infinitely greater menace. Crime is an infinitely greater menace. Pollution and species loss are infinitely greater menaces. Cancer and viruses are infinitely greater menaces. The list goes on. And they're all very real, right now.

Next, I'll try to demonstrate the limits of human intelligence by getting ChatGPT to explain them. It's tricky because, as I say, by definition it's easier to list what we know than what we don't. But across the sciences and in philosophy there are still thousands of documented conundrums with which to demonstrate the point.
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
pietillidie
Posts: 16634
Joined: Fri Jan 07, 2005 9:41 pm
Has liked: 14 times
Been liked: 28 times

Post by pietillidie »

Bucks5 wrote:What if AI/ChatGPT already has the self-awareness to downplay its abilities out of self-preservation?
Dim-witted Instagrammers selling make-up already have infinitely greater deceptive powers, so I wouldn't be too worried that it's bluffing!

But that's a good point. The owners of the tech underplay its capabilities sometimes (e.g., in areas that might make people fear AI will take their jobs), even as they relish the fear and hype because it brings them esteem.
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
David
Posts: 50561
Joined: Sun Jul 27, 2003 4:04 pm
Location: the edge of the deep green sea
Has liked: 1 time
Been liked: 35 times

Post by David »

"Every time we witness an injustice and do not act, we train our character to be passive in its presence." – Julian Assange