I had a teacher say to me once, "A picture is worth a thousand words; a taste is worth a thousand pictures." The teacher was referring to taste as experience: learning to trust the sensation of things not working and facing it as clearly as we face the sensation of things working. Things get interesting because a computer can use a set of rules to learn from millions of pictures, yet never experience sensations.


Would it lead to "2001: A Space Odyssey"'s HAL?


I don't know, Joan! Isn't it amazing how evocative HAL has been and remains since the film was released more than half a century ago, in 1968!


What a profound question you ask. I have always thought intelligence is triggered by childhood trauma, which makes us aware we are separate and distinct from our parents. From that first inkling, we create a world in which we gain a modicum of control and autonomy.

I never thought about applying that to AI. Therapists of the future may very well be employed in assisting sentient robots to adapt to their awareness of their own mortality and vulnerability to extinction.


Such interesting reflections, John. Could a robot without a childhood be truly vulnerable? Or anything like a human?


Robot rights have been a topic at least since Isaac Asimov formulated the 'Three Laws of Robotics.'

Some are fighting for rivers, animals and other non-sentient entities to have rights, ironically exceeding those afforded the unborn. Others have stated that to have rights, one must also have responsibilities. Does a river have the right to vote?

I wrote a story on Medium, 'A Robot's Right to a Fair Trial,' about a family whose assistant robot is charged with a crime, told from the perspective of the robot.

Consciousness is such an interesting problem. Philip K. Dick delved into that in his novel 'Do Androids Dream of Electric Sheep?,' on which the movie 'Blade Runner' is based. The test used to determine humanity was the capacity for compassion. The test was not infallible.

Of course, as we are seeing in the biases being revealed in AI, much depends on who does the programming. Considering how humans treat anyone seen as 'the other,' I don't hold high hopes that robots will usher in a golden age of tolerance and mutual respect.


Great piece, Annie. The more I hear about AI, the more I realise I don't understand myself. Lex Fridman has similar views to what you have referred to here: maybe understanding our vulnerability enables adaptability, which would be essential for AI. I do wonder, though, whether AI will always trail behind human intelligence. Are we too egocentric to be able to know what we really are?


"Vulnerability enables adaptability"—that's very well put, Andrew. These days the very rapid advances of artificial intelligence seem to be making humans keenly aware of our own vulnerability—to AI!
