🦺 Science Fiction Spotlight

I, Robot and AI Ethics

Welcome to the third edition of Safe For Work. This week we explore themes from Isaac Asimov’s I, Robot related to robotics and artificial intelligence.

Dr. Susan Calvin, the world’s foremost expert on robot psychology, firmly believed in the goodness of robots. In her view, the Machines being in charge meant there would be no more war. And these robots, following the First Law of Robotics, would ensure the safety and prosperity of all humanity. Dr. Calvin had more faith in the goodness of robots than in the humans who developed them.

What do you think? Can artificial intelligence save us? Will it make work safe, curb carbon pollution, and end war? Or will it enslave us all and ultimately end human life on Earth?

And still I see no changes, can't a brother get a little peace?
There's war in the streets and war in the Middle East

Tupac, “Changes”

In 1941 Isaac Asimov began writing a series of short stories, published in 1950 as the collection I, Robot. It portends a future replete with robots and artificial intelligence. Re-reading it now, I cringe slightly at the opening chapter, about a little girl whose toy best friend, Robbie, has made her lose all interest in interacting with her parents or making human friends. It makes me think of my kids’ burgeoning attachment to mobile devices, and of our newly arrived robot vacuum sweeping the floors, taking over one of their household chores.

Some of the future Asimov predicted is here. How much more of it will arrive?

There is a growing consensus that the machines envisioned by Asimov are now inevitable. The timeline is murky, but artificial general intelligence, or machines that are ‘smarter’ than humans, is on its way. To some, it feels as if they are coming from another galaxy, but they are being built much the way Asimov saw it: by competing corporations and governments. Many are building with the now non-fictional laws of robotics posited by Asimov himself:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  • A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
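Taken together, the laws form a strict priority ordering: each law yields to the ones above it. Purely as an illustration, that ordering can be sketched in code. Everything here (`Action`, `permitted`, the flags) is invented for this sketch and does not come from any real robotics system:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would carrying out this action injure a human?
    prevents_harm: bool     # would it prevent harm to a human?
    ordered_by_human: bool  # was it commanded by a human?
    endangers_robot: bool   # does it put the robot itself at risk?

def permitted(action: Action, inaction_harms_human: bool) -> bool:
    """Return True if the action is allowed under the Three Laws."""
    # First Law: never injure a human being...
    if action.harms_human:
        return False
    # ...nor, through inaction, allow one to come to harm. When a human is
    # in danger, only harm-preventing actions are allowed, overriding the
    # Second and Third Laws below.
    if inaction_harms_human:
        return action.prevents_harm
    # Second Law: obey human orders (already subordinate to the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot
```

Note that the Third Law check is only ever reached when no human is harmed, endangered, or giving orders, which is exactly the subordination Asimov specified. The stories' drama comes from cases this tidy ordering cannot resolve.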

As the story unfolds, it becomes clear that the laws do not work even in the fictional realm: the humans building the robots make errors, and there are engineering failures and unknown unknowns.

This is true in today’s very real development of robots and artificial intelligence. Even the advocates of safeguards acknowledge they can be easily bypassed. As in society in general, there is significant debate about what values and ethics should be built into AI development. And in life imitating art, there are many organizations reminiscent of the fictional “Society for Humanity,” trying to limit or reverse the development of generative AI.

Those opposed to the development of AI are unlikely to succeed, so it is foolish to put our heads in the sand. There are already significant advances in safety and productivity available from AI. Some jobs and decisions are already best left to machines. Our children need to be learning about programming and AI in school. And as we incorporate these tools into our daily lives, we also need strategies to respond when things go awry. We think the extreme optimists and the alarmists are both misguided. As with most technological advances, there will be some disruption and some new opportunities.

It's been a long
A long time coming, but I know
A change gon' come
Oh yes, it will

Sam Cooke

“As we integrate these powerful tools into our schools and workplaces, we must urgently equip students and workers with the skills, knowledge, and competencies to harness AI responsibly and effectively. Our education system must adapt to prepare a workforce that can leverage AI to its full potential while safeguarding against its risk.”

Guidelines for AI Integration Throughout Education in the Commonwealth of Virginia

In Safety News

Typically, this section is for news or emerging research that will impact safety. Today, we are sharing some research that can help you think about AI and hopefully inform your ability to create your own policies, approaches and safeguards.

  • Is Your Machine Better than You? AI, decision-making and algorithm aversion

  • Safe For Work Internet Companions? While some may joke about the future of robot companions, new mental health research is demonstrating the effectiveness of GPT-3-enabled chatbots.

  • Deepfakes are not just a political problem. How will you deal with a fake video of your company CEO discussing a problem that doesn’t actually exist just ahead of earnings season?

  • Teaching and Learning in an AI World: Excellent research and writing from an education professional exploring AI and how it can be used to amplify human intelligence and capability

See you next week as we explore the history of the telegraph, how it revolutionized long-distance communication, and what it can teach us about workplace culture in a time of increasingly remote work.

Stay safe.

P.S. Yes, the grammar police will point out that using “their” and “they” for inanimate objects is unusual. I did it almost unthinkingly until editing. The use remains because I think it is foreshadowing.
