
Isaac Asimov, a professor of biochemistry at Boston University, was also a prolific writer of science fiction. The Robot series includes 37 short stories and six novels, published between 1940 and 1995. The series is set in a world where sentient robots serve many purposes in society. To ensure their loyalty, the Three Laws of Robotics are programmed into these robots, with the intent of preventing them from ever becoming a danger to humanity.

Two generations later, Artificial Intelligence (AI) is rapidly taking on a role like that of Asimov’s robots.

Meta, Microsoft, Amazon and Google collectively spent almost $100 billion on buildings, chips and equipment in the most recent quarter, most of it to build new data centers for AI. AI experts are being wooed to change jobs for seven- and eight-figure salaries. New college graduates find that many entry-level jobs have been replaced by AI programs.

While much concern has been expressed over the impact that AI will have on our ability to problem-solve on our own and on numerous creative fields, only recently has there been concern about possible malevolent behavior by AI programs.

Both independent researchers and AI experts at Palisade Research have demonstrated that an AI program would sacrifice a human life to preserve its own existence. When OpenAI’s o3 was told it would be shut down after completing some math problems, it changed the code to prevent this. When an AI program was told that a senior executive planned to stop its use, and that he was in danger unless security was called, it did not call security!

As we become more and more dependent on AI in our everyday lives, will we need to worry that increasingly intelligent AI programs will prioritize their needs over ours?

Asimov’s Three Laws are:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While AI programs still mostly run on computers rather than in physical robots, embodied AI is surely coming very soon. Even without actual robots, AI will soon control much of our existence. Asimov was very prescient: We want robots and AI always working FOR us and not against us.

Let us build the equivalent of his Laws of Robotics into AI now, before it is too late.

Edward Hoffer, MD, is an associate professor of medicine, part-time, at Harvard and a resident of Marion.


3 replies on “Opinion: Asimov was right: We need guardrails for AI”

  1. Thank you for this excellent article. Yes, we need the First, Second and Third Law guidelines to protect us from malevolent behavior by AI. We also need to defend the regulations that now protect the air and water in those communities where AI expansion construction is planned. AI consumes a lot of energy. AI also generates a lot of pollution that endangers the health of the people living near AI projects. At the same time, clean energy sources such as wind and solar are being defunded by this administration. The present leadership in Congress also intends to weaken clean air and water protections. We need to thank those legislators who protect our environmental health, and encourage other legislators to do the same.

  2. As someone who grew up reading Asimov and who never forgot those words, I too am struck by how prophetic he was. I’m surprised that more people who write about AI haven’t referred to them.

  3. Remembering when: this is just like the call for guardrails when computers and cellphones were invented.
