The questions ChatGPT shouldn’t answer

If an out-of-control trolley is racing toward four AI engineers, potentially killing them, is it ethical for AI to throw a switch so only one engineer is killed instead? | Image: Cath Virginia / The Verge

Chatbots can’t think, and increasingly I am wondering whether their makers are capable of thought either.

In mid-February, OpenAI released a document called a model spec laying out how ChatGPT is supposed to “think,” particularly about ethics. A couple of weeks later, people discovered xAI’s Grok suggesting that its owner, Elon Musk, and titular President Donald Trump deserved the death penalty. xAI’s head of engineering had to step in and fix it, substituting a response that it’s “not allowed to make that choice.” It was unusual in that someone working on AI made the right call for a change. I doubt it has set a precedent.

ChatGPT’s ethics framework was bad for my blood pressure

The fundamental question of ethics — and arguably of all philosophy — is how to live before you die. What is a good life? This is a remarkably complex question, and people have been arguing about it for a couple of thousand years now. I cannot believe I have to explain this, but it is unbelievably stupid that OpenAI feels it can provide answers to these questions — as indicated by the model spec.

ChatGPT’s ethics framework, which is probably the most extensive outline of a commercial chatb …

Read the full story at The Verge.