The paper addresses the AI shutdown problem, a long-standing challenge in AI safety. The shutdown problem asks how to design AI systems that will shut down when instructed, will not try to prevent ...
The authors argue that generative AI introduces a new class of alignment risks because interaction itself becomes a mechanism of influence. Humans adapt their behavior in response to AI outputs, ...
The Human-AI Alignment Problem
We’re now deep into the AI era, where every week brings another feature or task that AI can accomplish. But given how far down the road we already are, it’s all the more essential to zoom out and ask ...
OpenAI researchers tried to train the company's AI to stop "scheming" — a term the company defines as "when an AI behaves one way on the surface while hiding its true goals" — but their ...
The "Petri" tool deploys AI agents to evaluate frontier models, though AI's ability to discern harm remains highly imperfect. Early tests showed Claude Sonnet 4.5 and GPT-5 to be the safest. Anthropic has ...
There is an all-out global race for AI dominance. The largest and most powerful companies in the world are investing billions in unprecedented computing power. The most powerful countries are ...
The new science of "emergent misalignment" explores how seemingly benign training data — insecure code, superstitious numbers, or even extreme-sports advice — can open the door to AI's dark side. There should ...
Armin Alimardani previously held a part-time contract with OpenAI as a consultant. The organisation had no input into this piece. The views expressed are solely those of the author. In the classic ...