Emmett Shear on AI Alignment, Agency, and Raising a New Intelligence
Briefly
"In practice, most alignment efforts fall into two buckets: ensuring obedience ("do what I tell you") and enforcing social norms ("don't do bad things"). These approaches treat alignment as something imposed externally. According to Emmett Shear, that's not alignment - it's control. Alignment only makes sense in the context of multiple agents. It's not about restricting behavior, but about agents being oriented toward the same goal."
"To understand alignment, Emmett Shear starts with a simple definition of agency. An agent observes the world, orients itself within it, decides on a course of action, acts, and then observes the result. This observe-orient-decide-act cycle - often called an OODA loop - implies goal pursuit. Once you accept that both humans and AI systems are agents in this sense, alignment becomes a question of how their goals relate"
Alignment means agents orienting toward the same goals, not one agent constraining another. Common approaches that prioritize obedience or externally enforced social norms equate alignment with control and miss its relational nature: alignment only holds among multiple agents. Because goals evolve as agents learn and encounter new contexts, alignment is an observable, temporary state that must be continually renegotiated. Agency can be modeled as an observe-orient-decide-act (OODA) loop, which implies goal pursuit. Once both humans and AI systems operate as agents, alignment reduces to how their goals relate and coordinate; alignment work must track capability, because fixed guardrails cannot keep goals aligned over the long term.
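
To make the OODA framing concrete, here is a minimal sketch of an agent running an observe-orient-decide-act loop in pursuit of a goal. All names here (Thermostat, run_ooda, the setpoint goal) are illustrative assumptions, not from the article:

```python
import random


class Thermostat:
    """A toy agent whose goal is to keep temperature near a setpoint."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # the goal the loop is oriented toward

    def observe(self, world: dict) -> float:
        # Observe: read the current state of the world.
        return world["temperature"]

    def orient(self, temperature: float) -> float:
        # Orient: interpret the observation relative to the goal.
        return temperature - self.setpoint  # positive = too warm

    def decide(self, error: float) -> str:
        # Decide: choose an action that moves the world toward the goal.
        if error > 0.5:
            return "cool"
        if error < -0.5:
            return "heat"
        return "idle"

    def act(self, action: str, world: dict) -> None:
        # Act: change the world; the next observation closes the loop.
        delta = {"cool": -1.0, "heat": 1.0, "idle": 0.0}[action]
        world["temperature"] += delta + random.uniform(-0.2, 0.2)


def run_ooda(agent: Thermostat, world: dict, steps: int = 10) -> None:
    # Each iteration is one full OODA cycle; acting feeds the next observation.
    for step in range(steps):
        obs = agent.observe(world)
        error = agent.orient(obs)
        action = agent.decide(error)
        agent.act(action, world)
        print(f"step {step}: temp={obs:.1f} error={error:+.1f} action={action}")


if __name__ == "__main__":
    run_ooda(Thermostat(setpoint=21.0), {"temperature": 25.0})
```

Even in this toy form, the loop makes the article's point visible: the goal lives inside the agent's orient step, so two such agents are aligned when their setpoints agree, not when one is merely constrained by the other.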
Read at Medium