Inside the Man vs. Machine Hackathon

"Just over a hundred visitors had crowded into an office building in the Duboce Triangle neighborhood for a showdown that would pit teams armed with AI coding tools against those made up of only humans (all were asked to ditch their shoes at the door). The hackathon was dubbed "Man vs. Machine," and its goal was to test whether AI really does help people code faster-and better."
"Roughly 37 groups were randomly assigned "human" or "AI-supported." Later, an organizer told me several people dropped out after being placed on the human team. A panel of judges would rank projects based on four criteria: creativity, how useful it might be in the real world, technical impressiveness, and execution. Only six teams would make it to the demo. The winning team would earn a $12,500 cash prize and API credits from OpenAI and Anthropic. Second place would get $2,500."
METR earlier found that AI tools slowed experienced open-source developers by 19 percent, but this event differed from that study: its participants had varying levels of experience and were building new projects from event prompts. Common productivity metrics, such as pull requests or lines of code, can also be misleading.
Read at WIRED