I tried vibe coding for free to save $1,200 a year - and it was a total disaster
Briefly

"After using the free and local (as in on my own computer) combination of Goose, Ollama, and Qwen3-coder to build a simple WordPress plugin, I had high hopes that I might be able to give up my expensive Claude Code subscription and use a free alternative. To be fair, back when I was working on the test plugin, it took Goose five tries to get it right (more than any other AI), but it got there eventually."
"But I prefer a hands-on approach, so I always apply my DPQ benchmark as a top-tier test. What is DPQ, you ask? It's the David Patience Quotient benchmark, and it works this way. If, after spending a few days using a model or AI solution, I reach the "frak this" stage, then the model has failed the DPQ. In previous months, both Claude Code and OpenAI Codex have passed the DPQ."
A free, local setup using Goose, Ollama, and Qwen3-coder was tested to create a simple WordPress plugin, with the aim of replacing an expensive Claude Code subscription with on-device models. While frontier models back their performance claims with benchmarks like SWE-Bench Pro and GDPval-AA, the author applied a hands-on DPQ (David Patience Quotient) benchmark, which judges a model by whether it provokes abandonment after a few days of use. Goose required multiple tries on smaller tasks and ultimately failed the DPQ on a larger project, making random, unexplained edits that degraded the code with each iteration. The inability to share screenshots made resolving Xcode errors slow, and the time costs outweighed the subscription savings.
Read at ZDNET