Agentic AI refers to coding assistants that collaborate inside real codebases rather than acting as autocomplete. Tools like Cursor, Claude Code, and LangChain can operate within established projects, run tests, and make changes, but they turn brittle on vague test-generation requests and can amplify technical debt and inconsistent design patterns. Effective use requires clear project structure, tests, formatters, and established framework conventions such as those of Django or FastAPI. A sane git workflow for AI-sized diffs and strong human review are necessary to contain errors and maintain design coherence. Open source agents are evolving toward a model in which humans edit and oversee systems instead of only typing code.
Agentic AI programming is what happens when coding assistants stop acting like autocomplete and start collaborating on real work. In this episode, we cut through the hype and incentives to define "agentic," then get hands-on with how tools like Cursor, Claude Code, and LangChain actually behave inside an established codebase. Our guest, Matt Makai, now VP of Developer Relations at DigitalOcean and creator of Full Stack Python and Plushcap, shares hard-won tactics.
We unpack what breaks, from brittle "generate a bunch of tests" requests to agents amplifying technical debt and uneven design patterns. We also discuss a sane git workflow for AI-sized diffs. You'll hear practical Claude tips, why developers write more bugs when typing less, and where open source agents are headed. Hint: The destination is humans as editors of systems, not just typists of code.