QCon AI New York 2025: AI Works, PRs Don't: How AI Is Breaking the SDLC and What To Do About It
Briefly

"An AI-assisted coding study led by Hao He, a Ph.D. student in software engineering at Carnegie Mellon University, examined the long-term impact of Cursor, an agentic AI coding assistant, across more than 2,000 open-source software projects, comparing projects that adopted Cursor with a matched set of projects that did not. The study concluded that Cursor delivered only a short-term increase in development velocity, while producing a persistent increase in static analysis warnings and code complexity that undermined long-term velocity and software quality."
"Regarding code reviews, the volume of code additions can typically be 25 times larger than the volume of code deletions, creating a review challenge for any organization. According to the State of Code Review report conducted by Graphite, small organizations take approximately four hours to merge a pull request, compared with approximately 13 hours at larger organizations. However, this discrepancy was partly attributed to small organizations being more than four times as likely to skip a formal code review altogether."
AI integration into development workflows has driven substantial growth in code generation and short-term development velocity. A large CMU study of over 2,000 open-source projects, comparing Cursor-enabled projects to matched controls, found increased static analysis warnings and code complexity alongside temporary gains in velocity and quality, which lasted about one month before velocity declined. DORA's study labeled AI-assisted coding an amplifier that increases velocity but also raises instability. Code review workloads frequently show additions at roughly 25 times the volume of deletions. Graphite's report found that small organizations merge pull requests much faster, partly because they skip formal reviews more often.
Read at InfoQ