
"A survey of 700 developers and engineering leaders published this week finds 89% have seen an improvement in the productivity metrics their organization tracks following the adoption of artificial intelligence (AI) tools and platforms, with 81% noting that the amount of time spent reviewing code has increased. However, just under a third of their day is now consumed by AI-related tasks that existing metrics don't track."
"Conducted by Harness, a full 94% said technical debt, validation time, and developer burnout are not being tracked by existing productivity metrics. Specific activities not being tracked include time spent reviewing AI code for accuracy (53%), fixing subtle bugs from AI code (52%), explaining AI code to teammates (48%) and context switching between tools (45%), the survey finds. Only 38% of respondents said their organization is tracking the time spent reviewing code generated by AI tools."
"There also needs to be a greater appreciation for the cognitive load that managing a small army of AI agents inevitably adds, noted Stuart. Additionally, software engineering teams need to consider which AI model might be better cost-effectively used to automate a task, versus always using the latest version of an AI model that, as it becomes more advanced, also becomes more costly to employ, noted Stuart."
"Software engineering teams should also compare the various prompts being used to determine which ones consistently work best for specific tasks, he added. Unfortunately, too many organizations are simply tracking the total number of tokens consumed by application develop"
A survey of 700 developers and engineering leaders reports that 89% have seen improvements in the productivity metrics their organizations track after adopting AI tools and platforms, and 81% report that time spent reviewing code has increased. Just under a third of developers' day is now consumed by AI-related tasks that existing metrics do not track. A full 94% say technical debt, validation time, and developer burnout are not tracked by existing productivity metrics. Untracked activities include reviewing AI-generated code for accuracy (53%), fixing subtle bugs from AI code (52%), explaining AI code to teammates (48%), and context switching between tools (45%); only 38% track time spent reviewing AI-generated code. Organizations are urged to revisit productivity metrics, including token usage and cost, cognitive load from AI agents, model selection for cost-effectiveness, and prompt comparisons for consistent results.
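As an illustration of the kind of tracking the article recommends, beyond raw token counts, the sketch below records individual prompt runs and aggregates them into per-variant success rate and cost per successful run. It is a minimal, hypothetical example: the variant names, model names, and per-token prices are invented for illustration and do not come from the survey or from any vendor's actual pricing.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical per-1K-token prices; real model pricing varies by vendor.
COST_PER_1K_TOKENS = {"model-small": 0.0005, "model-large": 0.01}

@dataclass
class PromptRun:
    """One recorded use of a prompt variant against a model."""
    variant: str      # e.g. "v1-terse", "v2-with-examples" (hypothetical labels)
    model: str        # key into COST_PER_1K_TOKENS
    tokens: int       # total tokens consumed by the run
    succeeded: bool   # did the output pass review without rework?

def summarize(runs):
    """Aggregate runs into success rate and cost per success, per variant."""
    stats = defaultdict(lambda: {"runs": 0, "successes": 0, "cost": 0.0})
    for r in runs:
        s = stats[r.variant]
        s["runs"] += 1
        s["successes"] += int(r.succeeded)
        s["cost"] += r.tokens / 1000 * COST_PER_1K_TOKENS[r.model]
    return {
        variant: {
            "success_rate": s["successes"] / s["runs"],
            "cost_per_success": (s["cost"] / s["successes"])
                                if s["successes"] else float("inf"),
        }
        for variant, s in stats.items()
    }
```

Comparing cost per *successful* run, rather than total tokens alone, captures the point made above: a cheaper model or a better prompt variant can win even when it consumes more tokens overall.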
Read at DevOps.com