AI’s Double-Edged Sword: Speeding Up Code, But Amplifying Risks in Legacy Systems
A recent surge in artificial intelligence tools promises to dramatically accelerate software development, yet a new study warns that careless implementation could exacerbate existing vulnerabilities and introduce significant new risks, particularly when dealing with older, complex systems. A McKinsey & Company report found programmers generating code up to 45% faster with the assistance of generative AI, but experts caution that speed isn’t the only metric that matters.
At first glance, AI appears to be a software developer’s dream. However, if not used strategically, it can quickly become a developer’s nightmare, according to Edward Anderson Jr., professor of information, risk, and operations management at Texas McCombs. The core of the problem lies in the application of AI to “legacy systems”—those burdened with outdated software, often riddled with makeshift solutions and suboptimal programming practices.
These technical shortcomings aren’t merely inconveniences; they represent a substantial economic drag. The Consortium for Information & Software Quality estimates that such technical debt costs U.S. companies a staggering $1.5 trillion annually in lost productivity and increased exposure to cybercrime. The consequences can even be catastrophic, as demonstrated by the 2022 Southwest Airlines system crash, which stranded passengers on nearly 17,000 flights due to a failure in its 20-year-old scheduling system.
The danger, Anderson warns, is that AI, when used to patch these fragile systems, can actually worsen their condition. “AI trains on existing code, with all its defects,” he explains. “Thus, it tends to create more technical debt per line of code than trained, experienced human software engineers would.” In essence, AI can automate and amplify existing flaws, creating a vicious cycle of increasing vulnerability.
To mitigate these risks, Anderson, along with researchers Geoffrey Parker of Dartmouth College and Burcu Tan of the University of New Mexico, interviewed dozens of programmers across various industries. Their research yielded several key best practices for responsible AI-assisted software development.
Prioritizing Technical Debt is Paramount
Companies often defer addressing technical debt in the rush to market, effectively “kicking the can down the road,” as Southwest Airlines did. Anderson argues that this approach is unsustainable, especially with the introduction of AI. Instead, companies should integrate the overhaul of technical debt into developers’ daily workflows, particularly when leveraging AI for repairs.
“This is about organizational processes,” Anderson stated. “If you’re going to use AI, and there’s a chance that you could be increasing the rate of technical debt generation, you’re going to have to allocate more time to doing the retirement.” This requires a fundamental shift in mindset, viewing technical debt reduction not as a reactive fix, but as a proactive component of the development process.
Establishing Clear AI Coding Guidelines
While C-suite policies may address AI usage in broad terms, specific protocols for daily software development remain largely undefined. Anderson emphasizes the need for software teams to meticulously document when and why they are utilizing AI. Crucially, a human element must remain central to the process.
“You really want to make sure you’ve got somebody who has a lot of training in software engineering and experience to catch the AI when it’s making mistakes,” he said. This human oversight serves as a critical safeguard against the propagation of errors and ensures that AI-generated code aligns with established quality standards.
Investing in Developer Training
The shrinking pool of experienced developers, coupled with an influx of newer coders, presents another challenge. Inexperienced programmers may deploy AI tools without fully understanding their limitations, particularly within complex legacy environments.
Knowledge transfer is therefore essential. Anderson suggests incorporating formal mentoring into the performance goals of senior programmers, requiring them not only to review junior developers’ code but also to provide training in the effective and responsible use of AI. “Let me be clear: I think AI is a productivity booster,” Anderson concluded. “It’s just that you have to use it thoughtfully—and give software engineers the time to do that.”
The findings of this research are detailed in "The Hidden Costs of Coding With Generative AI," published in the MIT Sloan Management Review (2025).
More information: Edward Anderson et al, The Hidden Costs of Coding With Generative AI, MIT Sloan Management Review (2025). DOI: 10.63383/hadw7619.
Provided by University of Texas at Austin.
Citation: AI is quick but risky for updating old software, researchers warn (2026, January 6) retrieved 6 January 2026 from https://techxplore.com/news/2026-01-ai-quick-risky-software.html.
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
