Expanding AI use has become the top priority for technology leaders, with 73% identifying it as their primary focus in 2025, according to the second part of the 2025 Reveal Software Development Challenges Survey of 250 technology leaders, released June 19 by Infragistics. The leaders were surveyed from December 2024 to January 2025.
The data shows 75% of organizations already leveraged AI for software creation in 2024, and of those that had not, 50% plan to adopt it in 2025.
More than half of survey respondents (55%) said the main driver of AI adoption is task automation to boost productivity. Other notable ways companies are using AI in software creation include optimizing code; improving diagnostics; testing software; fixing coding errors; eliminating repetitive or administrative tasks; creating personalized customer experiences; reducing development time and improving productivity; and addressing limited resources.
Even as momentum toward AI integration continues apace, serious concerns continue to surface. More than a third (37%) of respondents flagged the risk of errors, bugs, and inefficiencies in AI-generated code—issues that could compromise reliability and performance. Another 37% highlighted the potential for security vulnerabilities, underscoring the need for robust safeguards as AI becomes more deeply embedded in critical systems.
“AI is accelerating innovation across the software development lifecycle, streamlining tasks from code generation to testing and deployment,” said Casey Ciniello, Reveal and Slingshot senior product manager, Infragistics. “However, integrating AI into development workflows introduces critical considerations—particularly around accuracy, data integrity, security vulnerabilities, and compliance. Organizations must implement governance frameworks and technical safeguards to ensure safe, strategic implementation.”
The tech industry is acutely aware of the complex challenges posed by AI. According to the Reveal survey, data privacy emerged as the most pressing concern, with 78% of respondents citing it as their top issue. Transparency (57%) and data safety (55%) followed closely, reflecting widespread unease about how AI systems are developed and deployed.
In response, companies are taking action. More than 60% are implementing ethical AI guidelines, 59% are adopting formal privacy policies to protect against misuse, and 54% are introducing protections for sensitive information.
As AI becomes more deeply embedded in software development, ethical concerns are mounting. The survey reveals that privacy violations (38%), bias in AI models (37%), and the deployment of AI applications that have not been securely tested (36%) are among the most pressing concerns for organizations in 2025. These risks not only threaten user trust but also expose companies to legal and reputational fallout. As AI and software development continue to converge, organizations must take a proactive approach to governance and risk mitigation.
The data shows that, contrary to fears that AI would trigger mass layoffs, its implementation is actually creating jobs. Among companies that have adopted AI, 55% reported job creation, with 63% of those adding up to 25 positions. AI is, in effect, reshaping roles and creating opportunities in a rapidly evolving tech environment.