The Smart Way to Bring AI into Software Development

Posted by Martin Pownall in Insights

Across the industry, we’re seeing more organisations exploring AI across the development lifecycle, and rightly so. But at IDS, our job is to help customers ask the right questions before they jump in head-first:

• How will AI improve your processes?

• How will it impact your users?

• And critically, how will you stay in control?

A stark reminder of the risks of uncritical adoption came when a high-profile AI-driven app development platform, once backed by major investors and valued at over $1 billion, collapsed unexpectedly. Many customers were left unable to access their applications, code repositories, and data. This exposed major vulnerabilities and highlighted what can go wrong when an AI service is oversold and under-delivers on its technical capabilities.

Reporting on the collapse suggested that the company’s “AI” was heavily dependent on human developers behind the scenes – a misalignment between product claims and engineering reality. This resulted in inflated revenue projections and a business model that was ultimately unsustainable.

Relying on a single third-party AI platform creates serious vendor lock-in. If that provider changes strategy, raises prices, suffers financial issues, or shuts down, your entire development pipeline – and your deployed applications – can be at risk. Migrating away from proprietary tools late in the game can be costly and disruptive.

Not all AI claims are created equal. Some vendors exaggerate to sell products or attract investment, a phenomenon now dubbed “AI-washing.” Technical due diligence is essential; organisations must look beyond marketing to understand the real depth of the AI being offered.

AI can generate code, suggest improvements, and optimise workflows – but over-reliance can erode internal skills. Critical thinking, architectural understanding, and hands-on debugging remain core competencies for resilient development teams. A hybrid approach, where AI augments rather than replaces human expertise, delivers the best outcomes.

Entrusting critical data or pipelines to external AI services without proper governance exposes companies to data security, compliance, and business continuity risks. You must be clear about data ownership, portability, and exit strategies before integrating third-party platforms.

For sustained success and resilience, we always advise our clients to maintain ownership of their core infrastructure and data, whether that’s private cloud, hybrid cloud, or a multi-cloud strategy.

Here’s what that means in practice:

  • Business Continuity: You’re protected if an AI partner changes direction or ceases operations.
  • Data Control: Keeping sensitive data in your environment reduces dependency on third-party security and helps meet regulatory requirements (e.g., GDPR, ISO/IEC 27001).
  • Smarter Integration: Use AI to complement your team, not replace them, and plug in best-in-class tools as your needs evolve.
  • True Scalability: Scale resources at your pace, without being constrained by one vendor’s offerings, licensing structures, or roadmap.

There’s also a fast-maturing ecosystem of open-source models and standards (e.g., LLMs deployed within your own VPC, open-source toolchains for model evaluation and governance, rising adoption of MLOps best practices) that can help organisations harness AI safely and responsibly.
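To make the "LLMs deployed within your own VPC" idea concrete, here is a minimal sketch of what calling a self-hosted model can look like. It assumes a hypothetical internal endpoint exposing an OpenAI-compatible chat-completions API (a common convention for open-source serving stacks such as vLLM); the URL, model name, and prompt are illustrative, not a real deployment.

```python
import json
import urllib.request

# Hypothetical internal endpoint: a self-hosted open-source model served
# behind an OpenAI-compatible API inside your own VPC. The URL and model
# name below are placeholders for illustration only.
ENDPOINT = "http://llm.internal.example:8000/v1/chat/completions"
MODEL = "my-open-source-model"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completion request; the prompt never leaves your network."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Summarise this pull request for the release notes.")
# urllib.request.urlopen(req) would send the call; it is not executed here
# because the endpoint above is hypothetical.
```

The point of the sketch is ownership: the same request shape works against any compliant serving layer, so you can swap models or providers without rewriting your pipeline.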

AI is here to stay, and it offers tremendous potential, from accelerating delivery to improving quality and innovation. But the foundation matters. Use AI to support your development teams, improve efficiency, and speed up delivery – but stay in control of your infrastructure and data.

If you’re shaping your 2026 roadmap, let’s talk about how we can help you get there.

Let’s talk →