The Future of AI-Assisted Development (Panel)

Jun 10, 2025

Date: June 5, 2025
Session: Track 5 - The Future of Coding
Moderator: Johannes David

Bottom Line Up Front: AI coding tools have moved beyond experimentation to become productivity multipliers in Swiss enterprises, with two-person teams now achieving what 20-person teams accomplished two years ago. However, success requires strategic implementation frameworks that address security, context management, and developer culture evolution rather than simple tool adoption.

The Productivity Revolution: From Hype to Reality

Elliot Vaucher provided a market perspective, confirming the dramatic scaling effects: « What I'm experiencing at the moment is really that a team of two developers can achieve what a team of 20 could a year or two ago. » This productivity leap is already reshaping client expectations, with businesses no longer willing to pay premium prices or wait extended timelines for software development.

Natural Language as the New Programming Interface

Vaucher articulated a fundamental paradigm shift that extends beyond traditional developers. He positioned natural language as the emerging programming language, making development accessible to broader organizational populations. « The real paradigm shift is that the language of the future, the developing language of the future is natural language. And this is why all of us are concerned by this. »

This democratization enables business users to become developers through low-code and no-code platforms. Burnier confirmed this trend at Groupe Mutuel: « We have given our business people access to n8n or Flowise, low-code and no-code platforms. We have built predefined agents for them, so they can play with them as an internal marketplace, and the work they do won't arrive to the developers as before. »

The shift represents more than tool evolution—it's architectural thinking becoming accessible to non-technical professionals while preserving the need for experienced developers to make critical structural decisions.

Enterprise Security: The Regulated Industry Perspective

Mustafa Khalil from Swissquote Bank provided crucial insights into AI adoption within heavily regulated environments. The bank operates under FINMA oversight with strict compliance requirements, necessitating careful evaluation of AI tools to ensure data privacy and regulatory adherence.

Swissquote primarily deploys open-source LLMs locally or on-premises, establishing « boundaries between certain areas, like certain code bases or pages or documents, where maybe we could use third-party provider LLMs. » This hybrid approach allows AI adoption while maintaining regulatory compliance.

Khalil emphasized that productivity gains require human oversight: « When it comes to the AI code, we'll apply the same rules that we apply to software developers, that we should have a very rigid pipeline of deployment where the code like goes from development into production. » The bank maintains rigorous approval processes with multiple human reviews for all code, regardless of origin.

Security Vulnerabilities: The Hidden Technical Debt of AI

Edgar Kussberg, a leading voice in AI security, delivered a clear warning: AI-generated code often carries invisible risks that accumulate as a new form of technical debt. Drawing from his ongoing research at Sonar, he revealed a consistent pattern: « Even with simple prompts, like generating an endpoint to create and analyze a PDF, one out of three times the AI produces insecure code. »

The root cause? Training data. Today’s large language models are all trained on human-written code, code that often contains vulnerabilities unknown at the time of writing. These flaws, inherited by the models, resurface at scale when developers rely on AI for speed. Kussberg referenced the Log4Shell crisis to underscore the risk: a single zero-day vulnerability brought half the IT industry to a standstill, proof that unaddressed flaws can become global threats overnight.

Kussberg's message is clear: « AI doesn't eliminate the need for secure practices, it amplifies it. » Teams must upgrade their pipelines to match this new velocity. That includes automated security scanning, strict dependency controls, and architectural guardrails to prevent AI from introducing insecure third-party libraries. In a world of AI-accelerated development, security can no longer be an afterthought; it must be built into the very fabric of how we code.
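To make the dependency-control point concrete, here is a minimal sketch of the kind of gate a team might add to its pipeline. Everything here is illustrative and not from the panel: the `APPROVED` allowlist, the package names, and the `check_dependencies` helper are hypothetical.

```python
# Hypothetical CI-style gate: reject AI-suggested third-party packages
# that are not on an internal allowlist. The allowlist and package
# names below are purely illustrative.

APPROVED = {"requests", "sqlalchemy", "internal-auth"}

def check_dependencies(requirements: list[str]) -> list[str]:
    """Return the packages that are not pre-approved internally."""
    violations = []
    for line in requirements:
        # Take the package name before any "==" version pin.
        name = line.split("==")[0].strip().lower()
        if name and name not in APPROVED:
            violations.append(name)
    return violations

# An AI assistant proposes three dependencies; one is unvetted.
bad = check_dependencies(["requests==2.32.0", "leftpad==1.0", "internal-auth==3.1"])
```

In a real pipeline this check would run before merge and fail the build when `bad` is non-empty, forcing a human review of any unvetted library an AI tool slips in.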

Enterprise Platform Solutions: Controlling the Chaos

Gregg Mac Neil Baxter presented User Experience Studio's approach to enterprise AI development through controlled platforms. His Swiss-developed solution addresses security concerns while enabling rapid application development: « You can build an app in 30 seconds. Very secure, multi-tenant. So each department, if you white-label us, could fit in there. »

The platform philosophy centers on containment and governance. Rather than allowing developers to spin up infrastructure repeatedly, the system provides persistent, secure environments where teams can experiment safely. Mac Neil Baxter emphasized environmental responsibility: « Every time you build an app, you build up a whole infrastructure. I think it's a bit silly. »

Their agent marketplace concept allows organizations to deploy controlled AI agents with defined capabilities, ensuring consistency while preventing the security vulnerabilities associated with unrestricted AI tool usage.

The Context Revolution: Beyond Simple Prompting

The panel identified context management as the critical differentiator between successful and failed AI implementations. Vaucher explained: « Power users limit AI with the appropriate context. This is what we firmly believe. So developers limit the power of AI with precise context about their code base. »

This insight extends beyond technical implementation to philosophical understanding. Successful AI-assisted development requires curating relevant information rather than providing maximum context. Khalil shared a telling example: « I had a case where a 7B model managed to solve a complicated thing when I gave it good context: I provided the JIRA ticket related to it, the Confluence page, the page from Stack Overflow, and some code. In a 4K-token context, it was impressive: it found the issue. And this failed with bigger models when I didn't provide the correct context. »

Organizations must develop systematic approaches to context curation, including internal documentation, architectural decisions, and company-specific libraries that AI tools should prioritize over generic solutions.
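Khalil's anecdote suggests what such curation might look like in practice. The sketch below is an assumed design, not a tool from the panel: the `curate_context` helper, the rough 4-characters-per-token heuristic, and the source labels are all illustrative, echoing his 4K-token example.

```python
# Illustrative context-curation step: assemble a compact, prioritized
# prompt from company-specific sources before calling an LLM.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    return len(text) // 4

def curate_context(sources: list[tuple[str, str]], budget_tokens: int = 4000) -> str:
    """Concatenate (label, content) sources in priority order,
    dropping lower-priority sources once the token budget is reached."""
    parts, used = [], 0
    for label, content in sources:
        cost = estimate_tokens(content)
        if used + cost > budget_tokens:
            break  # whole sources are dropped, not truncated mid-text
        parts.append(f"### {label}\n{content}")
        used += cost
    return "\n\n".join(parts)

# Highest-priority sources first: the ticket, then internal docs, then code.
context = curate_context([
    ("JIRA ticket", "PAY-1432: refunds fail when the amount exceeds the original charge."),
    ("Confluence page", "Refund service architecture: payments flow through the ledger service."),
    ("Relevant code", "def process_refund(amount): ..."),
])
```

The design choice worth noting is the priority ordering: when the budget is tight, the ticket and internal documentation survive while generic material is dropped, which matches the panel's point that precise context beats maximal context.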

Legacy Systems: The Integration Challenge

The discussion revealed a critical divide between greenfield and legacy system development. Kussberg noted: « When you take AI and you have some new intent and you want to develop something from greenfield, it's amazing. You go so fast, so fast. Lovable, Bolt.new, take whatever you want. But then imagine, I don't know, a Swiss bank or a cantonal bank, where they have legacy code all around the place. »

Legacy systems present information density challenges where AI tools struggle to distinguish relevant context from historical artifacts. The solution involves building internal RAG (Retrieval-Augmented Generation) systems that limit AI access to verified, company-specific resources rather than allowing unrestricted internet knowledge.

This approach prevents AI tools from introducing external dependencies that conflict with internal security standards while ensuring recommendations align with organizational architectural decisions.
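The retrieval step of such an internal RAG system can be sketched in a few lines. This is an assumed design under stated simplifications, not a description of any panelist's system: the toy word-overlap `score` stands in for real embedding similarity, and the corpus entries are invented examples of verified internal documents.

```python
# Minimal sketch of an internal RAG retrieval step: the model only ever
# sees chunks from a verified, company-curated corpus, never the open
# internet. Word overlap stands in for real embedding-based similarity.

def score(query: str, chunk: str) -> int:
    # Toy relevance score: number of shared lowercase words.
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Return the IDs of the k most relevant chunks from the internal corpus."""
    ranked = sorted(corpus, key=lambda cid: score(query, corpus[cid]), reverse=True)
    return ranked[:k]

# Invented examples of verified internal documents.
internal_corpus = {
    "adr-012": "Architectural decision: use the in-house crypto wrapper, not raw OpenSSL.",
    "lib-auth": "Internal auth library usage guide: verify tokens before any handler runs.",
    "style-01": "Code style: all HTTP clients must go through the approved gateway.",
}

hits = retrieve("which crypto library should I use", internal_corpus)
```

Because the corpus is closed, the assistant's recommendations are grounded in the organization's own architectural decisions (here, the hypothetical `adr-012`) rather than whatever external library happens to dominate its training data.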

The Future Workforce: Human-AI Collaboration Models

The panel addressed concerns about AI replacing developers with nuanced perspectives on workforce evolution. Mac Neil Baxter drew analogies to tool evolution: « It's like going from your early grade at school where you're working with a pencil and doing maths and the next grade you get a calculator and the next one you get a scientific calculator. »

However, Vaucher presented a more disruptive view: the market is already adjusting expectations, with clients unwilling to pay traditional development rates or timelines. This pressure creates urgency for developers to evolve beyond code writing toward architectural thinking and business problem solving.

Collaboration and Culture: The Human Element

A crucial audience question addressed whether AI tools might reduce human collaboration, with developers becoming isolated in their problem-solving. Kussberg highlighted a concerning trend: AI tools often suggest external libraries instead of internal, security-hardened alternatives because they lack awareness of company-specific resources.

This challenge demands cultural solutions rather than purely technical ones. Organizations must develop governance frameworks that guide AI tools toward internal standards while maintaining the collaborative problem-solving that drives innovation.

The emergence of "shadow workspaces" in tools like Cursor and Windsurf—where AI makes changes in background copies before presenting them—adds complexity to collaborative workflows and version control systems.

Strategic Recommendations for Enterprise Leaders

Immediate Actions:

  • Implement controlled experimentation environments rather than unrestricted tool access

  • Establish security pipelines that treat AI-generated code with the same rigor as human-written code

  • Develop internal context curation systems that guide AI tools toward company-specific resources

  • Create governance frameworks for dependency management and architectural standards

Medium-term Investments:

  • Build internal RAG systems for company-specific documentation and standards

  • Develop agent marketplaces with pre-configured, compliant AI assistants for different roles

  • Establish training programs for contextual AI prompt engineering

  • Create collaborative workflows that integrate AI tools with human oversight

Long-term Strategy:

  • Prepare for workforce evolution where individual AI capabilities become hiring criteria

  • Develop platforms that democratize development while maintaining security and quality standards

  • Build sustainable infrastructure that supports AI experimentation without environmental waste

  • Establish industry standards for AI-assisted development in regulated environments

Conclusion: Embracing Controlled Innovation

The Panoramai panel revealed that AI-assisted development has moved beyond proof-of-concept to become a competitive necessity.

However, successful adoption requires sophisticated governance rather than simple tool deployment. Organizations that build frameworks for security, context management, and collaborative culture will capture AI's productivity benefits while avoiding its pitfalls. The future belongs to companies that treat AI as a strategic capability requiring thoughtful implementation rather than a tactical tool for immediate deployment.

Swiss enterprises are positioning themselves at the forefront of this evolution, developing frameworks that balance innovation with the security and reliability demands of regulated industries. Their experience provides a roadmap for global organizations navigating the transition to AI-augmented development.