Edgar Kussberg

The Security Guardian

Professional Background
With over two decades in the technology space, Edgar Kussberg is a hands-on product and engineering leader known for bridging the gap between code quality, AI, and security. Currently based in Geneva, he leads Sonar’s initiatives in AI-assisted code remediation, IDE integration, and agent-driven development. Prior to Sonar, Edgar played a pivotal role at Snyk, where he defined the company’s AI strategy, launched AI engines for vulnerability detection and remediation, and led the development of security learning products used by millions of developers worldwide.

Expertise & Perspective
Edgar represents the quality- and security-first mindset for AI in development. His work at Sonar includes leading a team of researchers investigating how AI models generate code and, in particular, how this new speed of coding amplifies existing issues of code quality and security. With practical experience in both symbolic and neural AI systems, he consistently challenges assumptions around AI trustworthiness and software supply chain safety.

Key Insights from Panoramai

On the Evolution of Development:

“Early development wasn’t fun. You could spend days coding, only to be blocked by a single issue that dragged you down. With limited resources, you’d experiment endlessly, until something finally worked, though you had no idea why. That same mysterious code would often get shipped to production. Today, AI gives me the ability to instantly understand code I’ve never seen before, even in areas where I’m not an expert. That kind of speed is an incredible superpower developers now have, almost effortlessly.”

Edgar captured the uncertainty and chaos of pre-AI development, a feeling many engineers remember from the days before intelligent tooling transformed how we build and reason about software.

On Training Data Risk:

“All the LLMs we have today, open source or not, are trained on human-written code. And there just isn’t enough high-quality synthetic code yet. The problem is, security flaws usually aren’t known at the time code is written.”

Edgar emphasized a critical blind spot in current AI development: models trained on flawed, real-world code inevitably absorb and reproduce those vulnerabilities. This creates systemic security risks baked into the foundation of AI-assisted development.
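To make the blind spot concrete, consider the kind of pattern that saturates human-written training corpora: SQL assembled by string concatenation. The sketch below is illustrative rather than taken from Edgar's research, but a model that has seen this idiom thousands of times will readily suggest it again.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The pattern models learn from flawed human code: user input is
    # concatenated straight into the query, so an input such as
    # "x' OR '1'='1" returns every row (SQL injection).
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, closing the hole.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```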

On Production Security Realities:

“I don’t know how many of you remember Log4Shell, Log4j, just two days before Christmas. Half the IT world was panicking. It was a zero-day vulnerability, which meant every company using that technology had it in production, and it was exploitable.”

Edgar used this striking example to highlight how quickly and universally security flaws can ripple across organizations. When critical components are widely adopted, a single vulnerability can create global exposure almost overnight.
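The aftermath was, above all, an inventory problem: every team had to find out, overnight, where the affected library was running. Here is a minimal sketch of that triage, assuming a hypothetical flat inventory of artifact name/version pairs; the fixed-version threshold follows the public advisories for the Log4Shell CVE family.

```python
from packaging.version import Version  # pip install packaging

# Hypothetical inventory of deployed Java artifacts: (name, version).
INVENTORY = [
    ("log4j-core", "2.14.1"),
    ("log4j-core", "2.17.1"),
    ("jackson-databind", "2.13.0"),
]

# log4j-core releases below 2.17.1 are affected by the Log4Shell
# family of CVEs (CVE-2021-44228 and its follow-ups).
FIXED = Version("2.17.1")

for name, version in INVENTORY:
    if name == "log4j-core" and Version(version) < FIXED:
        print(f"EXPOSED: {name} {version} -> upgrade to {FIXED} or later")
```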

On AI Code Generation Risks:

“We run pipelines to test a range of security use cases. Even for simple prompts, like generating an endpoint to create and analyze a PDF, one out of three times the AI still produces flawed code.”

His research underscores a key risk: AI-generated code may feel magical, but it remains statistically unreliable. Even basic requests can result in insecure implementations, raising the stakes for organizations adopting AI tools at scale.
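The exact prompts from those pipelines aren't reproduced here, but a hypothetical Flask sketch shows the shape of the failure: an upload-and-store PDF endpoint that works on the happy path while trusting the client-supplied filename, next to the guarded variant a reviewer would insist on.

```python
from pathlib import Path

from flask import Flask, abort, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
UPLOADS = Path("/var/app/uploads").resolve()

@app.post("/pdf/unsafe")
def store_pdf_unsafe():
    doc = request.files["doc"]
    # Vulnerable: a filename like "../../app/config.py" writes outside
    # the upload directory (path traversal).
    dest = UPLOADS / (doc.filename or "upload.pdf")
    doc.save(dest)
    return {"stored": str(dest)}

@app.post("/pdf/safe")
def store_pdf_safe():
    doc = request.files["doc"]
    name = secure_filename(doc.filename or "")
    if not name.endswith(".pdf"):
        abort(400, "expected a .pdf upload")
    dest = (UPLOADS / name).resolve()
    if not dest.is_relative_to(UPLOADS):  # defense in depth
        abort(400)
    doc.save(dest)
    return {"stored": str(dest)}
```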

On the Developer Productivity Paradox:

“Productivity isn’t free. The ‘vibe coding’ we see today (fast, AI-driven code generation) often comes at the cost of quality. We’re actually seeing an increase in security issues, especially as those flaws contribute to real-world data breaches.”

Edgar cautioned that while AI accelerates delivery, it also amplifies risks. Without proper review and controls, speed becomes a liability rather than a competitive advantage.

On Legacy System Challenges:

“If you’re starting a greenfield project, AI feels amazing, things move incredibly fast. But now imagine working in a Swiss bank, full of legacy systems and tangled dependencies... that’s a different world entirely.”

He drew a clear line between AI’s strengths in clean, modern environments and its limitations within complex, regulated ecosystems. Adopting AI in the real world requires more than speed; it demands system awareness.

On Development Culture Evolution:

“AI tools often pull in external dependencies, random stuff from the internet. But in our company, we rely on vetted internal libraries. They're secure, and we enforce their usage.”

Kussberg pointed out that AI doesn’t inherently understand organizational context or standards. Left unchecked, it can undermine internal security protocols by introducing unfamiliar or unsafe libraries.
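A simple way to encode that enforcement is a gate that fails the build when a manifest references anything outside the vetted set. A minimal sketch, with a hypothetical allowlist and a plain requirements.txt layout (neither is Sonar's actual setup):

```python
import sys
from pathlib import Path

# Hypothetical allowlist of vetted internal libraries.
ALLOWED = {"acme-http", "acme-auth", "acme-pdf", "requests"}

def check_requirements(path: str) -> int:
    violations = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Keep the bare package name, dropping version specifiers.
        name = line.split(";")[0].split("==")[0].split(">=")[0].strip()
        if name not in ALLOWED:
            violations.append(name)
    for name in violations:
        print(f"unvetted dependency: {name}")
    return 1 if violations else 0  # non-zero exit fails the build

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "requirements.txt"
    sys.exit(check_requirements(target))
```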

On Governance for AI Assistants:

“You can’t really fine-tune the model for everything. But you can give it structure, your own system prompt, company architecture decisions, and security guardrails. That needs to be centralized.”

He argued for robust governance around AI usage: frameworks that define what tools can suggest, how they behave, and how they align with company-specific standards. Centralization, he emphasized, is key to safe and scalable adoption.
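What centralization can look like in practice: one guardrail definition, owned by the organization, composed into the system prompt of every assistant rather than improvised per developer. The structure below is a hypothetical illustration, not any specific product's configuration format.

```python
# Hypothetical, centrally-owned guardrails; section names and rule
# wording are illustrative.
GUARDRAILS = {
    "architecture": [
        "Use the internal acme-http client; never open raw sockets.",
    ],
    "security": [
        "Never log secrets or personal data.",
        "All SQL must use parameterized queries.",
    ],
    "dependencies": [
        "Only libraries from the internal registry may be suggested.",
    ],
}

def build_system_prompt(guardrails: dict[str, list[str]]) -> str:
    # Flatten the shared rules into one system prompt that is prepended
    # to every request an assistant sends to the model.
    lines = ["You are a coding assistant. Hard rules:"]
    for section, rules in guardrails.items():
        for rule in rules:
            lines.append(f"- [{section}] {rule}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_system_prompt(GUARDRAILS))
```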

Strategic Vision

Edgar doesn’t see security as a blocker to AI innovation, but as its critical foundation. He champions robust CI/CD pipelines that integrate AI-generated code responsibly, treating these tools not as infallible engines, but as unvetted collaborators. With supervision, structure, and secure defaults, organizations can harness AI’s potential without compromising safety.
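In pipeline terms, that means generated code clears the same gate as any other untrusted contribution before it merges. A minimal sketch of such a gate, using the open-source Bandit scanner as a stand-in for whichever analysis tool a team actually runs:

```python
import json
import subprocess
import sys

def gate(source_dir: str = "src") -> int:
    # Run the scanner over the tree and parse its JSON report. Bandit
    # is only a stand-in here, and the HIGH-severity threshold is an
    # illustrative policy choice.
    result = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout or "{}")
    high = [
        issue for issue in report.get("results", [])
        if issue.get("issue_severity") == "HIGH"
    ]
    for issue in high:
        print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
    return 1 if high else 0  # non-zero exit blocks the merge

if __name__ == "__main__":
    sys.exit(gate())
```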

Technical Philosophy

“We’re not secure just because we use AWS or Google, or because our server is locked away in a Matterhorn cellar. If you’re online, you’re vulnerable.”

For Edgar, true security starts at the code level. Infrastructure alone can’t protect against exploitability; organizations need deep, proactive defenses baked into the development lifecycle.

Industry Impact

Edgar stands at the intersection of AI, product innovation, and secure software development. His work has already shaped how millions of developers approach code quality and security in an era increasingly powered by intelligent tools. With a deep understanding of both the technical and human aspects of AI adoption, he is not just building better tools; he’s enabling a safer, smarter future for software teams worldwide.

Extended Biography

Edgar is an experienced product leader and technologist currently serving as Group Product Manager for AI Code Remediation, Agents & IDE Experience at Sonar. He previously led AI initiatives at Snyk as AI Group Lead, where he developed the company's AI strategy, secured Fortune 500 buy-in, and launched "Snyk Learn," a security education product now used by over 100,000 developers. His earlier entrepreneurial experience includes revolutionizing quick-commerce delivery in Switzerland at STASH and serving as Senior Product Manager at Numbrs, where he translated business requirements into focused product roadmaps. At LEAD Energy AG, he directed the R&D and engineering departments, coordinating international IoT initiatives. With over two decades of experience spanning IoT, cloud applications, big data, AI/ML, mobile apps, FinTech, and InsureTech, he combines hands-on technical expertise with strategic product leadership to deliver user-centric solutions that drive business growth.