AI Security Threats and Defense Strategies (Keynote)

Jun 9, 2025

Jun 9, 2025

Enterprise AI adoption has reached a critical inflection point where 90% of organizations now deploy AI systems, yet growing security concerns highlight the urgent need for comprehensive threat mitigation strategies. Social engineering attacks amplified by generative AI capabilities represent the most immediate and scalable threat to Swiss and European enterprises.

Date: June 5th, Track 1

The New Threat Landscape: AI-Powered Adversaries

Doron Bar Shalom, Director of Product Innovation at Microsoft Security's Global CTO Office, delivered a compelling keynote on the evolution of cybersecurity threats in the AI era. Drawing from his 15+ years in cybersecurity across agencies and startups, Bar Shalom illustrated how threat actors have fundamentally transformed their operational capabilities through AI.

His presentation centered on a real-world scenario involving "Contoso Dynamics," a SaaS company serving critical industries including retail, pharma, and food sectors. This fictional but representative case study demonstrates how sophisticated threat actors—cybercrime groups with espionage links—now leverage AI to conduct « surgical and multi-stage attacks » targeting high-value corporate assets.

Intelligence Gathering Revolution

Modern threat actors have gained unprecedented advantages through AI-enhanced Open Source Intelligence (OSINT) operations. Bar Shalom explained how attackers systematically harvest data from LinkedIn, GitHub, and corporate blogs to build detailed organizational profiles. The game-changing factor lies in AI's ability to process this massive data collection instantly.

« What have changed in the last two years that the massive data that they are going to scrape now they can analyze with AI and ask the relevant question and get immediately the relevant answer » Bar Shalom emphasized. This creates what he termed an « asymmetric advantage » over organizational defenders, as threat actors can identify key personnel—lead machine learning engineers, DevOps managers, VP strategy roles—and understand critical infrastructure components before launching attacks.

Weaponized Social Engineering

The keynote revealed how generative AI has transformed phishing campaigns from broad-spectrum attacks to precision-targeted operations. Threat actors now craft highly personalized campaigns using language and concepts specifically relevant to individual targets, executing these at scale across entire organizations.

Bar Shalom emphasized the dual threat nature: « I think both. Because first, first thing is because you can use massive data and those tools so you can do a massive phishing campaign not for specific business... But when I do it for a specific business, it is more powerful »

This evolution has given rise to advanced SPEAR phishing campaigns that combine publicly available personal information with AI-generated, contextually appropriate messaging to create nearly undetectable social engineering attacks.

Defensive AI: Fighting Fire with Fire

Microsoft Security's approach to countering AI-powered threats involves deploying generative AI defensively across multiple operational layers.

Security Operations Center Enhancement

Bar Shalom detailed how Security Operations Centers (SOCs) now leverage LLMs to process massive signal volumes from security products. This automation enables natural language querying of security logs, replacing complex query languages like KQL with intuitive questions like « show something that happened with RDPs »

The defensive strategy includes adversarial AI testing, where organizations use AI to attack their own systems and products. « We are using AI in order to attack our own products and our own customers. We're doing it in a protective way, but we are doing it in order to simulate attacks » Bar Shalom explained.

Challenge of Signature-Based Detection

Traditional threat intelligence relies on known Tactics, Techniques, and Procedures (TTPs)—essentially signatures of threat actor behavior. However, generative AI enables code obfuscation at scale, rendering historical signature-based detection methods ineffective. « The signature is not relevant anymore » Bar Shalom stated, forcing defenders to develop new detection methodologies.

Enterprise AI Risk Categories

Bar Shalom outlined three critical risk layers affecting enterprise AI deployment:

AI Usage Layer

  • Data Leakage: Employees inadvertently sharing proprietary data through public AI tools like ChatGPT or Gemini

  • Application Control: Managing approved LLM-based applications and their associated plugins

  • Governance Gaps: Lack of policies governing enterprise AI tool usage

AI Development Layer

  • Prompt Injection Attacks: Malicious prompts designed to extract system data or breach organizational boundaries

  • Insecure Extensions: Risks associated with Model Control Protocol (MCP) servers acting as translators between AI systems and enterprise applications

  • Data Exposure: Sensitive information leakage through AI application responses

AI Platform Layer

  • Training Data Poisoning: Contamination of proprietary model training datasets

  • Model Theft: Unauthorized extraction of proprietary AI models

  • Safety vs. Security: Distinguishing between security threats and AI safety concerns

The MCP Security Challenge

Bar Shalom highlighted Model Control Protocol (MCP) servers as an emerging threat vector. These systems replace traditional APIs as connectors between AI applications and enterprise systems like SAP. The security challenge lies in verifying that open-source MCP servers perform only their declared functions without hidden malicious capabilities.

« We need to analyze the code as well in order to see that the intent is equal to the code that the MCP is using » Bar Shalom explained, emphasizing the need for comprehensive MCP security auditing procedures.

Multimodal Threat Evolution

The keynote concluded with a practical demonstration of prompt injection through a seemingly innocent email containing hidden white text instructions. This attack vector becomes exponentially more dangerous in multimodal AI environments.

Bar Shalom painted a concerning picture of future attack surfaces: « Imagine that we are going to have multimodal. I'm going to have my glasses with camera and they're going to see the context of the environment... Someone can put a sticker in the road and I'm going to... they are going to use it as a prompt injection »

This scenario illustrates how physical world elements—stickers, signs, or visual cues—could serve as prompt injection vectors in augmented reality and computer vision systems.

Strategic Implications for Swiss Enterprise

The keynote revealed that while AI adoption accelerates (90% of enterprises according to McKinsey), organizational confidence in AI security continues declining. This creates a critical gap between AI deployment velocity and security preparedness, particularly relevant for Swiss companies operating under stringent data protection requirements.

Bar Shalom emphasized that many security risks predate generative AI but are now amplified by AI capabilities. Swiss enterprises must develop comprehensive AI governance frameworks that address both traditional cybersecurity concerns and emerging AI-specific threats while maintaining compliance with European data protection standards.

The interactive Q&A portion of the keynote highlighted the urgent need for executive-level understanding of these evolving threat landscapes. As Bar Shalom concluded, these threats are « not the future, it's happening right now » requiring immediate organizational response and strategic planning.