Bayrem Kaabachi

Bridging Research and Clinical Reality

At the Panoramai AI Summit, Bayrem Kaabachi, data scientist at CHUV (Lausanne University Hospital) and PhD candidate, provided sobering insights into healthcare AI implementation that challenge Silicon Valley deployment timelines. His experience developing life-saving algorithms reveals the complex journey from research breakthrough to patient care.

The Three-Year Reality

« It takes time to go from those Jupyter notebooks to an implementation inside the hospital », Kaabachi explained, describing a three-year process to deploy sepsis prediction algorithms that could themselves be developed within weeks. The ESCHAR sepsis prevention system exemplifies these challenges: while the machine learning models could predict patient risk effectively, the surrounding infrastructure, validation protocols, and clinical integration required extensive development.

Collaborative AI Development

Kaabachi emphasized the imperative of interdisciplinary collaboration: « We have to work with nurses, we have to work with doctors, we have to create this AI kind of together ». His approach recognizes that healthcare AI success depends not just on algorithmic accuracy but on clinical workflow integration and physician acceptance. He noted that CHUV physicians show « both extremes »—some completely resistant, others « over eager » for research opportunities.

Responsible AI Research

His current research focuses on differential privacy techniques to protect patient data when training large language models, addressing a critical healthcare challenge: ensuring AI systems don't inadvertently expose individual patient information. This work reflects a broader commitment to responsible AI development, balancing innovation with privacy protection.
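To make the core idea concrete: differential privacy adds calibrated random noise to query results or training updates so that no single patient's record measurably changes the output. The sketch below is purely illustrative and is not Kaabachi's or CHUV's actual method (training LLMs privately typically involves noising gradients rather than counts); it shows the classic Laplace mechanism applied to a hypothetical count query over patient records, with noise scale set to sensitivity/epsilon.

```python
import math
import random

def dp_count(records, predicate, epsilon, rng):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one record
    changes the result by at most 1), so Laplace noise with scale
    1/epsilon makes the released count epsilon-differentially private.
    """
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) by inverse-CDF: u uniform on [-0.5, 0.5)
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical usage: count flagged patients in a toy record set.
rng = random.Random(42)
patients = [{"sepsis_flag": i % 3 == 0} for i in range(30)]  # 10 flagged
noisy = dp_count(patients, lambda r: r["sepsis_flag"], epsilon=1.0, rng=rng)
```

Smaller epsilon means stronger privacy but noisier answers; the noise is zero-mean, so repeated independent releases average toward the true count, which is exactly why privacy budgets must be tracked across queries.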

Infrastructure Foundation

Kaabachi outlined CHUV's systematic approach: joining the Swiss Personalized Health Network in 2017 for data interoperability, establishing the Biomedical Data Science Center in 2021 (more than 50 data scientists, with gender parity and 18 nationalities represented), and developing a Trusted Research Environment that enables external collaboration while maintaining data security.

Validation Imperative

His team applies pharmaceutical research methodologies to AI validation, using randomized controlled trials for « consultation augmented by LLM » to ensure patient safety. This rigorous approach contrasts with rapid deployment models, prioritizing clinical evidence over speed.

Key Achievement: Kaabachi illustrated how healthcare AI leaders can balance innovation urgency with patient safety requirements, establishing validation frameworks that could influence broader enterprise AI implementation standards.

Data scientist and PhD student at CHUV (Lausanne University Hospital) specializing in trustworthy AI, with a focus on privacy-enhancing technologies for medical data. He is currently developing private synthetic data solutions and advanced anonymization tools to support medical researchers while ensuring ethical and secure data use. He previously worked as a Data Science Intern at CHUV, applying Generative Adversarial Networks to medical use cases and assessing the utility and privacy of synthetic medical data. Earlier experience includes roles as a Developer-Analyst at the University of Lausanne working with topological data analysis, a Cybersecurity Intern at Capgemini implementing cryptographic solutions, and a Data Science Research Intern at Inria using NLP tools for textual data analysis. His work addresses key challenges in medical data, including data imbalance, privacy concerns, and de-anonymization risks in electronic health records.