Manuel Gustavo Isaac
Anticipating AI's Impact on Human Identity
At the Panoramai AI Summit, Manuel Gustavo Isaac, philosopher and researcher at the Geneva Science and Diplomacy Anticipator (GESDA), brought essential philosophical rigor to discussions of AI's societal implications. As a historian and philosopher of science and technology, he provided frameworks for understanding how AI is transforming fundamental human relationships.
The Science Breakthrough Radar
Manuel presented GESDA's comprehensive anticipation methodology, featuring the Science Breakthrough Radar, which covers more than 40 emerging scientific topics and identifies over 300 breakthroughs across 5- to 25-year timeframes. This global research initiative, informed by more than 2,000 leading scientists worldwide, serves as the foundation for evidence-based policy recommendations to international organizations, including the UN Scientific Advisory Board.
This approach distinguishes GESDA by starting with scientific research rather than global problems, ensuring that anticipation work remains grounded in empirical evidence rather than speculation about future challenges.
Planetarized Humanity Initiative
Manuel introduced GESDA's "Planetarized Humanity" concept, addressing three fundamental philosophical questions that AI advancement will reshape:
Who are we as humans in the context of artificial intelligence?
How can we live together as societies with AI integration?
How do we maintain sustainable relationships with our planet?
This framework recognizes that AI transformation extends beyond technological change to challenge basic assumptions about human identity, social cooperation, and environmental stewardship.
Trust and Transparency in AI Systems
Manuel provided crucial conceptual clarity around trust in AI relationships, distinguishing between epistemic trust (based on truthfulness and expertise requiring transparency) and moral trust (based on benevolence and moral agency). This distinction proves critical for preventing misalignment between stakeholders who design, deploy, and use AI systems.
His insight that moral trust relies on an "unquestioning attitude," like trusting children without demanding complete transparency, helps explain why applying human trust concepts to AI systems can actually undermine the transparency necessary for safe deployment.
Global Governance Perspective
Manuel emphasized the need for inclusive dialogue across diverse cultural frameworks, noting that the Western rights-based approaches embedded in regulations like the AI Act may not align with values held in other cultures. His call for inclusivity recognizes that effective AI governance requires engaging constructively with different philosophical and cultural perspectives on technology's role in society.
Practical Wisdom for Leaders
Manuel's closing advice balanced technical and humanistic concerns, reminding business leaders that while technical challenges represent significant hurdles, the societal and anthropological dimensions of AI deployment demand equal attention. This perspective helps ensure that AI implementation serves human flourishing rather than mere optimization.
Key Takeaway: Manuel demonstrated how philosophical rigor and global perspective can inform practical AI governance, providing conceptual frameworks that help leaders navigate the fundamental questions AI raises about human identity, social cooperation, and ethical technology development.