International Association for Safe and Ethical AI

The International Association for Safe and Ethical AI (IASEAI) is a nonprofit organization whose stated mission is to address the risks and opportunities associated with advances in artificial intelligence (AI). Founded to promote safety and ethics in AI development and deployment, the organization focuses on shaping policy, supporting research, and building a global community of experts and stakeholders.[1]

Activities

IASEAI is involved in policy development, research and awards, education, and community-building. The organization develops policy analyses related to standards, regulation, international cooperation, and research funding, which it publishes as position papers. It also supports research into both the technical and sociotechnical aspects of AI safety and ethics.

Inaugural conference

The inaugural IASEAI conference, IASEAI '25, was held on February 6–7, 2025, in Paris, prior to the Paris AI Action Summit. The event brought together experts from academia, civil society, industry, and government to discuss developments in AI safety and ethics. The program featured over 40 talks, keynote addresses, and specialized tracks on global coordination, safety engineering, disinformation, interpretability, and AI alignment.[2][3][4]

The conference included presentations from early-career researchers and practitioners, such as Aida Brankovic of the Australian e-Health Research Centre (AEHRC), who presented guidelines developed to mitigate ethical risks in clinical decision support AI systems.[5] Other participants included Georgios Chalkiadakis of the Technical University of Crete.[6]

Topics addressed included reinforcement learning from human feedback (RLHF), AI governance, regulatory frameworks, agentic AI, misinformation, and transparency. Geoffrey Hinton's keynote, What Is Understanding?, explored how AI systems process meaning. Gillian Hadfield called for anticipatory legal capacity, and Evi Micha introduced a framework for aligning AI using “linear social choice.”[3]

The conference was noted for its emphasis on AI safety in contrast to the broader Paris AI Action Summit, which some observers said focused more on economic and geopolitical aspects of AI. Attendee Paul Salmon, a professor of human factors, criticized the broader summit for sidelining safety issues in favor of commercial narratives and outlined five “comforting myths” that obscure public understanding of AI risks.[7]

At the conclusion of the event, IASEAI issued a ten-point Call to Action for lawmakers, researchers, and civil society, recommending global cooperation, binding safety standards, and expanded public research funding.[8]

Board and committee

The board includes:

  • Amir Banifatemi – Member of the Board
  • Mark Nitzberg – Interim Executive Director, Secretary-Treasurer, Member of the Board
  • Stuart Russell – Member of the Board; Professor of Computer Science, University of California, Berkeley

The steering committee includes:

  • Yoshua Bengio – Université de Montréal, Mila
  • Kate Crawford – University of Southern California, Microsoft Research
  • Tino Cuéllar – Carnegie Endowment for International Peace
  • Gillian Hadfield – Johns Hopkins University
  • Eric Horvitz – Microsoft
  • Will Marshall – Planet Labs
  • Jason Matheny – RAND Corporation
  • Alondra Nelson – Institute for Advanced Study
  • Aza Raskin – Center for Humane Technology
  • Francesca Rossi – IBM
  • Bart Selman – Cornell University
  • Max Tegmark – Massachusetts Institute of Technology
  • Andrew Yao – Tsinghua University
  • Ya-Qin Zhang – Tsinghua University

References
