Leopold Aschenbrenner

German AI researcher

Leopold Aschenbrenner (born 2001 or 2002[1]) is a German artificial intelligence (AI) researcher and investor. He was part of OpenAI's "Superalignment" team before he was fired in April 2024 over an alleged information leak, which Aschenbrenner disputes. He has published a popular essay called "Situational Awareness" about the emergence of artificial general intelligence and related security risks.[2] He is the founder and CIO of Situational Awareness LP, a hedge fund investing in companies involved in the development of AI technology.[3]

Biography

Aschenbrenner was born in Germany. He was educated at the John F. Kennedy School in Berlin and graduated as valedictorian from Columbia University in 2021 at the age of 19, majoring in economics and mathematics-statistics.[1][4][5] He did research for the Global Priorities Institute at Oxford University and co-authored a 2024 working paper with Philip Trammell of Oxford. Aschenbrenner was a member of the FTX Future Fund team, an effective altruism philanthropic initiative created by the FTX Foundation,[6] from February 2022 until his resignation prior to FTX's bankruptcy in November of that year.[7][8]

OpenAI

Aschenbrenner joined OpenAI in 2023 on a team called "Superalignment", headed by Jan Leike and Ilya Sutskever. The team pursued technical breakthroughs to steer and control AI systems smarter than humans.[9] As a member of the team, Aschenbrenner co-authored "Weak-to-Strong Generalization",[10] which was presented at the 2024 International Conference on Machine Learning.[11]

In April 2023, a hacker gained access to OpenAI's internal messaging system and stole information, an event that OpenAI kept private.[12] Subsequently, Aschenbrenner wrote a memo to OpenAI's board of directors about the possibility of industrial espionage by Chinese and other foreign entities, arguing that OpenAI's security was insufficient. According to Aschenbrenner, this memo led to tensions between the board and the leadership over security, and he received a warning from human resources. OpenAI fired him in April 2024 over an alleged information leak; Aschenbrenner said the leak in question was a benign brainstorming document shared with three external researchers for feedback. OpenAI stated that the firing was unrelated to the security memo, whereas Aschenbrenner said he was told explicitly at the time that the memo was a major reason.[13][14] The "Superalignment" team was dissolved one month later, with the departure from OpenAI of other researchers, including Ilya Sutskever and Jan Leike.[15]

Situational Awareness essay

In 2024, Aschenbrenner wrote a 165-page essay titled "Situational Awareness: The Decade Ahead".[16] Its sections predict the emergence of AGI, imagine a path from AGI to superintelligence, describe four risks to humanity, outline a way for humans to deal with superintelligent machines, and articulate the principles of an "AGI realism". He specifically warns that the United States needs to defend against the use of AI technologies by countries such as Russia and China.[17] Aschenbrenner argues that by 2027 AI systems will be capable of conducting their own AI research. Hundreds of millions of AGIs could then automate AI research, compressing a decade of algorithmic progress into less than a year and leading to "runaway superintelligence".[18]

Investment firm

After publishing "Situational Awareness" in 2024, Aschenbrenner founded Situational Awareness LP, an AI-focused hedge fund named after the essay and backed by Patrick and John Collison, Daniel Gross, and Nat Friedman.[17][19] The fund manages over $1.5 billion as of 2025.[3]

References
