What is AI Risk?
Understanding the potential consequences of powerful AI technology and what could happen if it surpasses human capability
AI Risk Defined
AI risk refers to the potential consequences of powerful AI technology, and to what could happen if that technology becomes so advanced that it surpasses human capability.
The Promise
AI is already helping us in amazing ways: medical breakthroughs, language translation, safer cars, and countless other innovations that improve our lives.
The Risk
As AI becomes more powerful, operates faster, and acts in ways humans may not fully understand, some researchers worry the risks could grow much larger.
Artificial General Intelligence (AGI)
The current trajectory suggests AI may become as capable as humans across a wide range of areas, a milestone known as 'artificial general intelligence'. At that point, many researchers believe AI will no longer be just another tool we can control, but a force that could reshape society in unpredictable ways.
The Debate
Some AI experts argue that if AGI is developed without strong safety measures and global cooperation, it could pose an existential risk to humankind, meaning it could bring about the end of human life on our planet, perhaps forever. Others disagree and think such fears are exaggerated. Surveys of AI researchers show wide disagreement: some believe the risk is effectively zero, while others believe it could be catastrophic.
What most agree on: the future of AI and its limits are uncertain, so the stakes are unusually high.
What Experts Are Saying
Geoffrey Hinton
The 'Godfather of AI'
Formerly at Google
"There is 10% to 20% chance AI will lead to human extinction in 30 years"
Elon Musk
CEO of Tesla & SpaceX
Tesla, SpaceX, xAI
"Mark my words, AI is far more dangerous than nukes"
Yoshua Bengio
AI Pioneer & Turing Award Winner
University of Montreal
"We need to be very careful about the development of AI systems"
It's Not About Being For or Against Technology
At its heart, the AI risk conversation isn’t about being ‘for’ or ‘against’ technology. Instead, it’s about recognising that AI could be one of the most powerful inventions in human history, capable of improving billions of lives but also carrying the potential for unintended and possibly irreversible harm.
At There’s Always One, we believe everyone should be as aware of the risks of AI as they are of AI itself. It’s not about labelling a technology as good or bad, but about fostering open discussions that include voices from around the world, not just those who are building (and stand to benefit from) the technology.
