Experts are looking at ways to manage the extreme risks posed by advanced AI. 

The rapid development of artificial intelligence (AI), especially generalist AI systems that match or exceed human capabilities, has raised significant concerns about potential risks to humanity. 

Researchers worldwide have highlighted the absence of a unified approach to managing these extreme risks. 

They advocate for proactive and adaptive governance to mitigate these dangers, urging significant investment from big tech companies and public funders in risk assessment and mitigation strategies. 

Furthermore, they call upon global legal institutions and governments to enforce stringent standards to prevent AI misuse.

In a new article, researchers underscore the urgency of developing robust governance mechanisms to manage AI's potential harms.

Their recommendations stress the need for major technology firms and public funders to allocate at least one-third of their budgets to AI safety research and mitigation efforts. 

“To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path – if we have the wisdom to take it,” the researchers argue. 

The rapid progression of AI technology, particularly in developing generalist AI systems, poses grave societal risks. 

These include amplifying social injustice, eroding social stability, enabling large-scale cybercriminal activity, and facilitating automated warfare, customised mass manipulation, and pervasive surveillance. 

The most alarming threat is the potential loss of human control over autonomous AI systems, which could act independently and pursue unintended goals.

Technology companies are in a fierce race to develop AI systems that surpass human abilities across various critical domains. 

This intense competition has resulted in significant investment and resource allocation to enhance AI capabilities. However, there is a stark imbalance, with much less focus on ensuring the safe and ethical development and deployment of these advanced systems.

Bengio et al. emphasise that humanity is not currently equipped to handle the potential risks associated with advanced AI. 

They highlight that, compared to the efforts to make AI more powerful, very few resources are dedicated to addressing safety concerns. 

Only an estimated 1 to 3 per cent of AI publications focus on safety. The researchers outline urgent priorities for AI research and development, stressing the need for breakthroughs in several areas to enable reliably safe AI.

To ensure AI systems' safety and ethical use, the experts say significant technical and governance challenges must be addressed. These include:

  • Oversight and Honesty: Developing robust methods to oversee and test AI systems to prevent them from exploiting technical weaknesses

  • Robustness: Ensuring AI systems behave predictably in new situations and improving aspects of robustness that do not scale with model size

  • Interpretability and Transparency: Enhancing understanding of AI decision-making processes, which are often opaque in larger models

  • Inclusive AI Development: Mitigating biases and integrating the values of diverse populations affected by AI advancement

  • Evaluating Dangerous Capabilities: Implementing rigorous methods to assess AI capabilities and predict potential threats before training and deployment

  • Risk Assessment: Developing comprehensive risk assessment methodologies to understand societal risks associated with frontier AI systems

The team says that the potential risks posed by advanced AI are immense, but so are the opportunities if managed properly. 

AI could revolutionise disease treatment, elevate living standards, and protect ecosystems. 

However, without sufficient governance and safety measures, the advancement of AI could lead to catastrophic outcomes. 

“There is a responsible path – if we have the wisdom to take it,” the authors say.