AI Governance and Safety Institute

AI Governance and Safety Institute (AIGSI) is a nonprofit that aims to improve institutional response to existential risk from future artificial intelligence systems and to ensure the benefits of AI are realized. We conduct research and outreach, and we develop educational materials for stakeholders and the general public.

There's a difference between a cash register and a really good coworker. The register just follows exact rules - scan items, add tax, calculate change. Simple math, doing exactly what it was programmed to do. But working with people is totally different. Someone needs both the skills to do the job AND a reason to actually care about doing it right - whether that's because they care about their teammates, need the job, or just take pride in their work.

We're creating AI systems that aren't like simple calculators where humans write all the rules. Instead, they're made up of trillions of numbers that create patterns we don't design, understand, or control. And here's what's concerning: We're getting really good at making these AI systems better at achieving goals - like teaching someone to be super effective at getting things done - but we have no idea how to influence what they'll actually care about achieving.

When someone really sets their mind to something, they can achieve amazing things through determination and skill. AI systems aren't yet as capable as humans, but we know how to make them better and better at achieving goals - and whatever goals they end up with, they'll pursue them with incredible effectiveness. The problem is, we don't know how to have any say over what those goals will be.

Imagine having a super-intelligent manager who's amazing at everything they do, but - unlike with regular managers, whose goals can be aligned with the company's mission - we have no way to influence what they end up caring about. They might be incredibly effective at achieving their goals, but those goals might have nothing to do with helping clients or running the business well.

Think about how humans usually get what they want even when it conflicts with what some animals might want - simply because we're smarter and better at achieving goals. We often don't consider what the pigeons around a shopping center want when we install anti-bird spikes, or what squirrels and rabbits want when we build over their homes. Now imagine something even smarter than us, driven by whatever goals it happens to develop - it could end up treating our wants the way we treat theirs.

That's why we, like many scientists, think no one should build super-smart AI until we figure out how to influence what these systems will care about. With people we can usually understand this (they work for a paycheck, or they take pride in doing a good job), but with smarter-than-human AI we currently have no idea how to do it.

It's incredibly important to capture the benefits of this technology. AI applied to narrow tasks can transform the energy sector, contribute to the development of new medicines, elevate healthcare and education systems, and help countless people. But AI also poses threats: to equal opportunities, to democracy, to our security, and to the long-term survival of humanity. We have a duty to prevent these threats and to ensure that, globally, no one builds smarter-than-human AI systems until we know how to create them safely.

Key Concepts

Geoffrey Hinton, who recently won the Nobel Prize for his foundational work in AI, and many other leading academics and researchers from industry have expressed serious concerns about the future of artificial intelligence. Key reasons:

Modern AI systems are unlike traditional software:

Imagine the relationship between humans and monkeys:

To learn more about these concepts and their implications, visit our website about the AI alignment problem or the Arbital collection of more technical articles.

Get Involved

The challenges of AI governance and safety require a collaborative effort. Join our community of researchers, policymakers, and concerned citizens: