(US) Policy Advisor – AI Safety & Security

ControlAI
  • Location
    Washington, D.C.
  • Sector
    Non Profit
  • Experience
    Early Career
  • Posted
    Apr 04

Position description

ControlAI is a non-profit organisation with a singular mission: to prevent the development of unsafe superintelligence and secure a great future for humanity.

We develop policy and legislation; secure media coverage; produce high-quality videos and infographics; design and run effective digital and physical campaigns; organise events; and influence policymakers. In less than two years of operations, we have secured public support for our campaigns from high-ranking politicians, have authored draft bills for the UK and US, have created multiple viral videos, and have led international coalitions.

For example, since our founding in 2023, ControlAI has secured acknowledgement of superintelligence risk from over 25 UK lawmakers, won support from over 60 UK lawmakers for our policy to ban deepfakes across the supply chain, ensured foundation models were kept in scope in the EU AI Act against concerted corporate lobbying, and developed a comprehensive policy plan to address extinction risk from AI.

We are now beginning operations in the US and looking to expand our office and efforts in Washington, D.C.

We are expanding our U.S. policy team and seeking two policy analysts—one with a right-of-center perspective, one with a left-of-center perspective—to drive engagement with policymakers, media, and civil society on AI safety and security. This role reports to ControlAI’s US Director and works closely with global senior leadership.

Superintelligent AI presents profound challenges for national security, economic stability, and democratic governance. As concern about AI risk grows in Washington, we need credible, politically attuned voices to shape the policy debate.

This role involves developing AI policy solutions, meeting directly with politicians and civil society to advocate on the risks and solutions, and building coalitions within relevant political circles to ensure AI safety remains a bipartisan priority.

Key roles and responsibilities

  • Provide expert briefings to policymakers and civil servants on superintelligence risks and governance solutions.
  • Apply a results-driven “campaigning” approach toward policy goals, ensuring commitments to action and coalition growth across government, academia, and industry.
  • Test messaging, iterate fast, and continually optimize based on feedback and results.
  • Track, analyze, and report on progress toward key deliverables—e.g. “How do we hit 30 engaged members of Congress by June?”

Experience and skills expected of the post holder

  • BSc. or MSc. in Computer Science, Economics, Public Policy, or a related field.
  • 2+ years of experience in U.S. policy, government, think tanks, or advocacy.
  • Strong political instincts and deep familiarity with conservative or progressive policy networks.
  • Relentlessly goal-driven, comfortable with ambiguity, and energized by ownership. Excited by directions like “present to this committee, figure it out.”
  • Exceptional communication skills—clear, persuasive, and audience-aware across written, spoken, and visual formats.
  • Past experience with AI safety, another technical subject, or the ability to learn quickly about new technologies.


Application instructions

Please be sure to indicate you saw this position on Globaljobs.org