Full-Time AI Offense Defense Dynamics Lead Researcher
Future Of Life Organizations is hiring a remote, full-time AI Offense Defense Dynamics Lead Researcher. The role is at the Expert career level and is open to applicants worldwide. Read the complete job description before applying.
Job Details
Lead research to decode the offense-defense dynamics of AI, examining how specific attributes of AI technologies influence whether they enhance societal safety or amplify risks. Develop quantitative and qualitative frameworks that analyze how AI capabilities proliferate through society.
Responsibilities
- Develop quantitative system dynamics models capturing the interrelationships between technological, social, and institutional factors that influence AI risk landscapes (see the illustrative sketch after this list)
- Design detailed analytical models and simulations to identify critical leverage points where policy interventions could shift offense-defense balances toward safer outcomes
- Expand and operationalize our current offense-defense dynamics taxonomy and nascent framework
- Build empirically informed analytical frameworks using documented cases of AI misuse and beneficial deployments to validate theoretical models
- Research how specific technical characteristics interact with sociotechnical contexts to determine offense-defense balances
- Build public understanding of offense-defense dynamics through blog posts, articles, conference talks, and media engagement
- Create tools and methodologies to assess new AI models upon release for their likely offense-defense implications
- Draft evidence-based guidance for AI governance that accounts for complex interdependencies between technological capabilities and deployment contexts
- Translate research findings into actionable guidance for key stakeholders
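To make the first responsibility concrete, here is a purely illustrative sketch of the kind of quantitative system dynamics model it refers to: two stocks (accessible offensive capability and deployed defensive capacity), proliferation and hardening flows, and a policy lever acting through a balancing feedback loop. Every name, coefficient, and the `simulate` function itself are hypothetical assumptions made for illustration, not part of the role or of any existing codebase.

```python
# Hypothetical stock-and-flow sketch of an offense-defense balance.
# All variables and coefficients are illustrative assumptions only.
import numpy as np

def simulate(steps=200, dt=0.1,
             diffusion=0.05,      # rate at which an offensive capability proliferates
             hardening=0.04,      # rate at which defensive capacity matures
             policy_lever=0.0):   # illustrative intervention strengthening defense
    offense = np.zeros(steps)     # stock: accessible offensive capability
    defense = np.zeros(steps)     # stock: deployed defensive capacity
    offense[0], defense[0] = 1.0, 1.0
    for t in range(1, steps):
        # Flows: proliferation grows offense; hardening (plus the policy lever,
        # driven by the current gap) grows defense -- a balancing feedback loop.
        gap = offense[t - 1] - defense[t - 1]
        offense[t] = offense[t - 1] + dt * diffusion * offense[t - 1]
        defense[t] = defense[t - 1] + dt * (hardening * defense[t - 1]
                                            + policy_lever * max(gap, 0.0))
    return offense, defense

if __name__ == "__main__":
    baseline = simulate()
    intervened = simulate(policy_lever=0.08)
    print("final gap, baseline:   %.2f" % (baseline[0][-1] - baseline[1][-1]))
    print("final gap, intervened: %.2f" % (intervened[0][-1] - intervened[1][-1]))
```

Running the script compares the terminal offense-defense gap with and without the intervention, which is the sort of leverage-point analysis the simulations above would support at far greater fidelity.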
Requirements
- An M.Sc. or higher in Computer Science, Cybersecurity, Criminology, Security Studies, AI Policy, Risk Management, or a related field
- Demonstrated experience with complex systems modeling, risk assessment methodologies, or security analysis
- Strong understanding of dual-use technologies and the factors that influence whether capabilities favor offensive or defensive applications
- Deep understanding of modern AI systems, including large language models, multimodal models, and autonomous agents
- Experience in any of the following: Security mindset, Security studies research, Cybersecurity, Safety engineering, AI governance, Operational risk management, Systems dynamics modeling, Network theory, Complexity science, Adversarial analysis, or Technical standards development
- Ability to develop both qualitative frameworks and quantitative models that capture sociotechnical interactions
- Record of relevant publications or research contributions related to technology risk, governance, or security
- Exceptional analytical thinking with ability to identify non-obvious path dependencies and feedback loops in complex systems
Pluses
- PhD in a relevant field
- Experience with system dynamics modeling, hypergraph techniques, or other complex network analysis methods
- Skills in developing interactive tools or dashboards for risk visualization and communication
- Background in interdisciplinary research bridging technical and social science domains
- Demonstrated aptitude in top-down techniques and first-principles thinking
- Experience with the quantification of qualitative risk factors or developing proxy metrics for complex phenomena
- Background in compiling and analyzing incident databases or case studies for pattern recognition
- Familiarity with empirical approaches to technology assessment and impact prediction
- Knowledge of international relations theory as it applies to technology proliferation dynamics