How to Enter the Responsible Tech and AI Job Market: Theodora Skeadas, HKS MPP 2016, BA 2012
Source: https://careerservices.fas.harvard.edu/blog/2026/03/16/how-to-enter-the-responsible-tech-and-ai-job-market-theodora-skeadas-hks-mpp-2016-ba-2012/
My own journey into this space
I became interested in the issues of online speech governance, content moderation, trust and safety, and AI governance through years of work and study in the Middle East and North Africa region, after witnessing firsthand the role of social media in advancing both political movements for greater democracy and hateful behavior, disinformation, and violence.
I witnessed a range of complicated online safety issues during my years working with non-governmental organizations in Greece, Morocco, and Turkey during the Arab Spring. I lived in Morocco during Tunisia's and Egypt's 2011 revolutions, which prompted significant constitutional reform in Morocco, and then in Turkey during the 2013 Gezi Park protests, when people expressed widespread discontent with the government's policies. I observed how transformational platforms like Twitter were in advancing critical political and social conversations. I witnessed governments block Internet services to control the conversation, and challenging debates erupt around important issues like online violence against women, systemic Islamophobic bias in platform algorithms, the proliferation of disinformation and fake news, and mental health and suicide. In Morocco, I worked for nonprofits in Casablanca and Rabat at the intersection of education, youth empowerment, community development, poverty alleviation, and conflict resolution. I used my French to work with immigrant women from Francophone African countries including Mali, Niger, and Mauritania, and my Moroccan Darija to provide educational services to children and adult women living in a low-income neighborhood of Casablanca. In Turkey, I taught at Akdeniz University and researched the barriers to employment for Syrian refugee youth in southeast Turkey and Kurdish Iraq.
I later spent six years at Booz Allen Hamilton, examining public sentiment, social movements, and disinformation using social media for the U.S. Federal Government. I analyzed qualitative and quantitative data across topics including countering violent extremism, counter-terrorism, and cybersecurity with tools including sentiment analysis, Tableau Software, natural language processing, and econometrics. My work covered issues including how ISIS employed sophisticated social media strategies to recruit tens of thousands of members, how Al Shabaab in Somalia used radio to disseminate its own message, how Turks used digital media to organize around the 2017 constitutional referendum, and how Eastern Europeans responded on social media to NATO's military expansion in the region. Over these years, I observed growing sophistication in social media use across all kinds of users, including violent non-state organizations, non-violent civic protestors, and journalists.
During my two years at Twitter, I worked to foster healthy global conversations and protect the digital rights of our global users. I managed all aspects of our Trust and Safety Council, Twitter's largest public consultative body. I managed a trusted partners program to provide timely support to journalists and human rights defenders globally, combating issues including platform manipulation, impersonation, fraud, human trafficking, terrorism, and child sexual abuse material. I led the Public Policy team's knowledge management efforts on issues including antitrust, disinformation and hate speech, and the open internet. I created and circulated a monthly research newsletter to the Public Policy team, detailing cutting-edge research on various human rights and privacy-related issues. In partnership with other stakeholders, I helped develop global policies and coordinate global consultations on issues like our world leaders policy, gender-based violence, and human rights impact assessments. I helped develop Twitter's Content Governance Initiative, a framework comprising guiding principles and standardized guidelines on policy development, enforcement, and appeals. I drove a knowledge management effort across our global civic integrity team, and supported our crisis response work. Lastly, I supported the Twitter Moderation Research Consortium, which shared takedown data on state-backed information operations with researchers, to boost independent researcher access to data.
Following the 2022 Twitter layoffs, I transitioned into independent consulting. For over three years, I have worked with a range of non-profits, governments, and companies on issues including AI governance, tech-facilitated gender-based violence, government efforts to combat disinformation, information integrity, journalist safety, fraud, election integrity, and AI philanthropy. Additionally, I joined DoorDash full-time as Community Policy Manager nearly two years ago. At DoorDash, I build trust and safety policies for the company. And, as the part-time Head of Red Teaming at Humane Intelligence, a nonprofit, I develop hands-on, measurable methods for real-time assessment of the societal impact of AI models. I am also now enrolled part-time as a PhD student in the Department of War Studies at King's College London, exploring the relationship between online and offline harms.
Overall, I've observed that the degradation of trust in media and public institutions, an increase in disinformation, and the proliferation of hateful behavior and violence have challenged and changed this work over time. As such, there is significant work ahead of us in ensuring that internet-based services remain free of illegal and harmful material while defending freedom of expression. My life and work experiences continue to deepen my commitment to this challenging and meaningful work.
Transitioning into this field
Here is a list of resources that can help you access this space:
Job search databases
- Business & Human Rights (BHR) Group
- Internet Law and Policy Foundry
- All Tech Is Human (ATIH)
- Trust and Safety Professional Association (TSPA)
- Pay it Forward jobs board
- Digital Rights jobs board
- Democracy jobs board
- 80,000 Hours Job Board
Communities to join (especially their Slack channels)
- All Tech is Human
- Integrity Institute
- Trust and Safety Professional Association
- Prosocial Design Network
- Coalition for Independent Technology Research
- Coalition Against Online Violence
Newsletters to subscribe to
- Tech Policy Press
- Electronic Frontier Foundation
- Data & Society Research Institute
- All Tech is Human
- Technology 202 (Washington Post)
- Center for Democracy & Technology
- Berkman Klein Center
- Everything in Moderation
- Anchor Change with Katie Harbath
- Rest of World
- Access Now
Mentoring opportunities
- Successif offers free, expert-guided career development for experienced professionals
- Horizon Institute for Public Service offers expert guidance for students and professionals on public service careers in emerging technology policy
- Trust and Safety Professional Association offers coffee hours you can sign up for
- Jeff Dunn’s mentoring program for trust and safety
Resources for the Global Majority
- Tech Global Institute is a great resource (based in Bangladesh)
- Rest of World does great reporting on Global Majority issues
- Access Now, Article19, Frontline Defenders, and Witness focus a lot on the Global Majority
- There are some great region-specific groups that you can also connect with:
- MENA region: SMEX (MENA-wide), 7amleh (Palestine), and Digital Rights Foundation (Pakistan)
- Asia region: Wahid Institute (Indonesia), TELL (Japan), Aarambh India (India), Mental Health PH (Philippines)
- Latin America: Fundacion Multitudes (Chile), Fundación Karisma (Colombia), Jacarandas (Colombia), El Veinte (Colombia), Sisma Mujer (Colombia), Cazadores de Fake News (Venezuela), Te Protejo (Colombia/Mexico)
- Africa: Techworker Community Africa (Kenya), Kuram (Nigeria), TechHer (Nigeria), The Initiative for Equal Rights (Nigeria)
List of AI fellowships
- Second Look Fellows pilot
- FATE: Fairness, Accountability, Transparency, and Ethics in AI – interns and postdocs
- Digital Leadership Scholarship
- Anthropic AI Safety Fellow
- Policy Fellowship at Centre for Science and Policy, University of Cambridge
- Fellowship at Harvard’s Berkman Klein Center
- Research Fellowship at Pivotal
- Ethics and Technology Practitioner Fellowship, Stanford
- Horizon Institute for Public Service’s Policy Fellowships
Frameworks or taxonomies of harm
- Institute for AI Policy and Strategy’s AI Agent Governance: A Field Guide
- Ada Lovelace Institute’s An Autonomy-Based Classification: Liability in the Age of AI Agents
- OpenAI’s A practical guide to building agents
- Artificial Intelligence Underwriting Company AIUC-1 standard
- NVIDIA’s A Safety and Security Framework for Real-World Agentic Systems
- IBM’s AI agents: Opportunities, risks, and mitigations
- Microsoft’s Taxonomy of Failure Modes in Agentic AI Systems
- Microsoft’s Foundations of Assessing Harm
- Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
- Digital Action’s online harms taxonomy
- Digital Trust & Safety Partnership
- Trust and Safety Professional Association’s Abuse Types
- A Framework of Severity for Harmful Content Online
- Unified Typology of Harmful Content
- ISO 42001
- American National Standards Institute – ANSI
- National Institute of Standards and Technology (NIST) AI RMF
Additional resources
- Technologists for the Public Good’s community resources
- Brooke’s list of lists
- Michael Oghia’s resources on getting hired
- Sample tracker to monitor outreach and engagement
- List of tech/AI policy conferences in 2026
About the Author:
Theodora Skeadas is a policy professional with 13 years of experience at the intersection of technology, society, and safety. As Head of Red Teaming at Humane Intelligence, she develops hands-on, measurable methods for real-time assessment of the societal impact of AI models. As DoorDash's Community Policy Manager, she builds trust and safety policies for the company. She is a part-time PhD student in the Department of War Studies at King's College London, exploring the relationship between online and offline harms. She chairs the Advisory Board of All Tech is Human, and is Co-Chair of the Board of Directors at the Integrity Institute.
Theodora graduated from Harvard College with a B.A. in Philosophy and Government, and minors in Near Eastern Languages and Civilizations, Modern Standard Arabic, and Modern Greek, and Harvard Kennedy School with an MPP.
By Guest or External Blog