Responsible Tech Jobs | Detailed List
All Tech Is Human curates a list of Responsible Tech positions: roles focused on reducing the harms of technology, diversifying the tech pipeline, and ensuring that technology is aligned with the public interest.
You can view our regularly-updated Responsible Tech Job Board or scan below for an expanded look at many of our listed roles.
If you are looking for a Responsible Tech role, we offer a range of activities to support your career growth:
Join our Slack community (over 1,600 people across the globe).
Read our Responsible Tech Guide, which provides an overview of the nascent field and features a broad range of interviews and advice.
Attend our ASK A MENTOR! video interview series to learn directly from those actively working in the field.
Attend a Responsible Tech Mixer (online), which is a casual gathering of a diverse range of individuals both working in Responsible Tech and those looking to get more involved.
Organizations
Algorithmic Justice League | Director of Education [Remote]
“The Director of Education leads AJL's efforts to educate our members, partners, allies, and audiences as we build a powerful movement for equitable and accountable AI. They develop creative, compelling educational materials focused on specific groups of educators and learners, and use methods like popular education, creative science communication, and project based learning to engage, educate, and inspire people to learn more about the AI systems that increasingly govern our lives. They develop educational materials and projects for a wide range of learners across age ranges, learning styles, and cultural contexts, including those from communities that are most directly harmed by AI systems, as well as high school and college students, policymakers, and educators.”
Center for Democracy & Tech | Director, Privacy & Data Project [Washington, D.C. (Remote during Covid-19)]
“The Director, Privacy & Data Project will lead a growing team of 5+ counsels focused on changing the law and business practices to protect consumers’ and workers’ privacy, prevent discriminatory uses of data, and promote responsible use of artificial intelligence (AI). Key workstreams include: advocating for meaningful federal privacy legislation, direct-to-company advocacy to improve corporate data practices, working with federal and state agencies to promote effective oversight and regulation, and overseeing specific grant projects focused on health privacy, worker privacy, and the impact of algorithm-driven decision systems for people with disabilities and other historically marginalized communities. Each of these workstreams places a strong focus on equity and the risks of discriminatory uses of data.”
Hewlett Foundation | Cyber Initiative and Special Projects Fellow [Menlo Park, CA]
“The Cyber Initiative and Special Projects Program Fellow will play an integral role in two distinct grantmaking efforts housed in the foundation president’s office. The fellow will be a member of the Cyber Initiative team, working closely with and reporting to the Director of the Cyber Initiative, who will provide the fellow with direction on the initiative’s strategy, activities, and ongoing annual events. Launched in March 2014, the goal of the initiative (and its $130 million grantmaking budget) is to build a more capable and diverse field of cyber policy experts and expertise. The Cyber Initiative takes a broad view of cyber policy to include issues ranging from encryption to net neutrality to internet governance to cyber conflict. The Initiative will sunset in 2023 (coterminous with the fellow’s term), so the fellow will play an integral role helping to make the Initiative’s final grants, help promote and organize collaborations, and ensure its lasting impact.”
Patrick J. McGovern Foundation | Director, Direct Data and AI Services [Boston, MA or San Francisco, CA (Remote during Covid-19)]
“[The] Director of Direct Data and AI Services will build and manage a team of data engineers and scientists to provide technology and data services to nonprofits that aim to use data to have a positive impact on people and their communities. Working closely with the Vice President and the President of PJMF, the Director will develop a strategy that delivers value and expertise to nonprofits using a variety of different technologies across multiple programmatic areas. The Director and their team will translate strategic plans into execution through the use of their deep technical expertise, building out a portfolio of nonprofits that will collectively demonstrate the various ways to increase data maturity in the nonprofit sector. The Director will also contribute to identifying field-level interventions and the development of educational resources aimed at building the competency of using data for impact of a larger cohort of nonprofit organizations.”
Companies
Article One | Responsible Innovation Manager [San Francisco or other major tech hubs in US, UK, and Europe]
“As Article One’s Manager, Responsible Innovation, you will drive transformative change at the world’s most influential companies, helping our clients embed ethics and human rights in product design, development, and deployment. You will help build Article One’s responsible innovation practice by managing consulting projects, generating leads, optimizing existing and designing new advisory services, building and leading a team of consultants, and setting goals for an impactful and transformative practice. In addition, you will play a leading role in advancing Article One’s Roundtable on Human Rights and Artificial Intelligence, a collaboration platform for leading AI companies.”
Deloitte | Manager of Digital Ethics | [Amsterdam]
“As a Manager at the Deloitte Digital Ethics team you share our passion for Ethics of Data & Technology. We advise our clients on ethical considerations in the use of and in the development of digital solutions and technologies such as AI and Machine Learning. This also means that we help our clients to translate their business strategy into a practical interpretation of how the client can harness data and advanced technologies in a responsible manner.”
Responsibilities include:
advising our clients on organisational and technical challenges related to Digital Ethics;
solving and contributing ideas about relevant ethical and social issues related to data and technology;
conducting ethical impact assessments, based on a predetermined framework of standards;
advising on the protection of personal data in coordination with our colleagues from the Deloitte Cyber Privacy team.
Credo AI | Head of Marketing [Palo Alto, can be remote]
“We’re looking for a critical member of the Credo AI team who likes to operate at the intersection of marketing, product marketing, and building an ecosystem, to help define our communication strategy for our impactful customer solutions. You will be responsible for creating the most compelling content to help customers understand the use cases and value propositions, and building the right programs to increase customer engagement and service adoption. You deeply resonate with our vision to empower organizations to deliver Ethical AI at Scale.”
“Bachelor’s and/or Master's Degree in Computer Science; an engineering or technology background is a plus...Experience in machine learning/AI, privacy, security, compliance, or areas related to improving overall trustworthy user experience when it comes to technology is a plus.”
IBM | Racial and Social Justice Visiting Scholars Program
Ideal applicants are deeply committed to advancing social justice, thrive on collaboration and working side-by-side with people of all backgrounds and disciplines, and possess good verbal and written communication skills.
Applicants from many different backgrounds and expertise are encouraged to apply. This includes (but is not limited to) lawyers, advocates, grassroots organizers, researchers, and others with unique racial and/or social-justice oriented perspectives.
We also encourage applications from those with previous work or volunteer experience addressing the needs of communities of color, low-income communities, and those otherwise disproportionately affected by discrimination in areas such as healthcare, employment, education, and criminal justice.
IBM Research will consider those looking for employment and those interested in taking a leave of absence or sabbatical from their current employment as applicants to this program.
Required Technical and Professional Expertise:
Undergraduate degree in any field (e.g. law, economics, human rights, sociology, public health, public policy, computer science) or equivalent experience
Demonstrated experience in racial or social-justice oriented topics.
Ability to think strategically about ways to address fundamental societal challenges and define a well-crafted research topic in this space
Facebook | Postdoctoral Researcher, AI and Social Computing (PhD) | [NYC]
“Facebook is seeking a Postdoctoral Researcher to join Facebook AI Research, a research organization focused on making significant progress in AI. Individuals in this role are expected to be recognized experts in machine learning and computational social science. The ideal candidate will have a keen interest in developing new models at the intersection of artificial intelligence, computational social science, and social computing using innovative techniques for learning from graph-structured and temporal data with special attention to responsible AI. Term length would be considered on a case-by-case basis.”
H&M Group | Stakeholder Engagement Lead to the Responsible AI and Data team | [Stockholm]
“Responsible AI & Data is a multi-disciplinary area, bridging the fields of machine learning and data science with law, psychology, human rights, ethics, and philosophy. You will be a core part of the growing Responsible AI & Data team that is leading H&M Group’s work in sustainable and ethical AI and data...Your work as a Stakeholder Engagement Lead is both strategic and operational, and directed towards internal as well as external stakeholders.”
Facebook | Responsible Innovation Program Manager [Contract]
“The role will assist the Responsible Innovation (RI) team with the research and development of issues-oriented guidance documents to help product teams proactively identify and mitigate potential harms to individuals, communities, and society. You will also provide significant operational support as we collaboratively scope and deliver these tools to teams across the company.”
“Conduct and distill research on potential harms associated with a broad range of high-impact social issues encountered by Facebook, Inc. product teams, including free speech, fairness, and wellbeing.”
Microsoft | Senior Program Manager, Responsible AI [Atlanta, GA / Redmond, WA / San Francisco, CA / Washington, D.C.]
“[T]he Responsible AI Program Manager, Sensitive Uses will support the Sensitive Uses program within Microsoft’s Office of Responsible AI. The Sensitive Uses program is responsible for developing governance frameworks and providing timely guidance for Microsoft’s most sensitive AI-driven initiatives, products, partnerships, and customer-facing projects. From collaborating with engineering colleagues on product development to working with research and sales teams from around the world, Sensitive Uses is where responsible AI principles meet real-world practices. The area of focus for this role will be to support Sensitive Uses program management, governance, and policy processes for the responsible development and deployment of AI systems. You will research, define requirements, evaluate use cases, develop strategic initiatives, manage internal programs, and socialize policies that support Microsoft’s ability to develop and deploy AI systems safely and responsibly. You will also work with strategic Microsoft partners developing and deploying next-generation AI technologies.”
Parity | Vice President of Machine Learning
Parity is seeking a machine learning leader to take product ownership over our core product, a large language model called Nenya. Nenya prescribes action plans for algorithmic oversight based on a vast database of relevant academic research.
The ideal candidate has:
5+ years of machine learning experience
Experience with NLP modeling
Deep knowledge of algorithmic fairness
Preferred but not required:
Experience conducting fairness research in an academic or industry setting
Background building ML-infused products
Salesforce | Senior Director, Product Management, Analytics - Tableau [Seattle, Boston, Palo Alto, San Francisco]
“In this role as Senior Director of Product Management on the Analytics team, you will prioritize and orchestrate our key investments across security, privacy, scalability, accessibility, performance, ethics by design, and other key product and platform requirements that help to make Trust a differentiator for Tableau. Your job is to define the vision and roadmap, aligning execution across the entirety of our Analytics portfolio while defining and modeling the best-practices that embody our core principle of customer trust.”
Snap | Engineering Manager - Trust & Safety | [Mountain View or Los Angeles]
“We’re looking for an Engineering Manager, Spam & Abuse to join our Information Security Group! As a member of the Spam and Abuse team, you will continue to grow and lead the team, developing software and services to defend our platform against bot operators, spammers, and bad actors who threaten the integrity of our platform.”
Preferred qualifications:
Experience with either abuse prevention or production machine learning systems
Strong data science skills, or a background in statistics
Strong experience working with Trust & Safety and Customer support organizations at scale
Experience collaborating with internal and external stakeholders at all levels of a company
Spotify | Machine Learning Research Scientist, Algorithmic Impact & Responsibility [NYC or remote; full-time]
“[S]eeking an experienced Researcher to join our Algorithmic Impact & Responsibility effort. This effort focuses on empowering Spotify teams to assess the algorithmic impact of their products on audio culture, avoid algorithmic harms and unintended data or machine learning side effects, and better serve worldwide audiences and creators.”
Spotify | Natural Language Processing Research Scientist [NYC or remote; full-time]
“[S]eeking an outstanding Natural Language Processing/Machine Learning Research Scientist to join our growing Trust & Safety / Algorithmic Impact team.”
“You will develop proactive and scalable signals for detecting abusive content across audio, text, image, and video formats.”
Twitter | Head of Site Integrity, Americas (Trust & Safety) [Based in US]
“You’ll develop a deep understanding of the full spectrum of Site Integrity policies and the threats facing Twitter and the people who use the platform — and help lead our team’s efforts to protect against them. With your guidance, the Site Integrity Americas team will build region-specific solutions, while contributing to our global strategy and org- and company-wide goals.”
“You have multidisciplinary experience, including in technical or quantitative fields, and are comfortable using diverse skills, techniques, and technologies to enable data-informed decision-making.”
Twitter | META (Machine learning Ethics, Transparency, & Accountability) Applied Engineering Manager [San Francisco, CA or Remote]
“You will lead diverse, smart, and driven engineers while balancing fairness considerations with business requirements. You will communicate fearlessly and take an active role in shaping the future of Twitter engineering while embodying our core values. You will work with our leadership, machine learning engineers, researchers, designers and PM to understand implications of automated decision systems and to ensure that existing ML systems are fair and equitable.”
Universities
Georgetown University, Ethics Labs | Assistant Research Professor in design / creative pedagogy [Washington DC]
“Ethics Lab is a unique team at Georgetown University developing creative methods for students, policy teams, and organizations to build ethics into their work. Our approach combines deep expertise in moral philosophy and the creative methods of design to develop educational exercises and facilitate conversations that surface the values at stake in emerging, complex situations. This work is active, multidisciplinary, highly collaborative, and centered on the development of socially responsible practices and policies.”
“Ethics Lab invites applications for a one-year, full-time Assistant Research Professor in design, creative pedagogy, or related fields. We seek candidates with an outstanding record of creative practice, including experience addressing the intersection of design, technology, ethics, and social impact in a professional, educational, or community leadership role. Candidates should demonstrate a strong interest in and capacity for facilitating entry- to upper-level undergraduate courses.”
Harvard Kennedy School (Belfer Center) | Research Assistant [Cambridge, MA]
“The Belfer Center for Science and International Affairs is seeking candidates for the role of Research Assistant to Belfer Director and former Secretary of Defense Ash Carter & The Technology and Public Purpose (TAPP) Project. The Belfer Center is the hub for Harvard University’s research, teaching, and training in international affairs, security, and technology.
The Research Assistant will support Secretary Carter, both in his capacity as the Director of the Belfer Center and the Director of the TAPP Project. Duties will include conducting in-depth research on topics such as the role of technology in changing global affairs & U.S. society, particularly looking at emerging technologies’ impact within three areas: (1) digital technologies, such as social media platforms, artificial intelligence, internet of things, and more; (2) biotechnologies; and (3) future of work. Additionally, this position will cover research for a range of major international security issues.”
Harvard Kennedy School | Program Manager, Public Interest Tech Lab [Cambridge, MA]
“The Public Interest Tech Lab (“the Tech Lab”) undertakes work to harmonize technology and society. Technology designers are the new policymakers...The Tech Lab exposes unforeseen consequences of technology, offers thought leadership and scientific guidance to society’s helpers – civil society, journalists, regulators, advocates, and consumer-facing government – and helps educate a diverse and inclusive workforce of technologists to work in the public interest.”
Harvard Law School, Berkman Klein Center for Internet & Society | Institute Director for “Rebooting Social Media” | [Cambridge, MA]
“The Berkman Klein Center for Internet & Society at Harvard University (BKC) seeks an Institute Director to lead an ambitious, newly launched, three-year “pop-up” Institute for “Rebooting Social Media.” Based at the Berkman Klein Center, the Institute will convene world-class talent from academia, industry, and the public sector to improve the future of social media and online communication. The Institute Director will shape a broadly participatory initiative that collaboratively creates a portfolio of research and projects that better articulate the harms and opportunities of networked communication, prototype new tools and protocols, develop policies enriched by attention to sociotechnical issues, and encourage accurate and accessible narratives about the harms and possibilities of social media.”
The Stanford Institute for Human-Centered Artificial Intelligence (HAI) | Research Associate [Remote now, campus later]
“We are hiring one (1) full time (100% FTE), benefits-eligible, fixed term research associate for two (2) years. Although not guaranteed, there will be potential for an extension/renewal following the initial 2 years contingent upon additional funding commitments and/or programmatic needs.
Reporting to the Research Manager, the Research Associate (Data Analyst 2) will be responsible for developing the global AI vibrancy tool and contributing to the research needs of the AI Index program, including monitoring research on AI measurement, collecting data, consulting AI experts, and writing technical aspects of the annual report.”
UMass Amherst | Researcher/Community Manager for the Media Cloud-International Hate Observatory Project (IHOP) [Amherst, MA]
Use the Media Cloud-IHOP tool suite to explore online extremism and online media coverage of other key issues related to Media Cloud and IHOP.
Closely collaborate with project staff at Northeastern University and Media Ecosystems Analysis Group, and with researchers and developers on the Media Cloud team at Harvard’s Berkman Center for Internet and Society to advance project goals.
Produce original research into online extremism and other topics using Media Cloud tools and publish the results in cooperation with the Media Cloud team and partner organizations.
University of Michigan, Gerald R. Ford School of Public Policy, Science, Technology, and Public Policy (STPP) Program | Professor of Racial Justice in Science and Technology Policy [Ann Arbor, MI]
“The Gerald R. Ford School of Public Policy at the University of Michigan invites applications from well-qualified individuals for a tenure-track or tenured faculty position focused on racial justice in science and technology policy. Applicants should have expertise focused on structural and other forms of racism in science, technology, and associated policies, and interest in how the tools of public policy and democracy can be used to create racially just and equitable science and technology and/or how science and technology can be wielded to address structural racism. Applications are welcome from a range of fields, including computer and data science, engineering fields, science and technology studies, science and technology policy, law, communications, African American studies, ethnic studies, information studies, sociology, and history, with particular interest in candidates whose work transcends traditional disciplinary boundaries.”
University of San Francisco | Senior Director, Data Institute [San Francisco, CA]
“The Senior Director of the Data Institute has primary responsibility for administering the Data Institute, its programs, and a single subsidiary center. The Vice Provost for Institutional Budget, Planning, and Analytics oversees the Senior Director on behalf of the Provost, and conducts an annual evaluation of the Senior Director in accordance with university policies and procedures. This appointment is scheduled to begin in spring 2021. The Senior Director of the Data Institute is responsible for developing, and then advancing, a new strategic plan, as well as overseeing all areas of work within the scope of the Institute. In its foundational documents, the Data Institute is envisioned as an organization that “will organize and enrich the University of San Francisco’s various interdisciplinary commitments to the field of data science” in four primary ways: bridge-building that results in strong and lasting industrial partnerships; the use of data science for community outreach and the common good; the enhancement of educational programming at the traditional undergraduate and graduate levels, as well as the development of executive and professional certificate programming; and the general scholarly advancement of data science.”
Fellowships
Data & Society Research Institute | Postdoctoral Fellow, Trustworthy Infrastructures [New York, NY (Remote at least through Covid-19)]
“...seeking a postdoctoral scholar for a two-year appointment researching alternative sociotechnical systems as part of a new area of focus on trustworthy infrastructures. The fellow will have the opportunity to design, propose, and complete original research projects exploring alternative sociotechnical systems both independently and in collaboration with the Data & Society research team. Above all, we’re looking for a candidate with expertise in how communities use sociotechnical methods to establish “trust and safety”—not as a department at a technology firm, but as a real condition of life. This work could include (but is not limited to) documenting how communities marginalized for their race, gender, or other characteristics have achieved safety on unsafe platforms, created their own platforms with alternative affordances, or created forms of critique through experimental technical practice. For this role, we are particularly interested in scholars who are investigating communities of color, such as Black women gaming communities, who are pioneers in creating safer spaces online.”
“Data & Society sees the investigation of trustworthy infrastructures as a crucial step in documenting the relationships among social media, tech platforms, and the changing norms of digital life. The new infrastructures of communication that now circle the globe are inextricably tied up with democratic participation, and so discovering who they exclude and how alternatives might emerge is a crucial political project. Therefore, we are looking for a researcher who can build on a body of work from scholars such as Ruha Benjamin, Anita Say Chan, Meredith Clark, and Sasha Costanza-Chock to ask: how is trust situated within infrastructures and how can safe infrastructures be designed from the margins? We are excited for someone to take up difficult questions such as: what would it take for marginalized communities to feel and experience trust online? What might these platforms look like if those most disproportionately harmed shaped their design? What would systems of redress and accountability look like?”
TechCongress | Congressional Innovation Fellows [Washington, D.C.]
“[W]e’re looking for top engineers, computer scientists, and other technologists who want to apply their skills to some of the biggest challenges facing our country and help shape technology policy.”
“Even if you’ve never thought you might want to work in government, you should give the Congressional Innovation Fellowship a second look. Training our fellows on how Congress works is what we do best, and through our comprehensive orientation, we provide fellows with an in-depth look at just how policy is made on the Hill, giving fellows the tools they need to succeed from day one.”
Twitter (Site Integrity Team, Trust & Safety) | 2021 Information Operations Fellowship [Washington, D.C.]
“Twitter’s Site Integrity team is launching a fellowship program in 2021 intended to give emerging voices and early career investigators and researchers opportunities to work on challenging integrity problems at scale. The fellowship will offer graduate students, early career professionals and post-doc academics focused on data-driven investigations at the intersections of OSINT, data journalism, social/behavioral science and geopolitics the chance to embed with our Information Operations team for a six month interval, focused on specific projects or areas of inquiry. Are you passionate about contributing to Twitter’s evolving approach to tackling complex threats in a rapidly shifting information landscape? Does your work combine methods drawn from the fields of academia, intelligence analysis, investigations and policy making? Does the idea of supporting the collective health, openness, and civility of public conversation on Twitter interest you? If so, you should join us!”