Head of Novel AI Testing, Trust and Safety - Google
Washington, DC 20022
About the Job
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 13 years of experience in trust and safety, product policy, privacy and security, legal, compliance, risk management, intelligence, content moderation, red teaming, AI testing, adversarial testing, or a similar field.
- 2 years of experience in people management, leading teams.
- Experience with AI systems or machine learning.
Preferred qualifications:
- Master's degree or PhD in a relevant field.
- 4 years of experience in people management.
- Experience working with GenAI products.
- Familiarity with SQL, Python, or equivalent.
- Ability to think strategically and identify emerging threats and vulnerabilities.
- Excellent investigative skills with a focus on data analysis and using data in storytelling.
- Excellent problem-solving and critical thinking skills, with attention to detail in an ever-changing environment.
Job Description:
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads.
As the Head of Trust and Safety Responsible AI Novel Testing, you will lead a team of experts in developing structured ways to test new generative AI products and features against safety, neutrality, and fairness standards. You will work closely with your team to define and lead the structured testing strategy, and to deliver actionable insights reports that drive mitigation steps for novel generative AI product and feature launches where testing standards and methodologies are not yet defined. You will be a thought leader in defining Google’s strategy for launching AI products responsibly, and will build partnerships across Product, Engineering, and Trust and Safety through effective relationship building, influence, and communication.
Compensation:
The US base salary range for this full-time position is $198,000-$282,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Equal Opportunity Employer:
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements.
Responsibilities:
- Work with your team and stakeholders to develop and implement strategic programs to enhance AI safety at Google.
- Lead and motivate a team of Trust and Safety Analysts, ensuring they are empowered, supported, and engaged.
- Be a key influencer within the team and beyond, providing thought leadership and expertise in AI safety.
- Work with sensitive content or situations, which may involve exposure to graphic, controversial, or upsetting topics or material.