Lead Applied Research Scientist - Responsible AI - Salesforce
Washington, DC 20001
About the Job
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category
Software Engineering

Job Details

About Salesforce
We’re Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM.
Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way.
And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world.
If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place.

Salesforce's Office of Ethical and Humane Use is hiring an Applied Responsible AI Research Scientist to play a pivotal role in guiding the responsible development of cutting-edge artificial intelligence products.
Working together with the Responsible AI & Tech team, and in close partnership with both the Salesforce AI Research and Frontier AI teams, they will deliver guidance, guardrails, and features that ensure the next generation of AI is designed, developed, and delivered in alignment with Salesforce’s ethical use and responsible AI principles.

The ideal candidate will have experience in artificial intelligence, and specifically in the field of responsible/ethical AI, including generative AI.
They will work across teams to implement responsible AI processes including, but not limited to, bias assessments, accuracy measurements, harms modeling, privacy (memorization, unlearning), security, and AI model trust and safety.

Job Responsibilities
- Develop strategy, alongside Salesforce AI Research, engineering, data science, and product management, to create, develop, and ship cutting-edge generative AI capabilities for Salesforce customers while mitigating ethical risks and capturing ethical opportunities.
- Identify potential negative consequences, identify how those consequences might be mitigated, and drive prioritization of those mitigations into a team’s roadmap. Conversely, identify positive ethical impacts in a roadmap, specification, or design, and ways to amplify them in the product.
- Conduct trust and safety work, as well as CRM benchmarking against other models and against different versions of the same model.
- Develop solutions for real-world, large-scale problems.
- As needed, lead teams to deliver on more complex pure and applied research projects.

Minimum Requirements:
- Master's degree (or foreign degree equivalent) in Computer Science, Engineering, Information Systems, Data Science, Social or Applied Sciences, or a related field.
- 5-8 years of relevant experience in AI ethics, AI research, security, Trust & Safety, or similar roles.
- Additional experience researching responsible generative AI challenges and risk mitigations.
- Expertise in one of the following areas: alignment, adversarial robustness, interpretability/explainability, or fairness in generative AI.
- Proven leadership, organizational, and execution skills.
- Passion for developing cutting-edge AI ethics technology and deploying it through a multi-stakeholder approach.
- Experience working in a technical environment with a broad, cross-functional team to drive results, define product requirements, coordinate resources from other groups (design, legal, etc.), and guide the team through key milestones.
- Proven ability to implement, operate, and deliver results via innovation at a large scale.
- Excellent written and oral communication skills, as well as interpersonal skills, including the ability to articulate technical concepts to both technical and non-technical audiences.

Preferred Requirements:
- 8-10 years of relevant experience in AI ethics, AI research, security, Trust & Safety, or similar roles.
- Advanced degree in Computer Science, Human-Computer Interaction, Engineering, Data Science, or quantitative Social Sciences.
- Published research on algorithmic fairness, accountability, and transparency, especially around detecting and mitigating bias, or on AI safety.
- Full-time industry experience in deep learning research/product.
- Strong experience building and applying machine learning models for business applications.
- Strong programming skills.
- Experience implementing high-performance, large-scale deep learning systems.
- Thoughtful about AI impacts and ethics.
- Fantastic problem solver; ability to solve problems the world has not solved before.
- Presented a paper at NeurIPS, FAccT, AIES, or similar conferences.
- Works well under pressure, and is comfortable working in a fast-paced, ever-changing environment.

Accommodations
If you require assistance due to a disability applying for open positions, please submit a request via this Accommodations Request Form.

Posting Statement
At Salesforce we believe that the business of business is to improve the state of our world.
Each of us has a responsibility to drive Equality in our communities and workplaces.
We are committed to creating a workforce that reflects society through inclusive programs and initiatives such as equal pay, employee resource groups, inclusive benefits, and more.
Learn more about Equality and explore our company benefits.

Salesforce is an Equal Employment Opportunity and Affirmative Action Employer.
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status.
Salesforce does not accept unsolicited headhunter and agency resumes.
Salesforce will not pay any third-party agency or company that does not have a signed agreement with Salesforce.

Salesforce welcomes all.

Pursuant to the San Francisco Fair Chance Ordinance and the Los Angeles Fair Chance Initiative for Hiring, Salesforce will consider for employment qualified applicants with arrest and conviction records.

For New York-based roles, the base salary hiring range for this position is $165,600 to $323,400.
For Washington-based roles, the base salary hiring range for this position is $151,800 to $296,400.
For California-based roles, the base salary hiring range for this position is $165,600 to $323,400.

Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience.
Certain roles may be eligible for incentive compensation, equity, and benefits.
More details about our company benefits can be found at the following link:
https://www.salesforcebenefits.com

Summary
Location: California - San Francisco; California - Palo Alto; New York - New York; Washington - Seattle; Washington - Bellevue
Type: Full time
Source: Salesforce