Staff Software Engineer, Cloud ML Compute Services - Google
Seattle, WA 98101
About the Job
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 8 years of experience in software development and with data structures/algorithms.
- 5 years of experience testing and launching software products, and 3 years of experience with software design and architecture.
- 5 years of experience with machine learning algorithms, tools, and libraries.
Preferred qualifications:
- 3 years of experience in a technical leadership role, leading project teams and setting technical direction.
- Experience building high-quality and reusable AI infrastructure or compilers, or with performance engineering.
- Experience with stack-spanning systems and tools, from high-level Python to low-level C++.
- Understanding of the full user experience.
- Interest in keeping up to date on the latest AI models, tools, and techniques.
Google Cloud's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another.
We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day.
As a software engineer, you will work on a specific project critical to Google Cloud's needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve.
You will anticipate our customer needs and be empowered to act like an owner, take action and innovate.
We need our engineers to be versatile, display leadership qualities, and be enthusiastic about taking on new problems across the full stack as we continue to push technology forward.

Google Cloud provides organizations with leading infrastructure, platform capabilities, and industry solutions.
We deliver enterprise-grade cloud solutions that leverage Google’s cutting-edge technology to help companies operate more efficiently, adapt to changing needs, and give customers a foundation for the future.
The Cloud ML Compute Services team builds and supports the Google Cloud Platform (GCP) Cloud TPU and GPU services and related ML models/frameworks.
It provides ML infrastructure customers with large-scale, cloud-based access to Google’s ML supercomputers to run training and inference workloads using PyTorch and JAX.
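
As a concrete illustration of that access path, the following is a minimal sketch (not part of the posting) of how a Cloud TPU customer might confirm from JAX that the accelerators are visible and run a small jit-compiled computation on them; it assumes the jax[tpu] package is installed on a Cloud TPU VM.

    import jax
    import jax.numpy as jnp

    # List the accelerators JAX can see; on a Cloud TPU VM this prints TpuDevice entries.
    print(jax.devices())

    # A jit-compiled function runs on the default accelerator (the TPU when one is present).
    @jax.jit
    def scaled_sum(x):
        return (2.0 * x).sum()

    print(scaled_sum(jnp.arange(8.0)))  # 56.0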
The machine learning stack team is responsible for the ML frameworks, tools, models, and processes needed to achieve scale and performance for ML workloads in Google Cloud.

Google Cloud accelerates every organization’s ability to digitally transform its business and industry.
We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably.
Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

The US base salary range for this full-time position is $189,000-$284,000 + bonus + equity + benefits.
Our salary ranges are determined by role, level, and location.
The range displayed on each job posting reflects the minimum and maximum target salaries for the position across all US locations.
Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.
Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits.
Learn more about benefits at Google.
Responsibilities
- Work across the tech stack to improve LLM training and inference performance on TPU.
- Add new features and publish high-performance open-source kernels.
- Partner with the XLA and PyTorch teams to design and implement new PyTorch features, and collaborate directly with Cloud TPU power users to solve tricky problems and enable new workloads.
- Create smooth interoperation between JAX and PyTorch (e.g., for data loading, hybrid models, or portability); see the interop sketch after this list.
- Implement and benchmark reference PyTorch models and techniques, and use the results to inform new PyTorch features and improvements.
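
For the JAX/PyTorch interoperation item above, the following is a minimal sketch (not part of the posting) of exchanging tensors between the two frameworks via the DLPack protocol, one common route for data loading or hybrid models. It assumes jax and torch are installed, keeps the data on CPU so both frameworks can see the same device, and the exact DLPack entry points vary slightly across library versions.

    import jax
    import jax.numpy as jnp
    from jax import dlpack as jax_dlpack
    import torch
    from torch.utils import dlpack as torch_dlpack

    # A JAX array, explicitly placed on CPU so a CPU-only PyTorch build can consume it.
    x_jax = jax.device_put(jnp.arange(8, dtype=jnp.float32), jax.devices("cpu")[0])

    # JAX -> PyTorch: export a DLPack capsule and wrap it as a torch.Tensor.
    x_torch = torch_dlpack.from_dlpack(jax_dlpack.to_dlpack(x_jax))

    # Any PyTorch computation, e.g. one stage of a hybrid model.
    y_torch = x_torch * 2

    # PyTorch -> JAX: the same protocol in the other direction.
    y_jax = jax_dlpack.from_dlpack(torch_dlpack.to_dlpack(y_torch))

    print(y_jax)  # [ 0.  2.  4.  6.  8. 10. 12. 14.]

Where the consumer framework can alias the producer's buffer, this exchange avoids a copy, which is what makes DLPack a practical bridge for the data-loading and hybrid-model cases mentioned above.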
Source: Google