Senior Software Engineer, Core ML
Minimum qualifications:
- Bachelor’s degree or equivalent practical experience.
- 5 years of experience testing, maintaining, or launching software products, and 1 year of experience with software design and architecture.
- 5 years of coding experience in one or more of the following languages: C, C++, Java, or Python.
- 5 years of experience with software development in one or more programming languages, and with data structures/algorithms.
- Experience with large language models (LLMs), machine learning algorithms, architecture, optimization, and infrastructure, large-scale distributed systems, inference, performance analysis and optimization, efficiency measurement, Python, and kernel programming.
Preferred qualifications:
- Experience with TPU architecture and programming.
- Knowledge of distributed systems and technologies such as disaggregated serving and speculative decoding.
- Understanding of open-source projects, particularly in Machine Learning (ML) or infrastructure.
- Understanding of inference solution performance benchmarking and optimization.
About the job
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward.

With your technical expertise you will manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions.
In this role, you will be responsible for executing our technical goals for third-party inference solutions. You will also contribute to building and optimizing open-source and internal repositories, working closely with leading engineers and researchers to deliver exceptional performance and customer experience.

The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world.
We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network - as we shape the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud’s Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.
The US base salary range for this full-time position is $166,000-$244,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Develop and optimize Machine Learning (ML) inference solutions for both TPUs and GPUs, contributing to open-source projects such as vLLM.
- Implement TPU-specific backends for vLLM and related projects.
- Contribute to internal and Google Cloud Platform (GCP) specific repositories for inference optimization and integration with cloud services.
- Work on optimizing model serving infrastructure and performance, focusing on cost efficiency.
- Debug and troubleshoot inference service issues, developing debugging tools.
Information collected and processed as part of your Google Careers profile, and any job applications you choose to submit is subject to Google's Applicant and Candidate Privacy Policy.
Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.
If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.
To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.