Senior Software Engineer - Performance Tooling
Microsoft
The Artificial Intelligence (AI) Frameworks team at Microsoft develops AI software that enables running AI models everywhere: from the world's fastest AI supercomputers to servers, desktops, mobile phones, Internet of Things (IoT) devices, and web browsers. We collaborate with our hardware teams and partners, both internal and external, and operate at the intersection of AI algorithmic innovation, purpose-built AI hardware, systems, and software. We are a team of highly capable and motivated people who pride themselves on a collaborative and inclusive culture. We own inference performance for OpenAI and other state-of-the-art large language models (LLMs), and we work directly with OpenAI on the models hosted on the Azure OpenAI service, which serves some of the largest workloads on the planet, with trillions of inferences per day across major Microsoft products, including Office, Windows, Bing, SQL Server, and Dynamics.
As a Senior Software Engineer - Performance Tooling on the team, you will have the opportunity to work on multiple levels of the AI software stack, including the fundamental abstractions, programming models, compilers, runtimes, libraries, and application programming interfaces (APIs) that enable large-scale training and inference of models. You will benchmark OpenAI and other LLMs for performance on graphics processing units (GPUs) and Microsoft hardware; debug, optimize, and monitor performance; and enable these models to be deployed in the shortest time and on the least hardware possible, helping achieve Microsoft Azure's capital expenditure (capex) goals.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities
- Work across multiple layers of the AI software stack (abstractions, programming models, compilers, runtimes, libraries, and APIs) to enable large-scale model training and inference.
- Benchmark OpenAI and other LLMs for performance on GPUs and Microsoft hardware.
- Debug, profile, and optimize performance for training/inference workloads on central processing units (CPUs) and GPUs.
- Monitor performance regressions and drive continuous improvements to reduce time-to-deploy and hardware footprint.
- Collaborate across teams of researchers and engineers to deliver scalable, production-ready AI performance improvements.
Qualifications
Required/Minimum Qualifications:
- Bachelor's Degree in Computer Science or related technical field AND 4+ years of technical engineering experience with coding in languages including, but not limited to, C++ or Python, OR equivalent experience.
Other Requirements:
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. This includes passing the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred Qualifications:
- Master's Degree in Computer Science or related technical field AND 6+ years of technical engineering experience with coding in languages including, but not limited to, C++ or Python,
- OR Bachelor's Degree in Computer Science or related technical field AND 8+ years of technical engineering experience with coding in languages including, but not limited to, C++ or Python,
- OR equivalent experience.
- 4+ years of practical experience working on high-performance applications and on performance debugging and optimization on CPUs/GPUs.
- Experience in deep neural network (DNN)/LLM inference, experience with one or more deep learning frameworks such as PyTorch, TensorFlow, or ONNX Runtime, and familiarity with CUDA, ROCm, or Triton.
- Technical background and a solid foundation in software engineering principles, computer architecture, GPU architecture, and hardware neural network acceleration.
- Experience in end-to-end performance analysis and optimization of state-of-the-art LLMs and high-performance computing (HPC) applications, including proficiency with GPU profiling tools.
- Cross-team collaboration skills and the desire to collaborate in a team of researchers and developers.
- Ability to independently lead projects.
#AIInfra
Software Engineering IC4 - The typical base pay range for this role across the U.S. is USD $119,800 - $234,700 per year. A different range applies to specific work locations within the San Francisco Bay Area and the New York City metropolitan area; the base pay range for this role in those locations is USD $158,400 - $258,000 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
https://careers.microsoft.com/us/en/us-corporate-pay
This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.