Principal Software Engineer (GPU inference)

Microsoft

Software Engineering
Beijing, China · Suzhou, Jiangsu, China
Posted on Dec 12, 2025
Overview

Online Advertising is one of the fastest‑growing businesses on the Internet. Microsoft Ads powers large‑scale deep learning workloads across Search, Recommendations, Click Prediction, and Relevance. Deep learning sits at the core of how Ads drives business performance and delivers high‑quality user experiences. We are building a unified, high‑performance GPU inference platform to serve all Ads deep learning models at extreme scale. This platform serves billions of requests daily, with strict requirements on latency, throughput, reliability, and cost.

We are seeking a Principal Software Engineer with deep expertise in GPU inference systems, kernel-level optimizations, and large‑scale distributed serving. You will be a senior technical leader driving the architecture, performance, and reliability of the next‑generation GPU serving stack for Ads.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S., or within a 25-mile commute of a non-U.S., country-specific location, are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.



Responsibilities
  • Design and build a unified GPU inference platform for Ads, ensuring scalability, reliability, and efficiency.
  • Optimize model inference through batching, quantization, scheduling, memory management, runtime optimization, kernel-level improvements, and other performance techniques.
  • Develop, optimize, and maintain CUDA kernels and GPU operators for high-throughput, low-latency production inference.
  • Collaborate with algorithm/model teams to co-design serving‑aware model architectures and optimizations.
  • Profile and improve end‑to‑end system performance: GPU utilization, concurrency, memory footprint, throughput and latency.
  • Provide senior technical leadership across teams; elevate engineering best practices and influence long‑term technical strategy.


Qualifications

Required Qualifications:

  • Bachelor's Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
    • OR equivalent experience.
  • 6+ years' experience in high-performance systems, distributed systems, or ML infrastructure.
  • Hands-on experience with GPU inference runtimes such as TensorRT, ONNX Runtime, Triton, TRT‑LLM, vLLM.
  • Experience building and optimizing performance‑critical production systems.

Other Requirements:
Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:

  • Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred Qualifications:

  • Master's Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
    • OR Bachelor's Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
    • OR equivalent experience.
  • Expertise in CUDA kernel development and GPU performance engineering.
  • Familiarity with LLM/Transformer inference optimizations:
    • sharding, tensor/KV-cache parallelism, paged attention, continuous batching, quantization (FP8/AWQ), hybrid CPU–GPU orchestration.

#MicrosoftAI


This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.




Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.