Software Engineer 2

Microsoft

Software Engineering
Redmond, WA, USA
USD $100,600 - $199,000 / year
Posted on Feb 27, 2026
Overview

Microsoft's Azure AI Inference platform is the next-generation cloud business positioned to address the growing AI market. We are on the verge of an AI revolution and have a tremendous opportunity to empower our partners and customers to harness the full power of AI responsibly. We offer a fully managed AI inference platform to accelerate the research, development, and operation of AI-powered intelligent solutions at scale. This team owns the hosting, optimization, and scaling of the inference stack for all Azure AI Foundry models, including the latest from OpenAI, Grok, DeepSeek, and other OSS models.

Do you want to join a team entrusted with serving all internal and external ML workloads, solving real-world inference problems for state-of-the-art large language models (LLMs) and multi-modal generative AI models from OpenAI and other model providers? We already serve billions of inferences per day across the most cutting-edge AI scenarios in the industry. You will join the AI Core Inferencing team, influencing the overall product, driving new features and platform capabilities from preview to General Availability, and working on many exciting problems at the intersection of AI and cloud.

We’re looking for a passionate Software Engineer 2 to drive the design, optimization, and scaling of our inference systems. In this role, you’ll lead engineering efforts to ensure our largest models run efficiently in high-throughput, low-latency environments. You will get to work on and influence multiple levels of the AI Inference data plane stack.

We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.



Responsibilities
  • Design and implement core inference infrastructure for serving frontier AI models in production.

  • Identify and drive improvements to the end-to-end inference performance and efficiency of state-of-the-art LLMs and GenAI models from OpenAI, Anthropic, and xAI hosted on AI Foundry.

  • Design and implement efficient load scheduling and balancing strategies by leveraging key insights and features of the model and workload.

  • Scale the platform to support growing inferencing demand while maintaining high availability.

  • Deliver critical capabilities required to serve the latest Gen AI models, such as GPT-5, real-time audio, and Sora, and enable fast time to market for them.

  • Build general-purpose features that serve the needs of customers such as GitHub, M365, Microsoft AI, and third-party companies.

  • Collaborate with both internal and external partners.



Qualifications

Required / Minimum Qualifications

  • Bachelor’s degree in Computer Science or a related technical field AND 2+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, or Golang, OR equivalent experience.

Other Requirements

  • Ability to meet Microsoft, customer, and/or government security screening requirements for this role. These requirements include, but are not limited to, the following specialized security screenings:
    • Microsoft Cloud Background Check: This position requires passing the Microsoft Cloud Background Check upon hire or transfer and every two years thereafter.

Preferred Qualifications

  • Technical background with a solid foundation in software engineering principles, distributed computing, and system architecture.
  • Experience working on high-scale, reliable online systems.
  • Experience with real-time online services requiring low latency and high throughput.
  • Experience working with Layer 7 (L7) network proxies and gateways.
  • Knowledge of network architecture and concepts, including HTTP and TCP protocols, authentication, and session management.
  • Knowledge and experience with OSS, Docker, Kubernetes, C++, Golang, or equivalent programming languages.
  • Cross-team collaboration skills and the desire to collaborate in a team of researchers and developers.
  • Ability to independently lead projects.
#AIPLATFORM #AzureAI #CoreAI #GenAI #AIInference


Software Engineering IC3 - The typical base pay range for this role across the U.S. is USD $100,600 - $199,000 per year. A different range applies to specific work locations: within the San Francisco Bay Area and the New York City metropolitan area, the base pay range for this role is USD $131,400 - $215,400 per year.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
https://careers.microsoft.com/us/en/us-corporate-pay


This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.




Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.