Software Engineer II
Microsoft
Redmond, Washington, United States
Overview
Microsoft’s Commerce + Ecosystems (C+E) organization empowers all Microsoft businesses and third parties to bring their products and services to market across all channels and clouds. Within C+E, the Platform Data and Experiences (PDX) team’s mission is to produce accurate, reliable, and efficient records of charge, empower customers and partners through insights and analytics, and elevate critical commerce functions through code, data, models, insights, and email platforms.
Aligned with this mission, the Programmability Insights & Engineering (PIE) team delivers critical commerce experiences through deep integration across charges, billing, and pricing platforms.
We innovate in distributed data storage, intelligent routing, and scalable data modeling that together power a broad range of customer scenarios. Our work unlocks business value through advanced data models, leveraging enriched data stores, anomaly detection, and customer benefit insights.
Join us to be part of a dynamic, inclusive team that values diverse perspectives, empowers individuals to drive meaningful change, and is shaping the future of data platforms.
As a Software Engineer II on the team, you will design, build, and operate scalable data pipelines that run anomaly detection on commerce datasets such as usage, charges, pricing, invoices, credits, and balances. We support commerce with a focus on reliability, security, and correctness. You’ll apply the latest AI-assisted engineering tools to accelerate problem solving and delivery, move with urgency, and balance speed with quality and safety. You’ll grow by seeking feedback, sharing ideas, and learning from diverse perspectives, living Microsoft’s values of respect, integrity, and accountability so everyone can thrive.
Qualifications
Required Qualifications
- Bachelor's Degree in Computer Science or related technical field AND 2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
- OR equivalent experience.
- 2+ years of experience in software/data engineering.
- 1+ years of experience authoring big data ETL processing on a cloud service using Spark (Scala) or other big data technologies.
Other Requirements:
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
- Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.
Preferred Qualifications
- Demonstrated experience leveraging AI tools and technologies to enhance engineering effectiveness, coupled with a curiosity and commitment to continuous learning in the field of Artificial Intelligence.
- Proficiency in Apache Spark (PySpark or Scala) and distributed data processing.
- Experience with schema design and dimensional data modeling.
Software Engineering IC3 - The typical base pay range for this role across the U.S. is USD $100,600 - $199,000 per year. A different range applies to specific work locations within the San Francisco Bay Area and the New York City metropolitan area; the base pay range for this role in those locations is USD $131,400 - $215,400 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay
Microsoft will accept applications for the role until October 31st, 2025.
#CPXJOBS
Responsibilities
- Design, develop, and maintain Spark-based data pipelines on Azure Synapse for large-scale anomaly detection and reporting.
- Implement distributed data processing solutions leveraging Spark for batch and streaming workloads.
- Collaborate with product managers, data scientists, and engineering teams to deliver end-to-end solutions.
- Coordinate with data domain teams to understand datasets and onboard them to the anomaly detection platform.
- Ensure data quality, integrity, and compliance across multiple sources.
- Optimize Spark jobs for performance, scalability, and cost efficiency in cloud environments.
- Contribute to code reviews, design discussions, and architecture improvements.
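To give candidates a feel for the anomaly detection work described above, here is a deliberately simplified, non-authoritative sketch. It uses plain Python with a basic z-score rule and a hypothetical `flag_anomalies` helper; on the team, equivalent logic would run as a distributed Spark job over much larger commerce datasets.

```python
import statistics

def flag_anomalies(daily_charges, z_threshold=2.5):
    """Flag days whose charge deviates more than z_threshold
    standard deviations from the mean (simple z-score rule).

    daily_charges: list of (day, amount) tuples.
    Returns the list of flagged day identifiers.
    """
    amounts = [amount for _, amount in daily_charges]
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        # All values identical: nothing can be anomalous.
        return []
    return [day for day, amount in daily_charges
            if abs(amount - mean) / stdev > z_threshold]

# Example: nine ordinary days of $100 charges, then a sudden spike.
charges = [(day, 100.0) for day in range(1, 10)] + [(10, 10000.0)]
print(flag_anomalies(charges))  # the spike on day 10 is flagged
```

A production pipeline would replace the z-score rule with more robust detectors and compute the statistics with Spark aggregations, but the shape of the problem, scoring each record against a baseline and surfacing outliers, is the same.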