Principal Applied Scientist
Microsoft
As the advertising ecosystem expands, sophisticated adversarial actors are leveraging generative AI, automation, and distributed infrastructure to bypass safety controls. The Ads Trust and Safety team requires a Principal Applied Scientist to contribute to the research and technical strategy of the Threat Modeling team. We are looking for a security domain expert who can advance the state of the art in threat modeling and adversarial defense. This role involves transitioning trust mechanisms from static verification to dynamic, behavioral-based integrity systems. You will architect solutions to detect and neutralize high-complexity fraud vectors (e.g., phishing, payment fraud, cloaking, malware distribution, token misuse, and authentication abuse), ensuring the ads platform remains safe for users, advertisers, and publishers.
The primary success metric is the robust identification and mitigation of advanced abuse vectors with minimal friction for legitimate advertisers and minimal impact on ad-serving latency.
Responsibilities
- Strategic Threat Modeling: Develop and maintain comprehensive adversarial frameworks to map the lifecycle of emerging threats, from account compromise (ATO) to malicious payload delivery.
- Evolution of Advertiser Trust: Advance continuous, signal-based security protocols. Research and implement behavioral biometrics and proof-of-liveness models to detect synthetic identities and coordinated fraud rings.
- Adversarial Research: Proactively identify "unknown unknown" vulnerabilities through red-teaming and exploratory data analysis, developing models to predict attacker behavior before widespread exploitation.
- Technical Leadership: Drive the technical roadmap for integrity and security, mentoring senior engineers and influencing cross-functional stakeholders on security investment priorities.
Qualifications
Required Qualifications
- Bachelor’s, Master’s, or PhD degree in Computer Science, Cybersecurity, Mathematics, or a related field, with 10+ years of related experience.
- Deep technical expertise in Cybersecurity, Anti-Abuse, or Adversarial Machine Learning.
- Strong programming skills in C++ or Python (at least one is required), with experience in building production-quality security or ML systems.
- Hands-on experience in one or more of the following:
  - Web security standards and authentication protocols (OAuth, OIDC).
  - Malware analysis, de-obfuscation, or reverse engineering.
  - Building fraud detection models at scale.
- Proven ability to design and implement defense mechanisms against complex abuse vectors (e.g., botnets, synthetic identity, evasion/cloaking).
- Strong communication and collaboration skills, with experience articulating complex security risks to business and product leadership.
Preferred Qualifications
- 5+ years of experience in an Adversarial/Trust & Safety role at a major internet platform or cybersecurity firm.
- Familiarity with the Ad-Tech stack (RTB, OpenRTB) and associated fraud incentives.
- Background in Graph Neural Networks (GNNs) for fraud ring detection or behavioral biometrics.
- Track record of impact via security research publications, patents, or contributions to industry security standards.
This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.