Security Research Intern - AI Focus

Microsoft

Software Engineering, Data Science

Tel Aviv-Yafo, Israel · Herzliya, Israel

Posted on Apr 28, 2026
Overview

Come build community, explore your passions and do your best work at Microsoft with thousands of University interns from every corner of the world. This opportunity will allow you to bring your aspirations, talent, potential - and excitement for the journey ahead.

As an AI Security Research Intern in the Autonomous Attack Disruption team, you will join the frontlines of Microsoft Defender's mission to stop attacks in near real-time. Under the mentorship of experienced researchers, you will use AI to analyze real-world attacker TTPs and build systems, including agentic pipelines and LLM-based threat analysis, that autonomously detect and disrupt attacks before adversaries reach their goals.

This role requires a blend of applied security research expertise, AI fundamentals, and engineering skills to deliver production-ready protection at a global scale. This is your chance to see your AI-powered research transformed into autonomous defense systems that protect millions of users.

At Microsoft, interns are embedded directly into research cycles, working on high-stakes projects that solve real-world security challenges. You will collaborate with global teams to translate complex research into automated protection logic that stops attackers in near real-time. You will work at the intersection of large language models, agentic AI frameworks, and security research - an area where the field is being defined in real time. You'll be empowered to build community, explore your passions, and achieve your goals. This is your chance to bring your solutions and ideas to life while working on cutting-edge technology.



Responsibilities
  • Investigate real-world advanced attacker TTPs and apply AI techniques (LLMs, agentic workflows) to support the development of high-fidelity, AI-augmented protection logic across complex cross-domain kill-chains.
  • Apply security expertise combined with AI-driven methods to analyze massive telemetry sets using big-data query languages (KQL), reasoning over data to identify novel malicious patterns and engineer evidence-based detection rules.
  • Contribute to the design and implementation of AI-powered capabilities that autonomously disrupt sophisticated threats in near real-time.
  • Assist in the refinement of protection coverage by analyzing real-world attack telemetry to improve the accuracy and performance of existing detection logic.
  • Contribute to a strategic feedback loop by documenting findings from attack data analysis to improve overall protection logic and system-wide security posture.
  • Partner with engineering, product, and other research teams to translate research insights into production-ready AI systems, helping to validate protection concepts, from prompt engineering to model evaluation, and ship them at a global scale.
  • Explore and prototype with emerging AI tools and frameworks to accelerate security research workflows and build reusable AI-driven research tooling.


Qualifications

Required Qualifications

  • Must have at least three additional semesters before graduation (graduation date Summer 2027 or later).
  • Available to work 3 days a week.
  • Proven hands-on experience in security research, threat hunting, or detection engineering roles (e.g., from specialized military service, previous internships, or a significant portfolio of independent research/investigation).
  • Proficiency in Python or similar languages, with a focus on writing clean, functional, and scalable code.
  • Hands-on experience with AI technologies, whether through building ML models, working with LLMs and prompt engineering, experimenting with agentic frameworks, or applying AI to academic or personal projects - and a genuine passion for using AI to solve real-world problems.

Preferred Qualifications

  • Currently pursuing a Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Data Science, AI/Machine Learning, or a related field.
  • Deep understanding of the modern threat landscape, including hands-on familiarity with lateral movement techniques, credential theft, or cloud-native attack vectors.
  • Previous experience reasoning over large-scale datasets using big-data query languages (KQL/Kusto, SQL, or similar) to identify novel malicious patterns and drive evidence-based research decisions.
  • A proven "Hunter" mindset with a track record of identifying novel malicious patterns and converting them into actionable alerts.
  • Experience with LLMs, prompt engineering, or agentic AI frameworks (e.g., LangChain, Semantic Kernel, AutoGen) — academic projects or personal exploration count.
  • Interest in the intersection of AI and adversarial behavior - building autonomous, high-stakes decision systems for detection, analysis, and disruption.

This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.