AI Data Protection Specialist
Applied Materials
Who We Are
Applied Materials is a global leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. We design, build and service cutting-edge equipment that helps our customers manufacture display and semiconductor chips – the brains of devices we use every day. As the foundation of the global electronics industry, Applied enables the exciting technologies that literally connect our world – like AI and IoT. If you want to push the boundaries of materials science and engineering to create next generation technology, join us to deliver material innovation that changes the world.
What We Offer
Location:
Bangalore, IND

You’ll benefit from a supportive work culture that encourages you to learn, develop, and grow your career as you take on challenges and drive innovative solutions for our customers. We empower our team to push the boundaries of what is possible while learning every day in a supportive leading global company. Visit our Careers website to learn more.
At Applied Materials, we care about the health and wellbeing of our employees. We’re committed to providing programs and support that encourage personal and professional growth and care for you at work, at home, or wherever you may go. Learn more about our benefits.
Job Description
The AI Data Protection Specialist is responsible for ensuring that data used, produced, or influenced by AI systems is handled securely, ethically, and in compliance with organizational policies and regulatory requirements. This role sits at the intersection of cybersecurity, data privacy, and AI governance. Working closely with Information Security, Data Governance, Legal, AI Security, and AI/ML teams, the specialist designs controls, assesses risks, and implements frameworks that protect sensitive information across the AI lifecycle, helping the organization deploy AI solutions safely and responsibly.
The ideal candidate brings a combination of AI/ML technical understanding, data protection expertise, and cybersecurity experience, with the ability to translate AI-specific data risks into actionable security controls and guardrails.
Key Responsibilities
- Develop and maintain AI data protection policies, standards, and procedures across the enterprise.
- Define and enforce policies for the protection of training data, prompt data, embeddings, model outputs, and AI-related metadata.
- Enforce data governance and protection controls for datasets used in training, testing, and deploying AI models.
- Maintain documentation for AI risk assessments, data flows, and model governance.
- Implement data‑handling standards for AI systems, including anonymization, pseudonymization, redaction, tokenization, and minimization techniques.
- Monitor and assess data leakage risks related to model outputs, prompts, fine‑tuning data, and embedded knowledge.
- Ensure protection against data leakage through AI interfaces (chat, retrieval-augmented generation, etc.).
- Establish controls to detect and respond to AI-related data incidents, model misuse, and unauthorized model use.
- Perform investigations into suspicious AI activity, data exfiltration attempts, or anomalies in model behavior.
- Partner with Data Governance, Privacy, AI Security, and AI/ML teams to ensure compliance with privacy and retention requirements.
- Work with cloud, data, and application security teams to embed security into AI environments (Azure AI, OpenAI, AWS Bedrock, GCP Vertex, etc.).
- Assess third-party AI vendors, APIs, and tools for security posture and data-handling practices.
Technical Skillset
- Deep knowledge of data protection concepts: encryption, secure enclaves, RAG isolation, synthetic data, DLP, access control frameworks, data lifecycle.
- Hands-on experience with cloud AI platforms (Azure OpenAI, AWS Bedrock, GCP Vertex).
- Knowledge of AI-specific attack vectors (prompt injections, model poisoning, adversarial examples, training data extraction).
- Knowledge of AI/ML concepts, architecture, and data pipelines.
- Knowledge of GDPR, CCPA/CPPA (Canada), HIPAA, PCI, or equivalent data protection regulations.
- Familiarity with emerging AI regulations (EU AI Act, U.S. Executive Orders on AI, Canada AIDA).
Soft Skillset
- Strong analytical and risk assessment skills
- Excellent communication and documentation abilities
- Ability to work across multiple teams, with strong stakeholder management skills
- Problem-solving mindset with attention to detail
- Understanding of ethical AI principles and responsible data handling
- Proactive, analytical, and able to drive cross-functional alignment.
Qualifications
- Bachelor’s in Computer Science, Cybersecurity, Data Science, or related field.
- 5–10 years in security roles (AppSec, Cloud Sec, Data Protection, or Threat Modeling).
- Experience in data protection, information security, and cloud platforms.
- Certifications such as CISSP, CIPP/E, CISM, CDPSE, Azure/AWS security, or AI ethics/AI governance badges.
Additional Information
Time Type:
Full time
Employee Type:
Assignee / Regular
Travel:
Yes, 10% of the Time
Relocation Eligible:
Yes
Applied Materials is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, ancestry, religion, creed, sex, sexual orientation, gender identity, age, disability, veteran or military status, or any other basis prohibited by law.