Digital Factory - Data Engineer - Senior Associate

EY

Software Engineering, Data Science
Luxembourg City, Luxembourg
Posted on Mar 2, 2026

Build a Better Working World with EY Digital Factory

At EY, we’re committed to shaping the future with confidence. As part of our Digital Factory team, you’ll work at the intersection of technology and innovation, helping to deliver cutting-edge solutions that transform businesses globally.

This is your opportunity to take on a leadership-oriented technical role, guiding data engineering practices while remaining hands-on in building the secure, scalable, and reliable data pipelines and services that power EY’s digital platforms.

Role Overview
The Data Engineer designs, builds, and operates scalable, reliable, and secure data platforms and pipelines that power EY’s digital products, analytics, and data‑driven use cases.

The role focuses on engineering robust data foundations—including ingestion, transformation, storage, and access—using modern data engineering practices. You will collaborate closely with Backend Engineers, Analytics, Data Science, Product, and Architecture teams to deliver trusted, high‑quality data assets in an enterprise and regulated environment.

Responsibilities

Data Platform & Pipeline Engineering

  • Design, implement, and maintain batch and streaming data pipelines using Python and modern data frameworks.
  • Build reliable ingestion and transformation processes from internal and external data sources (APIs, databases, files, events).
  • Develop reusable data processing components and frameworks following software engineering best practices.
  • Ensure clear data contracts, schemas, versioning, and documentation (see the sketch after this list).

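To give candidates a concrete picture of the first and last bullets, here is a minimal Python sketch of an ingestion step built around an explicit data contract. It is illustrative only: the CustomerRecord schema, its fields, and the source format are assumptions, not an EY data model.

  from dataclasses import dataclass
  from datetime import datetime, timezone

  # Hypothetical, versioned data contract for one ingested record.
  @dataclass(frozen=True)
  class CustomerRecord:
      customer_id: str
      country: str
      created_at: datetime

  def parse_row(raw: dict) -> CustomerRecord:
      """Transform one raw source row into the contracted schema,
      failing fast on malformed input instead of propagating bad data."""
      return CustomerRecord(
          customer_id=str(raw["customer_id"]),
          country=raw.get("country", "unknown").upper(),
          created_at=datetime.fromisoformat(raw["created_at"]).astimezone(timezone.utc),
      )
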
Data Quality, Reliability & Observability

  • Implement data validation, quality checks, and monitoring to ensure accuracy, completeness, and timeliness (illustrated in the sketch after this list).
  • Write clean, testable code with unit and integration tests for data workloads.
  • Instrument pipelines with logging, metrics, and alerts; define SLAs/SLOs for critical data products.
  • Troubleshoot and resolve data incidents using root‑cause analysis and durable fixes.
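
The sketch below illustrates the kind of validation and logging the first bullet refers to: a completeness check that warns when a required column’s null rate exceeds a threshold. The 1% threshold and the logger name are illustrative assumptions, not an EY standard.

  import logging

  logger = logging.getLogger("pipeline.quality")

  def check_completeness(rows: list[dict], required: set[str],
                         max_null_rate: float = 0.01) -> bool:
      """Return False (and log a warning) for any required column whose
      null rate exceeds the threshold; 1% here is an illustrative SLO."""
      ok = True
      for col in sorted(required):
          nulls = sum(1 for row in rows if row.get(col) is None)
          rate = nulls / len(rows) if rows else 1.0
          if rate > max_null_rate:
              logger.warning("column %s null rate %.2f%% exceeds %.2f%%",
                             col, rate * 100, max_null_rate * 100)
              ok = False
      return ok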

Performance & Scalability

  • Optimize data processing performance, storage layouts, and query efficiency.
  • Design pipelines for scalability and resilience (retries, idempotency, checkpointing, backpressure), as sketched after this list.
  • Apply partitioning, parallelization, and caching strategies where appropriate.
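
To make the resilience vocabulary above concrete, here is a minimal sketch in which each partition is processed at most once (idempotency), progress survives restarts via a checkpoint (a local JSON file standing in for whatever durable store a real pipeline would use), and transient failures are retried with exponential backoff.

  import json
  import time
  from pathlib import Path

  CHECKPOINT = Path("checkpoint.json")  # stand-in for a durable checkpoint store

  def load_done() -> set[str]:
      """Read the set of partition IDs that already completed."""
      return set(json.loads(CHECKPOINT.read_text())) if CHECKPOINT.exists() else set()

  def run(partitions: list[str], process, retries: int = 3) -> None:
      """Process each partition at most once, retrying transient failures."""
      done = load_done()
      for pid in partitions:
          if pid in done:
              continue  # idempotent re-runs: skip work already checkpointed
          for attempt in range(retries):
              try:
                  process(pid)
                  done.add(pid)
                  CHECKPOINT.write_text(json.dumps(sorted(done)))  # checkpoint progress
                  break
              except Exception:
                  if attempt == retries - 1:
                      raise
                  time.sleep(2 ** attempt)  # backoff: 1s, 2s, 4s, ...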

Architecture & Collaboration

  • Contribute to data platform architecture, including lakehouse, warehouse, and event‑driven designs.
  • Collaborate with Backend Engineers to align APIs and services with data consumption patterns.
  • Work closely with Analytics, Data Science, and Product teams to translate requirements into reliable data solutions.
  • Document architecture decisions, data models, and operational runbooks.

Security, Privacy & Compliance

  • Implement secure data handling practices, including access controls, encryption, and secrets management (see the sketch after this list).
  • Ensure compliance with enterprise, regulatory, and privacy requirements (data classification, retention, lineage).
  • Support audits and data governance processes.
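
On secrets management specifically, the sketch below shows the baseline expectation: credentials are resolved from the environment (or, on Azure, from a managed store such as Azure Key Vault) rather than hard-coded. The variable name is hypothetical.

  import os

  def get_secret(name: str) -> str:
      """Fetch a secret from the environment; a production pipeline would
      typically resolve it from a managed secrets store instead."""
      value = os.environ.get(name)
      if value is None:
          raise RuntimeError(f"missing required secret: {name}")
      return value

  # Hypothetical usage: never embed credentials in code or committed configs.
  warehouse_dsn = get_secret("WAREHOUSE_CONNECTION_STRING")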

Operations & Continuous Improvement

  • Own data pipelines in production: monitor, maintain, and continuously improve reliability and performance.
  • Reduce operational toil through automation and standardized frameworks.
  • Contribute to shared data engineering standards, templates, and best practices across the Digital Factory.

Qualifications

Required

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, Software Engineering, or a related field.
  • Professional experience as a Data Engineer or in a strongly data‑focused engineering role.
  • Strong experience with Python for data engineering use cases.
  • Solid experience with SQL and data modeling concepts (dimensional, normalized, or lakehouse patterns).
  • Hands‑on experience with relational and analytical data stores (e.g. PostgreSQL, SQL Server, data warehouses, data lakes).
  • Experience with CI/CD, automated testing, and code reviews for data workloads.
  • Working knowledge of cloud platforms (Azure preferred) and data services.
  • Familiarity with containerization (Docker) and orchestration concepts.
  • Strong understanding of data security, access control, and observability.

Preferred

  • Experience with modern data frameworks and platforms (e.g. Spark, Airflow, dbt, Azure Data Factory, Databricks).
  • Exposure to streaming and event‑based data processing (e.g. Kafka, Azure Event Hubs, Service Bus).
  • Experience with lakehouse or enterprise data warehouse architectures.
  • Background in financial services or regulated environments.
  • Familiarity with infrastructure‑as‑code (Terraform/Bicep).
  • Experience working in distributed, cross‑functional teams.

Recruitment Process

  1. HR Screening
  2. Technical Interview
  3. Leadership Interview

Ready to lead and innovate? Apply today.
To ensure an inclusive recruitment experience, please share any disability-related adjustments you may need.

EY | Building a better working world

EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets.

Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow.

EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.

Our offer of employment is contingent upon the successful completion of a background check and pre-screening requirements. The candidate acknowledges that all information provided must be accurate.