EY - GDS Consulting - AI & DATA - IICS+ Tidal - Senior
EY
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Job Description for Lead Data Engineer
Rank – Manager
Objectives and Purpose
- The Lead Data Engineer leads large-scale solution architecture design, optimization, ingestion, and support serving multiple downstream systems and business users. This individual leads a team of mid- and senior-level data engineers.
- The Lead Data Engineer will:
- Implement data processes for the data warehouse and internal systems.
- Define data architecture and design ETL processes.
- Clean, aggregate and organize data from disparate sources and transfer it to data warehouses.
- Lead the development, testing, and maintenance of data pipelines and platforms, enabling quality data to be utilized within business dashboards and tools.
- Support team members and direct reports in refining and validating data.
- Create, maintain, and support the data platform and infrastructure that meets diverse data requirements across the enterprise and business groups; this includes the development, construction, testing, and maintenance of architectures such as real-time and batch data integration frameworks that support high-volume, large-scale data processing, and databases with proper verification and validation processes.
Your key responsibilities
Data Engineering
- Lead the design, development, optimization, and maintenance of data architecture and pipelines that adhere to ETL principles and business goals.
- Develop and maintain scalable data pipelines and build out new integrations using AWS-native technologies and Databricks to support continuing increases in data sources, volume, and complexity.
- Define data requirements, gather and mine large-scale structured and unstructured data, and validate data by running various data tools in the big data environment.
- Lead ad hoc data analysis, support standardization and customization, and develop mechanisms to ingest, analyze, validate, normalize, and clean data.
- Write unit, integration, and performance test scripts, and perform the data analysis required to troubleshoot and assist in resolving data-related issues.
- Implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes.
- Lead the evaluation, implementation and deployment of emerging tools and processes for analytic data engineering to improve productivity.
- Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes.
- Learn about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics.
- Solve complex data problems to deliver insights that help achieve business objectives.
- Implement statistical data quality procedures on new data sources by applying rigorous iterative data analytics.
Relationship Building and Collaboration
- Partner with Business Analysts and Enterprise Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists, visualization developers and other data consumers to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value.
- Collaborate with AI/ML engineers to create data products for analytics and data scientist team members to improve productivity.
- Advise, consult, mentor and coach other data and analytic professionals on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.
Skills and attributes for success
Technical/Functional Expertise
- Advanced experience with and understanding of modern data architectures and frameworks such as data mesh, data fabric, and data product design, along with experience designing data integration frameworks capable of supporting heterogeneous data sources, both structured and unstructured.
- Experience with data modelling and with AWS and Databricks project implementations.
- Strong business acumen with knowledge of the Pharmaceutical, Healthcare, or Life Sciences sector is preferred, but not required.
- Expertise in building processes that support data transformation, workload management, data structures, dependency, and metadata.
- Expertise in optimization: designs that support optimal performance, storage, and compute costs.
- Experience with or knowledge of Agile Software Development methodologies.
Leadership
- Provide mentoring to mid- and senior-level data engineers.
- A strategic mindset: the ability to look beyond minor tactical details and focus on the long-term, strategic goals of the organization.
- Advocate of a culture of collaboration and psychological safety.
Decision-making and Autonomy
- Drive the shift from manual decision-making to data-driven, strategic decision-making.
- Proven track record of applying critical thinking to resolve issues and overcome obstacles.
Interaction
- Proven track record of collaboration and developing strong working relationships with key stakeholders by building trust and being a true business partner.
- Demonstrated success in collaborating with different IT functions, contractors, and constituents to deliver data solutions that meet standards and security measures.
Innovation
- Passion for re-imagining new solutions, processes, and end-user experience by leveraging digital and disruptive technologies and developing advanced data and analytics solutions.
- Leading research and development efforts in data engineering.
- Advocate of a culture of growth mindset, agility, and continuous improvement.
Complexity
- Demonstrates high multicultural sensitivity to lead teams effectively.
- Ability to coordinate and problem-solve amongst larger teams.
To qualify for the role, you must have the following:
Essential skillsets
- Bachelor’s degree in Engineering, Computer Science, Data Science, or related field
- 10+ years of experience in software development, data engineering, ETL, and analytics reporting development.
- Expert in designing, building, implementing, and maintaining data and system integrations using dimensional data modelling, and in the development and optimization of ETL pipelines.
- Advanced experience with and understanding of modern data architectures and frameworks such as data mesh, data fabric, and data product design, along with experience designing data integration frameworks capable of supporting heterogeneous data sources, both structured and unstructured.
- Proven track record of designing and implementing complex data solutions.
- Demonstrated understanding and experience using:
- Data engineering programming languages (e.g., Python)
- Distributed data technologies (e.g., PySpark)
- Cloud platform deployment and tools (e.g., Kubernetes)
- Relational SQL databases
- DevOps and continuous integration
- AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS)
- Knowledge of data lakes, data warehouses, AI pipelines or similar
- Databricks/ETL
- IICS/DMS
- GitHub
- EventBridge, Tidal
- Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions.
- Deep understanding of database architecture and administration.
- Possesses high proficiency in programming languages and tools (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architecture and pipelines that fit business goals.
- Extracts, transforms, and loads data from multiple external/internal sources using Databricks Lakehouse/Data Lake concepts into a single, consistent source to serve business users and data visualization needs.
- Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases.
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners.
- Strong problem solving and troubleshooting skills.
- Ability to work in a fast-paced environment and adapt to changing business priorities.
Desired skillsets
- Master's degree in Engineering, Computer Science, Data Science, or a related field
- Experience in a global working environment
Travel requirements
- Access to transportation to attend meetings.
- Ability to fly to meetings regionally and globally.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.