Lead Data Engineer - Data Modeling
JPMorganChase
Join us as we embark on a journey of collaboration and innovation, where your unique skills and talents will be valued and celebrated. Together we will create a brighter future and make a meaningful difference.
As a Lead Data Engineer at JPMorganChase within the Enterprise Technology - CTO SRE & Support team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.
You are a technical builder with strong data modeling instincts, responsible for building the data backbone for an operational learning capability in a complex support and SRE environment. You will connect and model data from incidents, RCA outputs, problem records, support tickets, customer signals, and related telemetry to surface recurring patterns, identify systemic drivers, and produce actionable handoffs to prevention and readiness teams. The role goes beyond dashboards: it requires workflow-aware data modeling, pragmatic delivery, and comfort working with heterogeneous, imperfect operational data. Partnering closely with leaders across Support, SRE, and Engineering, you will deliver lightweight, durable data products that strengthen institutional learning, improve executive visibility, and enable proactive reliability improvements in a blameless, learning-oriented environment. Success demands hands-on technical depth, comfort with ambiguity, and the judgment to start with minimally sufficient solutions that evolve through use.
Job responsibilities
- Design and implement a minimum viable data model that links incident, RCA, problem, ticketing, customer signals, and observability data for the review function.
- Build and maintain robust pipelines and transformations that expose repeat patterns, operational toil themes, and systemic issue categories across sources.
- Develop lightweight, workflow-supporting data products that turn operational events into actionable learning and clear handoffs for downstream owners.
- Partner with support, SRE, and operational leaders to define required data fields, taxonomies, classifications, and handoff structures that make review outputs actionable and measurable.
- Design mechanisms to distinguish one-off incidents from recurring classes of failure or avoidable demand, enabling detection of recurrence and informed prioritization.
- Establish practical data quality standards, field definitions, and lightweight governance (e.g., lineage, stewardship, access) for operational learning datasets across multiple sources.
- Safeguard blameless review practices by ensuring outputs promote learning and improvement rather than punitive reporting; embed blameless learning norms into data and workflow design.
- Translate loosely defined operational problems into structured datasets, dashboards, and decision-support tools with clear business and engineering value.
- Document data models, assumptions, transformation logic, and operating procedures to support maintainability, transparency, and long-term scale.
- Build solutions that can start as manual or semi-manual processes and progressively automate as process maturity grows, integrating with enterprise systems (e.g., ServiceNow, Jira) over time.
- Create decision-useful reporting, visualizations, and leadership-ready views on repeated high-impact issues, emerging pain themes, action status, and systemic trends, including service health metrics (e.g., MTTD, MTTR), to support prioritization, backlog visibility, ownership/SLA tracking, and escalation of repeated high-impact patterns without creating reporting overhead.
Required qualifications, capabilities, and skills
- Formal training or certification in data engineering concepts and 5+ years of experience in professional data engineering roles in cloud-based environments.
- Data engineering in operational domains: Proven experience building models and pipelines with SQL/Python across heterogeneous incident, ticketing, RCA, and telemetry sources; comfortable with imperfect or partial data.
- Data quality and pragmatic governance: Field normalization, standards, and lineage practices that scale across sources without slowing delivery.
- Blameless workflow design: Ability to design data and workflow outputs that support learning and improvement rather than punitive reporting.
- Investigative rigor: Ability to reconstruct precise event timelines across systems and maintain strong evidence integrity in operational analyses.
- Evidence integrity: Experience producing auditable, versioned datasets and reproducible analyses that clearly separate facts, interpretations, and hypotheses in artifacts and reviews.
- Classification design: Experience designing taxonomies and controlled vocabularies that enable consistent classification and actionability across operational data.
- Enterprise workflow integration: Experience integrating with enterprise platforms (e.g., ticketing/incident systems) and defining data fields, handoffs, and action-tracking structures that convert review outputs into owned, trackable work.
- Incremental delivery mindset: Starts with minimally sufficient solutions and iterates toward greater automation; adapts under pressure and navigates evolving requirements while keeping stakeholders aligned.
- Structured synthesis: Clear documentation of assumptions and logic; experience conducting structured, non-leading SME/operator interviews and synthesizing qualitative inputs into structured data.
- Decision-useful reporting: Experience building executive- and operator-facing dashboards and decision-support views tightly linked to prioritization, ownership, governance decisions, and measurable outcomes rather than volume reporting.
- Direct experience with SRE, incident/problem management, RCA methods and techniques, service health metrics (e.g., MTTD, MTTR), and post-incident reviews.
Preferred qualifications, capabilities, and skills
- Applied use of LLMs/agents, RAG, anomaly detection, or automated runbooks to accelerate evidence collection, summarization, and action routing in review workflows.
- Familiarity with structured methods used in high-reliability investigations (e.g., Bowtie/AcciMap/STPA), peer review/checklists, cross-source corroboration, cognitive bias mitigation (e.g., confirmation, hindsight, outcome bias), and evidence-handling practices such as immutable log retention, event timestamping, query capture, and “docket”-style evidence packages suitable for leadership reviews and audits.
- Experience with modern cloud data platforms and workflow orchestration (e.g., warehouses/lakehouses, streaming, Airflow/Prefect/dbt) and integration with systems like ServiceNow or Jira.
- Background in financial services or other regulated, large-scale operating models; comfort with data privacy, retention, and access controls.
- Experience designing metrics and feedback loops to evaluate the impact of corrective actions/safety recommendations and reduce recurrence over time.
- Certifications/education may include Lean/Six Sigma, SRE, reliability/safety or RCA-focused training, or equivalent practical credentials.
We offer a competitive total rewards package including base salary determined based on the role, experience, skill set and location. Those in eligible roles may receive commission-based pay and/or discretionary incentive compensation, paid in the form of cash and/or forfeitable equity, awarded in recognition of individual achievements and contributions. We also offer a range of benefits and programs to meet employee needs, based on eligibility. These benefits include comprehensive health care coverage, on-site health and wellness centers, a retirement savings plan, backup childcare, tuition reimbursement, mental health support, financial coaching and more. Additional details about total compensation and benefits will be provided during the hiring process.
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
JPMorgan Chase & Co. is an Equal Opportunity Employer, including Disability/Veterans
Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
Apply your analytical strengths and help us create the data and workflow foundation for a dynamic, structured review function.