Senior Data Management Professional - Data Quality - Commodities Data

Bloomberg

Data Science, Quality Assurance

London, UK

Posted on May 14, 2026
Bloomberg runs on data. Our products are fueled by powerful information. We combine data and context to paint the whole picture for our clients, around the clock – from around the world. In Data, we are responsible for delivering this data, news and analytics through innovative technology - quickly and accurately. We apply problem-solving skills to identify innovative workflow efficiencies, and we implement technology solutions to enhance our systems, products and processes.
What’s the role?
We are seeking a hands-on data quality and automation professional to help improve the reliability, control environment, and operational efficiency of commodities and energy data. This role will focus on executing and enhancing data quality processes, supporting automation initiatives, and partnering closely with data operations, engineering, and business stakeholders to resolve issues and improve key data pipelines.
This is a delivery-oriented role suited to someone who is strong in implementation and execution, with the ability to translate data quality requirements into practical controls, monitoring, and workflow improvements.
We’ll trust you to:
  • Support the implementation and ongoing enhancement of data quality controls across commodities datasets, including market data, reference data, and fundamentals.
  • Build, maintain, and optimize automated data quality checks for completeness, accuracy, timeliness, consistency, and schema validation.
  • Monitor data quality metrics and controls, investigate exceptions, and help drive timely resolution of issues.
  • Contribute to the maintenance of data quality standards, policies, and KPI reporting for critical data domains.
  • Work closely with data operations teams to identify recurring data issues and convert them into clear requirements for process improvements, automation, or engineering fixes.
  • Help improve day-to-day DataOps processes by reducing manual intervention, standardizing workflows, and strengthening controls.
  • Assist in implementing operational best practices across data workflows, including documentation, testing, change management, and escalation procedures.
  • Partner with engineering and platform teams to improve observability, alerting, and operational support for key data pipelines.
  • Develop and maintain automation solutions for data validation, exception handling, and workflow efficiency using SQL, Python, or similar tools.
  • Support the implementation of imputation controls and rules, including validation, flagging, and monitoring of imputed values.
  • Ensure automated processes are well governed, transparent, and aligned with defined business and control requirements.
  • Identify opportunities to improve scalability and reduce operational risk through targeted automation.
  • Manage and track data quality issues through logging, triage, root-cause analysis, remediation, and closure.
  • Support governance of the data lifecycle across ingestion, normalization, enrichment, and distribution processes.
  • Work with stakeholders across operations, engineering, and product teams to ensure clear ownership and follow-through on data issues.
  • Prepare regular reporting on issue trends, control effectiveness, and remediation progress.
  • Act as a key day-to-day partner for data operations, engineering, and business users on data quality and control topics.
  • Communicate clearly on data issues, priorities, risks, and progress to stakeholders.
  • Contribute practical input into broader data quality and automation initiatives by bringing an execution-focused perspective.
  • Support team members in delivering larger process, control, and tooling improvements.
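As a flavor of the kind of automated checks this role involves, the completeness, timeliness, and schema-validation rules described above might be sketched in plain Python as follows. This is purely illustrative; all field names, record shapes, and thresholds are hypothetical, not drawn from Bloomberg's actual systems.

```python
from datetime import datetime, timedelta, timezone

def check_completeness(records, required_fields):
    """Flag records missing any required field (completeness check)."""
    return [r for r in records if any(r.get(f) is None for f in required_fields)]

def check_timeliness(records, max_age):
    """Flag records whose timestamp is older than max_age (timeliness check)."""
    now = datetime.now(timezone.utc)
    return [r for r in records if now - r["ts"] > max_age]

def check_schema(records, schema):
    """Flag records whose populated fields do not match the expected types."""
    return [
        r for r in records
        if any(f in r and r[f] is not None and not isinstance(r[f], t)
               for f, t in schema.items())
    ]

# Hypothetical commodities price records for illustration only.
records = [
    {"symbol": "CO1", "price": 82.4, "ts": datetime.now(timezone.utc)},
    {"symbol": "NG1", "price": None,
     "ts": datetime.now(timezone.utc) - timedelta(hours=5)},
]

missing = check_completeness(records, ["symbol", "price", "ts"])
stale = check_timeliness(records, max_age=timedelta(hours=1))
bad_types = check_schema(records, {"symbol": str, "price": float})
```

In practice, checks like these would feed exception queues, KPI dashboards, and alerting rather than run ad hoc, but the underlying rule logic is of this shape.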
You’ll need to have:
  • 4+ years' experience in data management, data operations, or data controls.
  • Experience working with data quality checks, exception management, and operational data processes in a complex data environment.
  • Strong Python scripting skills and practical experience with SQL or similar languages for implementing validation rules, automation, or workflow improvements.
  • Experience working with modern data platforms, workflow tools, or data observability/quality tooling.
  • Proven ability to investigate data issues, perform root-cause analysis, and coordinate remediation across teams.
  • Strong organizational skills, with the ability to manage multiple priorities and drive work through to completion.
  • Effective communicator with the ability to work across technical and non-technical stakeholders.
*Please note: we use years of experience as a guide, but we will certainly consider applications from all candidates who can demonstrate the skills necessary for the role.
We’d love to see:
  • Experience with commodities, energy, market data, or trading-related datasets.
  • STEM background or experience working with technical, quantitative, or data-intensive disciplines.
  • Familiarity with DataOps concepts and how data operations and engineering teams work together to improve reliability and delivery.
  • Experience in a regulated or controlled data environment.
  • Exposure to cloud-based data platforms and pipeline monitoring tools.
  • Experience supporting implementation of automation, controls, or AI/ML-based data solutions within a defined validation framework.
Does this sound like you?
Apply if you think we're a good match. We'll get in touch to let you know what the next steps are!