Senior Data Management Professional - Data Engineering - Commodities Data

Bloomberg

Software Engineering, Data Science

Princeton, NJ, USA

Posted on May 5, 2026
Bloomberg runs on data. Our products are fueled by powerful information. We combine data and context to paint the whole picture for our clients, around the clock - from around the world. In Data, we are responsible for delivering this data, news, and analytics through innovative technology - quickly and accurately. We apply problem-solving skills to identify workflow efficiencies and implement technology solutions to enhance our systems, products, and processes.
Our Team:
In the Commodities Data Team, we’re responsible for onboarding, modeling and maintaining data that is fit for purpose for our clients. More than 320,000 business leaders rely on the real-time financial information available on the Bloomberg Professional Service. Our products run on intelligence and insight provided by the Commodities Team. Our team of analysts provides valuable data and insights to key decision makers within the commodity markets.
We are responsible for the data management of datasets across Power and Gas, Oil, Carbon, Agriculture and Metals. The team provides relevant, timely and accurate data to empower customers to drive their analysis of commodity markets, both pricing and fundamentals.
The Role:
The Commodities Data team is looking for a highly experienced Senior Data Management Professional to help lead the next generation of our data platform. This role requires a strong data engineering foundation combined with deep ownership of data quality, where quality is built directly into pipelines, systems and architecture rather than managed as a separate function. This role is designed for a top-tier individual contributor who thrives in complex environments and consistently delivers high-impact, scalable solutions.
You will be responsible for designing and evolving data systems that power Tier 1 datasets, improving reliability, reducing technical debt and modernizing legacy workflows. This includes building advanced ETL pipelines, implementing intelligent automation and developing robust data quality controls and monitoring frameworks to ensure data accuracy, completeness and timeliness.
In addition, you will play a key role in defining and executing the data quality vision for our datasets. This includes evolving fit-for-purpose quality metrics, understanding how clients consume data across Bloomberg products and aligning data with both client needs and Bloomberg’s commercial strategy. You will also influence data governance practices and lifecycle management across teams to ensure long-term data integrity and scalability.
You will collaborate closely with Product, Engineering and domain experts to define and execute on strategic data initiatives. In addition to hands-on development, you will act as a technical leader within the team by owning end-to-end solutions, influencing architecture decisions and mentoring others.
We are looking for someone who operates at a high bar of technical excellence, takes ownership of both data systems and data quality outcomes, and leverages modern technologies including AI and machine learning to enhance data workflows and extract additional value from our datasets.
We’ll Trust You To:
- Build and maintain highly scalable, resilient and observable data pipelines supporting critical Commodities datasets
- Lead the modernization of legacy workflows, reducing technical debt and improving maintainability and performance
- Perform deep data analysis including profiling and root cause analysis to support data-driven decision making and validate improvements
- Build and deploy automated data quality controls, including anomaly detection and proactive monitoring
- Apply AI and machine learning techniques such as natural language processing, entity extraction and anomaly detection to improve data ingestion and enrichment
- Identify opportunities to leverage generative AI and automation to reduce manual workflows and accelerate data onboarding
- Develop validation frameworks for agentic artifacts to ensure quality, reliability and appropriate controls
- Own and drive large-scale data migrations and system redesigns
- Establish best practices around data architecture, pipeline design and workflow orchestration
- Understand how clients consume data across Bloomberg products and translate those needs into measurable data quality and product improvements
- Partner with Engineering to align on platform evolution, scalability and system design
- Act as a technical leader and mentor, raising the bar for code quality, design thinking and execution across the team
- Apply your proven project management expertise to ensure technical projects are aligned with requirements and stay on track
You’ll Need to Have:
- A bachelor’s degree or above in Statistics, Computer Science, Quantitative Finance or another STEM-related field, or degree-equivalent qualifications
- 4+ years of experience architecting, designing and implementing scalable data solutions and ETL pipelines, including monitoring, remediation and data management workflows across diverse data sources
- 4+ years of hands-on experience with Python in development/production environments, and with SQL and/or NoSQL databases
- Proven track record of owning and delivering complex, high-impact data initiatives end-to-end
- Strong experience with distributed data systems, workflow orchestration and scalable architecture design
- Hands-on experience applying machine learning or AI techniques, such as classification, NLP, anomaly detection or LLM-assisted processing, within data workflows
- Strong experience in data quality management, including defining metrics, performing root cause analysis and driving measurable improvements in data reliability
- Experience building observable systems with monitoring, alerting and data reliability frameworks
- Ability to analyze and refactor legacy systems and drive measurable improvements in performance and scalability
- Familiarity with a range of databases, schema design and data modeling, as well as structured and unstructured formats (PDF, HTML, XBRL, JSON, CSV, etc.)
- Strong communication and interpersonal skills, with the proven ability to influence technical direction, mentor team members, clearly communicate complex concepts and methodologies, and effectively collaborate across diverse and distributed teams
We’d Love to See:
- Advanced degree in a relevant subject and/or Certified Data Management Professional (CDMP, or working towards it)
- Experience with Bloomberg products, fluency with the Bloomberg Terminal and/or familiarity with Bloomberg Data Workflows
- Demonstrated experience working with Commodities markets and products
- Experience productionizing AI or machine learning models within data platforms
- Track record of driving efficiency gains through automation and intelligent systems
- Strong understanding of data governance, lineage and metadata management at scale
- Hands-on project management experience and familiarity with JIRA and Qlik Sense
Does this sound like you? Apply if you think we're a good match. We'll get in touch to let you know what the next steps are.