Senior Data Management Professional - Data Engineer - EVTS

Bloomberg

Software Engineering, Data Science
London, UK
Posted on Oct 7, 2025
Bloomberg runs on data. Our products are fueled by powerful information. We combine data and context to paint the whole picture for our clients, around the clock – from around the world. In Data, we are responsible for delivering this data, news and analytics through innovative technology - quickly and accurately. We apply problem-solving skills to identify innovative workflow efficiencies, and we implement technology solutions to enhance our systems, products and processes.
The Events & Transcripts (EVTS) Data team operates at the intersection of Product, Engineering, and AI, delivering market-critical data that powers Bloomberg’s Corporate Events and Transcripts (EVTS) product. From earnings calls and shareholder meetings to product launches and conferences, we maintain a global, centralized calendar of market-moving events enriched with transcripts and AI-driven insights.
Our data is embedded in the daily workflows of analysts, traders, and quantitative analysts who depend on accuracy, timeliness, and coverage to act on market-moving signals. We combine scalable ETL pipelines with human-in-the-loop curation to handle the full data lifecycle, ensuring our content meets the highest standards of quality and reliability.
What’s the role?
The Events & Transcripts (EVTS) Data team is seeking a highly motivated Data Engineer with a passion for finance, data, and technology to build and optimize the pipelines that power Bloomberg’s corporate events and transcripts products. You will be a technical leader, responsible for devising scalable solutions that enhance the accuracy, timeliness, and integrity of our datasets while enabling automation and driving client impact.
In this role, you will be responsible for the transformation and distribution of structured and unstructured data across EVTS Calendar and Streaming Products. Working closely with Product, Engineering, AI, and Data Technologies, you will modernize our workflows to deliver fit-for-purpose data that underpins critical market workflows.
We'll Trust You To:
- Design, build, and maintain robust, scalable data pipelines that support ingestion, transformation, and delivery of large, complex datasets using Bloomberg’s Data Technologies stack (Dataflow Recipes) or equivalent frameworks (AWS S3, Lambda, Kafka, Apache Airflow, Python); an illustrative sketch of this kind of pipeline follows this list.
- Acquire and normalize data from diverse structured and unstructured sources, expanding coverage and ensuring transparency in how data is collected, enriched, and delivered.
- Analyze and improve internal processes, applying programmatic and machine learning approaches to increase efficiency and automation.
- Design and implement control frameworks, applying statistical methods and data profiling to identify, validate, and resolve data gaps at scale.
- Collaborate with Engineering, AI, and Product partners to enhance data ingestion pipelines, integrate AI-driven automation, and deliver scalable, client-ready datasets.
- Lead modernization efforts to evolve EVTS data workflows, ensuring interoperability across systems and enabling broader reuse of data assets.
- Safeguard the creation of high-quality training data for Generative AI extraction models in collaboration with SMEs/AI Engineering.
- Mentor peers and provide technical leadership, guiding design decisions and fostering knowledge-sharing across the team.
- Stay current with industry-standard methodologies in data engineering and AI-driven automation frameworks to drive continuous improvement.
- Work in a fast-paced, complex, and collaborative setting, and be ready to take a hands-on role in our Data initiatives.
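For illustration only, here is a minimal sketch of the kind of extract-transform-deliver pipeline described above, written in plain Python. The feed layout, field names, and file paths are hypothetical, and a production version would run on Bloomberg’s Dataflow Recipes or an orchestrator such as Apache Airflow rather than against local files.

    # Illustrative sketch only: feed layout, field names, and paths are
    # hypothetical, not part of Bloomberg's actual EVTS stack.
    import csv
    import json
    from datetime import datetime, timezone

    def extract(path: str) -> list[dict]:
        """Ingest raw event records from a CSV feed (hypothetical layout)."""
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(records: list[dict]) -> list[dict]:
        """Normalize records and drop any that fail basic control checks."""
        clean = []
        for rec in records:
            # Control check: required identifiers must be present.
            if not rec.get("event_id") or not rec.get("ticker"):
                continue
            rec["ingested_at"] = datetime.now(timezone.utc).isoformat()
            clean.append(rec)
        return clean

    def load(records: list[dict], out_path: str) -> None:
        """Deliver normalized records downstream (here, JSON lines on disk)."""
        with open(out_path, "w") as f:
            for rec in records:
                f.write(json.dumps(rec) + "\n")

    if __name__ == "__main__":
        load(transform(extract("events_feed.csv")), "events_normalized.jsonl")

In a production setting, each stage would typically become a scheduled task in the orchestrator, with the quality checks promoted to standalone control steps.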
You'll Need to Have:
*Please note: we use years of experience as a guide, but we will certainly consider applications from all candidates who can demonstrate the skills necessary for the role.
- BA/BS degree or higher in Computer Science, Statistics, Finance, Data Analytics, or a related STEM/business field, or equivalent professional work experience.
- 3+ years of experience in data quality management, data engineering, or data profiling, ideally within financial data, market research, or technology domains.
- Strong proficiency in Python with hands-on experience manipulating and analyzing large datasets.
- Familiarity with ETL workflows, APIs, and data engineering concepts, with the ability to work across distributed systems.
- Demonstrated expertise in data engineering, with a consistent record of implementing scalable ETL pipelines and data workflows.
- Proven ability to analyze client workflows and dataset usage to ensure data is fit-for-purpose and aligned with business needs.
- Demonstrated expertise in data management principles, including data quality, governance, anomaly detection, and lifecycle management (an illustrative check appears after this list).
- Excellent analytical and problem-solving skills, with the ability to apply quantitative methods to improve processes and guide decisions.
- Strong project management and collaboration skills, with experience leading multi-functional initiatives in fast-paced, global environments.
- Ability to lead multiple projects with global scope in parallel, with superb communication and stakeholder management skills.
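As a rough illustration of the anomaly-detection and data-profiling skills listed above, the sketch below flags outliers with a simple z-score rule; the data, column meaning, and threshold are hypothetical examples rather than a prescribed method.

    # Illustrative only: flags numeric outliers with a simple z-score rule.
    # The data and threshold below are hypothetical examples.
    import statistics

    def flag_outliers(values: list[float], threshold: float = 3.0) -> list[int]:
        """Return indices of values lying more than `threshold` standard
        deviations from the mean: a basic data-profiling check."""
        mean = statistics.fmean(values)
        stdev = statistics.pstdev(values)
        if stdev == 0:
            return []
        return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

    # Example: an implausible attendee count stands out in an events feed.
    counts = [120.0, 95.0, 110.0, 102.0, 98.0, 5000.0, 105.0]
    print(flag_outliers(counts, threshold=2.0))  # -> [5]

A looser threshold is used in the example because a single extreme value inflates the standard deviation in a small sample; robust alternatives, such as a median-absolute-deviation rule, behave better in practice.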
We'd Love to See:
- Formal knowledge of data governance and data management, supported by industry certifications (e.g., DAMA CDMP or DCAM).
- Keen interest in and familiarity with Generative AI frameworks.
- Experience working within an Agile/Scrum development methodology, with the ability to adapt processes for data engineering and quality initiatives.
- Experience manipulating large datasets to enable scalable quality checks, enrichment, and automation.
Does this sound like you?
Apply if you think we're a good match. We'll get in touch to let you know what the next steps are!