Senior Software Engineer - Data License Analytics, Pune
Bloomberg
Software Engineering, Data Science
Pune, Maharashtra, India
Posted on Apr 22, 2026
About the Team
Data License is Bloomberg's enterprise data delivery platform, powering financial institutions worldwide with reference, pricing, regulatory, and alternative data — covering over 50 million securities across portfolio accounting, risk, compliance, and trading use cases.
The DL Analytics Platform is the intelligence and reliability layer of this ecosystem. We build the data ingestion pipelines, embeddings infrastructure, AI agents, and MCP tool servers that give engineers and LLMs structured, queryable access to operational data. Our systems power incident triage, semantic search over tickets and documentation, pipeline health analytics, and self-service data onboarding for partner teams across Data License.
We are expanding our engineering presence into Pune and the broader APAC region — a greenfield initiative where engineers will collaborate closely with colleagues in Dublin, London, and New York, helping shape the culture and technical direction of this hub from day one.
What's the role?
You'll join a team at the intersection of data infrastructure, reliability engineering, and applied AI — building systems that make Bloomberg's data delivery more reliable, intelligent, and scalable. The work spans two areas:
- Reliability Engineering: GenAI-powered tooling for incident triage, root-cause analysis, and blast-radius assessment — exposing live pipeline data to LLM agents via MCP tool servers.
- Data Analytics & AI: ingestion pipelines into Apache Iceberg, semantic search over operational data using vector databases and embedding models, and automated KPI tracking to surface platform health and anomalies.
We'll trust you to:
- Build GenAI tools and AI agents — RAG pipelines, contextual embedding systems, and MCP tool servers that give LLM agents structured access to live pipeline data
- Build and maintain data ingestion pipelines into Apache Iceberg, orchestrated via Apache Airflow
- Develop semantic search capabilities using vector databases and hosted embedding models
- Build self-service ingestion APIs enabling partner teams to onboard to the platform without managing Iceberg or storage infrastructure
- Design microservices and backend APIs — with opportunities to contribute to React/TypeScript interfaces embedded in production tools
- Instrument systems with OpenTelemetry and contribute to anomaly detection capabilities on the near-term roadmap
Technologies
- Languages & Frameworks: Python (3.11 / 3.12 / 3.13), TypeScript, FastAPI, React
- Data & Orchestration: Apache Iceberg, Apache Airflow, PyArrow, PyIceberg, Spark, Kafka, RabbitMQ, Parquet
- Databases: PostgreSQL, vector databases, Redis, Solr, distributed SQL
- Infrastructure: Docker, Kubernetes, NGINX, OpenTelemetry
- Cloud: GCP, AWS (S3, Redshift), Snowflake, Databricks
- AI & ML: RAG frameworks, embedding models, contextual chunking, semantic search, LLM integration
- Agent Frameworks: MCP (Model Context Protocol), Agent-to-Agent (A2A)
You'll need to have:
- 6+ years of software engineering experience in production environments
- Proficiency in Python or a similar language
- Familiarity with relational databases and SQL
- Interest or experience in data engineering — pipelines, batch processing, or data lake technologies
- Understanding of distributed systems and service architecture
- Interest in AI/ML — particularly LLMs, RAG, or embedding-based search
We'd love to see:
- Experience with Apache Iceberg, PyArrow, or data lake engineering
- Familiarity with Apache Airflow or workflow orchestration
- Experience with vector databases, embedding models, or semantic search
- Background in anomaly detection or data quality frameworks
- Exposure to MCP, A2A, or agent communication protocols
- Experience with observability tooling — tracing, metrics, structured logging
- Curiosity about financial data and reliability engineering
Why Join Us
- Ship GenAI tooling used daily in production — MCP servers, RAG pipelines, and AI agents for real incident triage
- Be a founding member of a greenfield engineering hub — shape the culture and standards from day one
- Collaborate with engineers across Dublin, London, and New York
- Work at scale — millions of securities, billions of data points, clients depending on it around the clock
- Join a team that values curiosity, learning, and measurable impact
If this sounds like you:
Apply if you think we're a good match, and we'll get in touch to let you know what the next steps are.