MLOps Engineering Lead
Becton Dickinson
Job Description Summary
BD's Data Science & Advanced Analytics team (TGS) is seeking an experienced MLOps Engineer with expertise in Azure & Databricks to join our team. The ideal candidate will be responsible for designing, developing, and maintaining machine learning pipelines, ensuring robust deployment and scaling of models, and fostering collaboration between data science and operations teams. This role is crucial for the efficiency, reliability, and compliance of our machine learning systems and for driving continuous improvement in our MLOps practices.
Job Description
As part of BD's Data Science & Advanced Analytics team, you are responsible for designing and maintaining machine learning pipelines that power cutting-edge AI solutions across BD. Key responsibilities include pipeline design, data management, model training, deployment, and automating the ML lifecycle. The role also involves managing CI/CD pipelines, ensuring model scalability and performance, collaborating with cross-functional teams, and adhering to security and compliance standards.
About BD (www.bd.com)
BD is one of the largest medical technology companies in the world and is advancing the world of health by improving medical discovery, diagnostics, and the delivery of care. At BD, the work we do is life-changing: every day, our customers and their patients depend on BD products to improve health.
The BD Technology Campus India (BDTCI) is an integral part of BD's global R&D network for product development and product engineering. The center leads the design, development, and delivery of critical R&D solutions for global markets.
You will work with some of the brightest minds in technology, in a unique environment that fosters and supports ingenuity. You’ll drive digital solutions that better serve our customers, patients and employees in pursuit of helping all people live healthy lives.
Get ready. Opportunity abounds in this unique and exciting space where you’ll have the privilege of being one of the first 500 to join the organization.
Technology Global Services (TGS)
Technology Global Services is the group that builds the technology strategy and roadmap for BD. It is a capability center within BD, entrusted to build and execute enterprise capabilities, develop standards for technology development, and enable emerging capabilities in Artificial Intelligence, Machine Learning, and Automation, among others.
Joining TGS in India is an exciting opportunity to be part of a greenfield initiative, establishing BD's technology capabilities in countries worldwide to tackle challenging global health issues. With the agility of a startup and the support of a long-standing medical technology institution, you’ll have the best of both worlds when it comes to driving innovation forward. Not only will you be part of exponential growth, but you’ll also be able to flex your skillset every step of the way. You will be empowered to choose the experiences, learning, and opportunities that will help you in the pursuit of your aspirations.
Key Responsibilities
Pipeline Development:
- Pipeline Design: Architect and implement scalable and robust MLOps pipelines on Azure Databricks.
- Data Management: Develop data ingestion, transformation, and preprocessing pipelines to ensure clean and structured data for model training.
- Model Training: Set up automated workflows for model training, hyperparameter tuning, and validation (an illustrative sketch of such a workflow follows this list).
- Model Deployment: Implement solutions for deploying models into production environments, ensuring seamless integration with existing systems.
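The following sketch illustrates what an automated training step of this kind could look like. It is a minimal example only, assuming an MLflow tracking server (preconfigured on Azure Databricks) and scikit-learn as the modelling library; the dataset, experiment name, and hyperparameter grid are placeholders, not BD's actual pipeline.

```python
# Minimal training-and-tracking sketch: sweep one hyperparameter and log the
# parameters, metrics, and fitted model to MLflow for each run.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demo-training")  # hypothetical experiment name

X, y = load_diabetes(return_X_y=True)   # placeholder dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A simple grid standing in for a real hyperparameter-tuning strategy.
for n_estimators in (50, 100, 200):
    with mlflow.start_run():
        model = RandomForestRegressor(n_estimators=n_estimators, random_state=42)
        model.fit(X_train, y_train)
        rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5

        mlflow.log_param("n_estimators", n_estimators)
        mlflow.log_metric("rmse", rmse)
        mlflow.sklearn.log_model(model, artifact_path="model")
```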
Automation and Integration:
- End-to-End Automation: Automate the entire ML lifecycle, from data ingestion to model deployment and monitoring.
- CI/CD Pipelines: Develop and manage CI/CD pipelines for ML models using tools like Azure DevOps, Jenkins, or GitHub Actions (a sample promotion step is sketched after this list).
- Integration: Collaborate with data scientists, data engineers, and software developers to integrate ML models into production applications.
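As a concrete illustration of how a CI/CD job might promote a validated model, the sketch below registers a run's model in the MLflow Model Registry and tags it for staging. The run ID, model name, and quality gate are hypothetical, and it assumes a recent MLflow release where model-version aliases are available.

```python
# Minimal promotion step a CI/CD pipeline (Azure DevOps, GitHub Actions, or
# Jenkins) could call after automated tests pass.
import sys

import mlflow
from mlflow.tracking import MlflowClient

RUN_ID = sys.argv[1]            # run that produced the candidate model
MODEL_NAME = "demo-regressor"   # hypothetical registered-model name
RMSE_THRESHOLD = 60.0           # hypothetical quality gate

client = MlflowClient()
rmse = client.get_run(RUN_ID).data.metrics["rmse"]

if rmse > RMSE_THRESHOLD:
    sys.exit(f"Quality gate failed: rmse={rmse:.2f} > {RMSE_THRESHOLD}")

# Register the model version produced by the run and mark it for staging.
version = mlflow.register_model(f"runs:/{RUN_ID}/model", MODEL_NAME)
client.set_registered_model_alias(MODEL_NAME, "staging", version.version)
print(f"Registered {MODEL_NAME} v{version.version} with alias 'staging'")
```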
Monitoring and Maintenance:
- Performance Monitoring: Implement monitoring solutions to track model performance, accuracy, and drift (a simple drift check is sketched after this list).
- Scalability: Ensure models are scalable and can handle increased load and data volume.
- Issue Resolution: Diagnose and resolve issues in ML pipelines promptly to minimize downtime and maintain high availability.
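One common way to quantify drift is the Population Stability Index (PSI) between a training-time reference sample and recent production inputs. The sketch below is a minimal, NumPy-only illustration for a single feature; in practice the samples would come from logged inference data and the score would feed an alerting system. The 0.2 threshold is a widely used rule of thumb, not a BD standard.

```python
# Minimal drift check: PSI between reference (training-time) and current
# (production) values of one feature, using synthetic data as a stand-in.
import numpy as np


def population_stability_index(reference, current, bins=10):
    """PSI over equal-width bins derived from the reference distribution."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)

    # Clip to avoid division by zero / log of zero for empty bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))


rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)   # feature values seen at training time
current = rng.normal(0.3, 1.0, 2_000)      # recent production values (shifted)

psi = population_stability_index(reference, current)
print(f"PSI = {psi:.3f}", "-> drift suspected" if psi > 0.2 else "-> stable")
```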
Documentation and Best Practices:
- Documentation: Create and maintain detailed documentation for MLOps processes, pipelines, and best practices.
- Best Practices: Stay up to date with the latest advancements in MLOps and Azure Databricks, and incorporate new techniques and tools into the workflow.
Security and Compliance:
- Security Measures: Ensure that all ML workflows adhere to security standards and compliance requirements.
- Data Privacy: Implement data privacy and protection measures throughout the ML lifecycle.
Technical Skills:
- Experience: Proven experience as an MLOps Engineer or in a similar role, with a strong focus on Azure Databricks.
- Azure Expertise: In-depth knowledge of Azure services, particularly Azure Databricks, Azure Machine Learning, Azure Data Factory, and other related services.
- Programming: Proficiency in Python and SQL. Familiarity with other programming languages such as Scala or R is a plus.
- ML Lifecycle Tools: Experience with ML lifecycle management tools such as MLflow, Kubeflow, or similar platforms.
- Containerization: Proficiency in containerization technologies like Docker and orchestration with Kubernetes.
- CI/CD Practices: Strong understanding and experience with CI/CD practices and tools, such as Jenkins, GitHub Actions, or Azure DevOps.
- Data Engineering: Solid understanding of data engineering principles and experience with ETL processes.
- Big Data Technologies: Familiarity with big data technologies like Apache Spark and Hadoop (an illustrative Spark ETL sketch follows this list).
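As an example of the kind of ETL work involved, the sketch below shows a small PySpark job that ingests a raw CSV file, applies light cleaning and typing, and writes a curated Parquet dataset. Paths and column names are placeholders; on Databricks the SparkSession is provided and Delta Lake would typically replace Parquet.

```python
# Minimal PySpark ETL sketch: raw CSV -> deduplicated, typed, curated Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

raw = (
    spark.read
    .option("header", True)
    .csv("/tmp/raw/events.csv")   # hypothetical input path
)

curated = (
    raw.dropDuplicates(["event_id"])                       # assumed key column
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
)

curated.write.mode("overwrite").parquet("/tmp/curated/events")  # hypothetical output
```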
Soft Skills:
- Problem-Solving: Excellent analytical and problem-solving skills, with a keen attention to detail.
- Communication: Strong verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- Collaboration: Demonstrated ability to work effectively in a collaborative, cross-functional team environment.
Preferred Qualifications:
- Degree: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
- Experience: 7 to 9 years of overall work experience, including 3 to 5 years as an MLOps Engineer.
- Additional Cloud Experience: Experience with other cloud platforms such as AWS or Google Cloud Platform.
- Certifications: Relevant certifications in Azure or related technologies (e.g., Microsoft Certified: Azure Data Scientist).
- Advanced ML Techniques: Experience with advanced machine learning techniques and frameworks (e.g., deep learning, reinforcement learning).
Benefits:
- Competitive Salary: Attractive salary package based on experience and qualifications.
- Professional Development: Opportunities for professional development, training, and certifications.
- Work-Life Balance: Flexible working hours and remote work options.
- Inclusive Environment: An inclusive and collaborative work culture that values diversity and innovation.
- Paid Time Off: Generous paid time off, including vacation days, sick leave, and holidays.
Application Process:
Interested candidates are invited to submit their resume, cover letter, and any relevant project portfolios or GitHub repositories. Our recruitment team will review all applications and reach out to candidates who meet our qualifications for further interviews and assessments.