Senior Data Engineer in Azure using Databricks for Scala/Python

United States - New Jersey, United States - North Carolina

Information Technology (IT)

Group Functions

Your role

Do you like building complex, secure platforms at the touch of a button? Are you passionate about developing automated infrastructure as code that is rolled out successfully across a global organization? Do you have what it takes to build robust solutions that aid data engineers in delivering their data pipelines?
At UBS, the Group Compliance & Regulatory Governance (GCRG) Technology team is looking for a hands-on Data Engineer on Azure, leveraging Databricks for Scala/Python, to:
• engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, using Databricks and Airflow
• craft complex transformation pipelines across multiple datasets, producing valuable insights that inform business decisions, making use of our internal data platforms, and educate others about best practices for big data analytics
• develop and apply data engineering techniques to automate manual processes and solve challenging business problems, and ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements
• build observability into our solutions, monitor production health, help to resolve incidents, and remediate the root cause of risks and issues
• leverage Airflow to build complex, branching, data-driven pipelines and leverage Databricks to build the Spark layer of data pipelines (see the sketch after this list)
• leverage Python and Scala for low-level, complex data operations, codify best practices and methodology, and share knowledge with other engineers at UBS
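
To give a flavor of the Airflow/Databricks work described above, here is a minimal, hypothetical sketch of a branching, data-driven DAG that routes each run to one of two Databricks jobs. It assumes Airflow 2.4+ with the Databricks provider installed and a configured Databricks connection; the DAG id, task ids, branching logic, and job ids are placeholders, not an actual UBS pipeline:

```python
# Hypothetical sketch: a branching, data-driven Airflow DAG that triggers
# pre-configured Databricks jobs for the Spark layer of a pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.operators.python import BranchPythonOperator
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator


def choose_branch(**context):
    # Stand-in for real data-driven routing: full rebuild on Mondays,
    # incremental load on every other day.
    if context["logical_date"].weekday() == 0:
        return "full_rebuild"
    return "incremental_load"


with DAG(
    dag_id="gcrg_sourcing_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    branch = BranchPythonOperator(task_id="choose_branch", python_callable=choose_branch)

    # Each branch triggers a pre-configured Databricks job (job ids are
    # placeholders) that runs the Spark (Scala/Python) layer of the pipeline.
    full_rebuild = DatabricksRunNowOperator(task_id="full_rebuild", job_id=1234)
    incremental_load = DatabricksRunNowOperator(task_id="incremental_load", job_id=5678)

    # Succeed when either branch completes; the other branch is skipped.
    done = EmptyOperator(task_id="done", trigger_rule="none_failed_min_one_success")

    branch >> [full_rebuild, incremental_load] >> done
```

The trigger rule on the final task lets the DAG succeed when either branch runs, which keeps this branching pattern composable inside larger pipelines.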

Job Reference #

297450BR

City

Raleigh, Weehawken

Job Type

Full Time

Your Career Comeback

We are open to applications from career returners. Find out more about our program on ubs.com/careercomeback.

Your team

You will be working as part of the Group Compliance Regulatory Governance Technology stream, which focuses on data analytics and reporting. Our crew uses the latest data platforms to further the group’s data strategy and realize the true potential of data as an asset, utilizing data lakes and data virtualization for advanced analytics and AI/ML. The crew also ensures the data is managed with strategic sourcing and data quality tooling. Your team will be responsible for building the central GCRG data lake, developing data pipelines to strategically source data from master and authoritative sources, creating a data virtualization layer, and building connectivity for advanced analytics and Elasticsearch capabilities with the aid of cloud computing.


Diversity helps us grow, together. That’s why we are committed to fostering and advancing diversity, equity, and inclusion. It strengthens our business and brings value to our clients.

Your expertise

• bachelor’s or master’s degree in computer science or a similar engineering discipline is highly desired
• ideally 5+ years of total IT experience in software development or engineering, and ideally 3+ years of hands-on experience designing and building scalable data pipelines for large datasets on cloud data platforms
• ideally 3+ years of hands-on experience in distributed processing using Databricks, Apache Spark (Python), and Kafka, and in leveraging the Airflow scheduler/executor framework
• ideally 2+ years of hands-on programming experience in Scala (must have), with Python and Java preferred
• experience with monitoring solutions such as Spark cluster logs, Azure Logs, Application Insights, and Grafana to optimize pipelines, and knowledge of Azure-capable languages such as Python, Scala, or Java
• proficiency in working with large and complex codebase management systems such as GitHub/GitLab and Gitflow, as a project committer at both the command-line and IDE level, using tools like IntelliJ/AzureStudio
• experience working with Agile development methodologies, delivering within Azure DevOps, and using automated testing tools that support CI and release management
• expertise in optimized dataset structures in Parquet and Delta Lake formats, with the ability to design and implement complex transformations between datasets (see the sketch after this list)
• expertise in optimized Airflow DAGs and branching logic for tasks to implement complex pipelines and outcomes, and expertise in authoring both traditional SQL and NoSQL
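
As an illustration of the Parquet/Delta Lake expertise above, here is a minimal PySpark sketch of the kind of transformation this role involves. It assumes a Databricks runtime with Delta Lake available; the paths, column names, and filter logic are hypothetical:

```python
# Hypothetical sketch: read a raw Parquet dataset, derive a curated view,
# and merge it into a Delta Lake table. Paths and columns are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists; getOrCreate reuses it.
spark = SparkSession.builder.appName("gcrg-example").getOrCreate()

# Read the raw sourced data (Parquet) and derive a curated view.
raw = spark.read.parquet("/mnt/raw/trades")  # hypothetical path
curated = (
    raw.filter(F.col("status") == "SETTLED")
       .withColumn("trade_date", F.to_date("trade_ts"))
       .select("trade_id", "trade_date", "notional", "currency")
)

# Merge into the Delta target keyed on trade_id so reruns stay idempotent.
target = DeltaTable.forPath(spark, "/mnt/curated/trades")  # hypothetical path
(
    target.alias("t")
    .merge(curated.alias("s"), "t.trade_id = s.trade_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

Using a Delta merge rather than an overwrite keeps the load idempotent, so rerunning the same Airflow task does not duplicate rows in the target table.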

About us

UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors.

We have a presence in all major financial centers in more than 50 countries.

How we hire

This role requires an assessment on application. Learn more about how we hire: www.ubs.com/global/en/careers/experienced-professionals.html

Join us

At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs.

From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact?

Contact Details

UBS Business Solutions SA
UBS Recruiting

Disclaimer / Policy statements

UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
