Specialist Software Engineer - Big Data
Societe Generale
Software Engineering
Bengaluru, Karnataka, India
Posted on Dec 24, 2025
Job Summary:
We are seeking a highly skilled and motivated Specialist Software Engineer with deep expertise in Big Data technologies, data pipeline orchestration, and observability tooling. The ideal candidate will be responsible for designing, developing, and maintaining scalable data processing systems and integrating observability solutions to ensure system reliability and performance.
Key Responsibilities:

Big Data Engineering:
- Design and implement robust data pipelines using Apache Kafka, Apache NiFi, Apache Spark, and Sqoop (see the sketch after this list).
- Manage and optimize distributed data storage systems including Hadoop, HDFS, Druid, and Elasticsearch.
- Integrate and maintain data visualization and monitoring tools like Kibana, Grafana, and Logstash.
- Ensure efficient data ingestion, transformation, and delivery across various platforms.
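To give a flavour of the pipeline work this role involves, here is a minimal PySpark Structured Streaming sketch that reads JSON events from a Kafka topic and prints the parsed rows. The broker address, topic name, and schema are illustrative assumptions, not details from this posting, and running it also requires the spark-sql-kafka connector package on the classpath.

```python
# Hypothetical sketch: stream JSON events from Kafka into Spark and print
# the parsed rows. Broker, topic, and schema are assumptions for the demo.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Assumed event schema; a real pipeline would load this from a registry.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("ts", LongType()),
    StructField("payload", StringType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
    .option("subscribe", "events")                        # assumed topic
    .load()
)

# Kafka delivers bytes; cast the value to a string and parse it as JSON.
parsed = raw.select(
    from_json(col("value").cast("string"), schema).alias("e")
).select("e.*")

parsed.writeStream.format("console").outputMode("append").start().awaitTermination()
```

In production the console sink would be replaced by a durable sink such as HDFS or Elasticsearch, which is where the storage systems listed above come in.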
Scripting & Automation:
- Develop automation scripts and data processing utilities using Python 3 and shell scripting (a helper sketch follows this list).
- Build reusable components and libraries for data manipulation and system integration.
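As an example of the kind of reusable Python 3 utility meant here, below is a small ingestion helper that pushes a local file into HDFS via the standard `hdfs dfs -put` CLI, retrying transient failures. The file paths, retry count, and backoff are illustrative choices, not values from this posting.

```python
# Hypothetical ingestion helper: copy a local file into HDFS with simple
# retries around the `hdfs dfs -put` command-line tool.
import subprocess
import time


def hdfs_put(local_path: str, hdfs_dir: str, retries: int = 3, backoff_s: float = 5.0) -> None:
    """Upload local_path into hdfs_dir, retrying on non-zero exit codes."""
    for attempt in range(1, retries + 1):
        result = subprocess.run(
            ["hdfs", "dfs", "-put", "-f", local_path, hdfs_dir],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            return
        if attempt == retries:
            raise RuntimeError(f"hdfs put failed after {retries} attempts: {result.stderr.strip()}")
        time.sleep(backoff_s * attempt)  # linear backoff before the next try


if __name__ == "__main__":
    hdfs_put("/tmp/events.json", "/data/raw/events/")  # assumed paths
```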
Observability & Monitoring:
- Implement and configure observability agents such as Fluentd, Telegraf, and Logstash.
- Collaborate with platform teams to integrate OpenTelemetry for distributed tracing and metrics collection (good to have; a tracing sketch follows this list).
- Maintain dashboards and alerts for system health and performance monitoring.
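For the OpenTelemetry integration, the sketch below traces one pipeline step using the Python SDK's console exporter so it runs standalone (pip install opentelemetry-sdk). The span and attribute names are illustrative assumptions.

```python
# Hypothetical sketch: wrap a pipeline step in an OpenTelemetry span so
# per-step latency and batch volume show up in traces.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export finished spans to stdout; a real deployment would point this at
# an OTLP collector instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("pipeline.sketch")


def transform_batch(records):
    # Record the batch size as a span attribute alongside the timing the
    # span captures automatically.
    with tracer.start_as_current_span("transform_batch") as span:
        span.set_attribute("batch.size", len(records))
        return [r.upper() for r in records]  # placeholder transformation


print(transform_batch(["a", "b", "c"]))
```

Exported spans feed the same Grafana and Kibana dashboards mentioned above, tying tracing into the alerting workflow.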
DevOps & CI/CD:
- Contribute to CI/CD pipeline development using GitHub Actions.
- Collaborate with DevOps teams to ensure seamless deployment and integration of data services.
Collaboration & Documentation:
- Work closely with cross-functional teams, including data scientists, platform engineers, and product managers.
- Document system architecture, data flows, and operational procedures.
- Participate in code reviews, knowledge-sharing sessions, and technical mentoring.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in Big Data engineering and scripting.
- Strong hands-on experience with:
  - Kafka, NiFi, Hadoop, HDFS, Spark, Sqoop
  - Elasticsearch, Druid, Kibana, Grafana
  - Python 3, shell scripting
  - Logstash, Fluentd, Telegraf
- Familiarity with GitHub Actions and basic DevOps practices.
- Exposure to OpenTelemetry is a plus.
- Excellent problem-solving, analytical, and communication skills.
- Experience in building real-time data streaming applications.
- Knowledge of data governance, security, and compliance in Big Data environments.
- Certifications in Big Data technologies or cloud platforms (AWS/GCP/Azure) are a plus.