Apply before 30 September 2025
Position: Big Data Analytics + DevOps Engineer
Experience Required: 2–3 years (Freshers with strong knowledge are welcome)
Location: Punawale, Pune
Employment Type: Full-Time, Permanent
Compensation:
Freshers: Stipend of ₹8,000 – 10,000 per month during the training/probation period
Experienced (2–3 years): ₹2 – 5 LPA (based on skills and experience)
Job Description:
We are seeking a skilled Big Data Analytics + DevOps Engineer with expertise in Kafka, Hadoop, Spark, Java, data analytics, and DevOps. The candidate will be responsible for designing, developing, and managing large-scale data processing systems and ensuring smooth deployment and scalability of data applications.
Key Responsibilities:
1. Develop and maintain big data pipelines using Hadoop and Spark.
2. Handle real-time data streaming using Kafka.
3. Implement data analytics solutions to extract insights from large datasets.
4. Collaborate with DevOps teams on CI/CD, automation, and deployment of data applications.
5. Write efficient, maintainable code in Java and integrate it with big data frameworks.
6. Monitor, troubleshoot, and optimize the performance of data processing pipelines.
Mandatory Skills:
1. Big Data Technologies: Hadoop, Spark
2. Streaming Platforms: Kafka
3. Programming: Java
4. Data Analytics: strong understanding of analytics concepts and processing of large datasets
5. DevOps Practices: CI/CD, automation, monitoring, deployment pipelines
Qualifications:
1. Proven experience in Kafka, Hadoop, Spark, Java, DevOps, and data analytics
2. Strong problem-solving and analytical skills
3. Ability to work collaboratively in a fast-paced environment