
Data Engineer (DevOps) – Pune (Employer: Confidential)




Job description

Key Responsibilities

• Ensure platform uptime and application health as per SLOs/KPIs

• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.

• Debug and resolve complex production issues, performing root cause analysis

• Automate routine tasks and implement self-healing systems

• Design and maintain dashboards, alerts, and operational playbooks

• Participate in incident management, problem resolution, and RCA documentation

• Own and update SOPs for repeatable processes

• Collaborate with L3 and Product teams for deeper issue resolution

• Support and guide L1 operations team

• Conduct periodic system maintenance and performance tuning

• Respond to user data requests and ensure timely resolution

• Address and mitigate security vulnerabilities and compliance issues

Technical Skillset

• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger

• Strong Linux fundamentals and scripting (Python, Shell)

• Experience with Apache NiFi, Airflow, Yarn, and Zookeeper

• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki

• Working knowledge of Kubernetes, Docker, Jenkins CI/CD pipelines

• Strong SQL skills (Oracle/Exadata preferred)

• Familiarity with DataHub, DataMesh, and security best practices is a plus

• Strong problem-solving and debugging mindset

• Ability to work under pressure in a fast-paced environment

• Excellent communication and collaboration skills

• Ownership, customer orientation, and a bias for action
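Responsibilities such as "automate routine tasks and implement self-healing systems" typically come down to small watchdog scripts. A minimal, hypothetical Python sketch of the decision logic (the service name and restart command are illustrative assumptions, not part of this role's actual tooling):

```python
import subprocess

FAILURE_THRESHOLD = 3  # consecutive failed checks before restarting

def should_restart(history, threshold=FAILURE_THRESHOLD):
    """Return True if the last `threshold` health checks all failed.

    `history` is a list of booleans (True = healthy), newest last.
    Requiring consecutive failures avoids restarting on a single blip.
    """
    if len(history) < threshold:
        return False
    return not any(history[-threshold:])

def restart_service(name):
    # Illustrative only: in practice this might be systemctl, a
    # Kubernetes rollout restart, or a supervisor API call.
    subprocess.run(["systemctl", "restart", name], check=True)

# Example decision trace (no real restart is issued here):
checks = [True, True, False, False, False]
print(should_restart(checks))  # True: three consecutive failures
```

Tying the restart to a streak of failures rather than a single one is a common guard against flapping; real setups usually add backoff and alerting on top.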


Skills Required
Airflow, Hive, Hadoop, PySpark, Shell Scripting, Python
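As a rough illustration of the Python/shell-scripting side of this skill list, a hedged sketch that tallies log levels from application output (the `<LEVEL> message` line format is an assumption for the example):

```python
from collections import Counter

def count_levels(log_lines):
    """Tally log levels, assuming a '<LEVEL> message' line format."""
    levels = Counter()
    for line in log_lines:
        parts = line.split(maxsplit=1)
        if parts:
            levels[parts[0]] += 1
    return levels

sample = [
    "INFO starting ingestion job",
    "ERROR kafka broker unreachable",
    "ERROR retry limit exceeded",
    "WARN partition lag growing",
]
print(count_levels(sample)["ERROR"])  # 2
```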


Required Skill Profession

Computer Occupations


