
Senior Data Engineer (Python, PySpark, Hive, Big Data) – Bengaluru (Confidential)



Job description

About the Role

We are looking for a highly skilled Senior Data Engineer with a strong background in Python, PySpark, Hive, and distributed data processing frameworks.

The ideal candidate will have 5+ years of hands-on experience in building and optimizing big data pipelines, with expertise in both batch and streaming data processing using Spark.

Note: Applications will be considered only from candidates currently working in any of the following organizations or other NSE/BSE-listed companies:
Wipro, Accenture, TCS, IBM, Cognizant, Nagarro, Infosys, Tech Mahindra, PwC, KPMG, Deloitte, BCG, Capgemini, Mphasis, HP, LTI, L&T, Oracle, HCL, Persistent Systems, KPIT, Hexaware, Mindtree, Concentrix, UST Global.

Key Responsibilities

  • Design, develop, and maintain efficient and scalable big data pipelines using Python, PySpark, Hive, and Spark.
  • Work with large-scale datasets in both batch and streaming environments.
  • Perform Spark performance tuning and optimize resource usage.
  • Collaborate with cross-functional teams including data scientists, analysts, and platform engineers.
  • Ensure data quality and integrity across various data pipelines.
  • Utilize orchestration tools such as Apache Airflow for workflow management.
  • Troubleshoot and resolve data pipeline issues in a timely manner.

Mandatory Skills

  • Strong proficiency in Python programming.
  • Hands-on experience with PySpark and Apache Spark for big data processing.
  • Expertise in Hive for data warehousing and querying.
  • Deep understanding of distributed computing principles and data processing frameworks.
  • Experience in Spark performance tuning (both batch and streaming jobs).

Secondary Skills (Nice to Have)

  • Familiarity with orchestration tools like Apache Airflow or similar.
  • Experience with data lake or data mesh architectures.
  • Exposure to cloud platforms (AWS, Azure, or GCP) for big data services.

Qualifications

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • 5+ years of relevant experience in big data engineering roles.
  • Current or previous employment at one of the listed companies is mandatory.

What We Offer

  • Competitive salary up to ₹19 LPA
  • Opportunity to work with cutting-edge technologies and large-scale data systems
  • Collaborative and inclusive work culture
  • Continuous learning and growth opportunities

If you meet the criteria and are ready to take on a new challenge in big data engineering, we encourage you to apply today!


Skills Required
Python, PySpark, Hive, Apache Spark


Profession Category

Computer Occupations


