
Urgent! Data Engineer - Python/ETL Job Opening In Ghaziabad – Now Hiring Etelligens Technologies Pvt Ltd

Data Engineer Python/ETL



Job description

About the Role:

We are seeking an experienced Data Engineer - Python with 5-8 years of hands-on expertise in building scalable data solutions. The ideal candidate will design, develop, and optimize ETL pipelines, ensure data quality and reliability, and collaborate with cross-functional teams to enable data-driven decision-making.

Key Responsibilities:

- Design, develop, and maintain ETL/ELT pipelines using Python, PySpark, and SQL.
- Build and optimize scalable data pipelines leveraging AWS services (Glue, Lambda, S3, Athena, Step Functions).
- Implement and manage Data Warehousing solutions with strong knowledge of SCD (Type 1, Type 2) and Medallion Architecture (see the illustrative sketch after this description).
- Develop efficient data models and ensure partitioning, indexing, and performance optimization in big data environments.
- Ensure high standards of data quality, governance, and security across pipelines and platforms.
- Collaborate with data scientists, analysts, and business stakeholders to translate business requirements into technical solutions.
- Monitor and troubleshoot production pipelines to ensure reliability and scalability.
- Contribute to automation, process improvements, and documentation for data engineering workflows.

Required Skillsets:

- 5-8 years of proven experience in Data Engineering.
- Strong proficiency in Python, PySpark, and SQL for data processing.
- Solid understanding of ETL/ELT design principles and experience with AWS services (Glue, Lambda, S3, Athena, Step Functions).
- Hands-on experience with Data Warehousing concepts, SCD (Type 1, Type 2), and Medallion Architecture.
- Expertise in data modeling, partitioning strategies, and query performance tuning.
- Strong problem-solving and debugging skills in big data environments.
- Excellent communication skills to explain technical concepts to non-technical stakeholders.

Nice-to-Have Skills:

- Experience with DBT for data transformation and testing.
- Exposure to Databricks and the Lakehouse architecture.
- Familiarity with CI/CD for data pipelines and infrastructure-as-code (Terraform).
- Knowledge of data security, compliance, and governance best practices.

(ref:hirist.tech)


Required Skill Profession

Computer Occupations





