
Senior Data Warehouse Engineer - ETL/Python

TGS The Global Skills | New Delhi, India



Job description

Description:

The Data Warehouse Engineer will develop and maintain pipelines that transform raw ingested data into clean, analytics-ready schemas consumed by executives, managers, analysts, and clients. This is a hands-on role ideal for someone who loves working at the intersection of engineering and analytics.

The candidate will join a talented and supportive team that owns the full ELT lifecycle, from ingestion to modeling, and will have the opportunity to contribute to architecture decisions, workflow improvements, and new tool adoption. The role works with a modern data stack (Airbyte, dbt, Airflow, Redshift) and builds solutions that directly support business-critical initiatives, from performance reporting to marketing attribution. There is room to grow technical depth, broaden data integration skills, and make a visible impact on AffinityX's data maturity.

Responsibilities:

- Build and maintain Airbyte extract/load connectors for third-party APIs.
- Write robust, modular dbt models to transform raw data into a normalized reporting layer.
- Develop and monitor DAGs in Airflow for scheduling and orchestration (a brief illustrative sketch follows the candidate details below).
- Ensure data quality through validation, testing automation, and documentation.
- Write and tune complex SQL queries for reporting and analysis use cases.
- Collaborate with internal users to understand data requirements and deliver reliable solutions.
- Contribute to process improvements in CI/CD, pipeline observability, and testing frameworks.

Desired Candidate Details:

- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field (or equivalent experience).
- 6+ years of experience in data engineering, analytics engineering, or backend development focused on data systems.
- Strong SQL skills with experience on Redshift and at least one other platform such as MySQL or Oracle.
- Proficient in Python, with the ability to write clean, testable, and reusable data pipeline code.
- Experience working with structured and semi-structured data formats (CSV, spreadsheets, XML, JSON).
- Familiarity with tools such as dbt for transformation and Airflow for orchestration.
- Comfortable working with REST APIs and integrating third-party data sources.
- Experience using Jira to manage day-to-day work: creating and completing subtasks, documenting progress, and logging work time or estimates.
- Familiar with Agile team processes, version control, and peer-reviewed development practices.
- Experience in marketing/advertising analytics or campaign data is a plus.

(ref:hirist.tech)
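To picture the Airflow and dbt responsibilities listed above, here is a minimal sketch of a daily DAG that runs a dbt project and then tests it. This is an illustration only, not the employer's actual pipeline; it assumes Airflow 2.4+, and the DAG name, schedule, and project path /opt/dbt/reporting are hypothetical.

```python
# Minimal illustrative sketch, not the employer's real pipeline.
# Assumes Airflow 2.4+ and a hypothetical dbt project at /opt/dbt/reporting.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="reporting_layer_refresh",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # refresh once per day after raw loads land
    catchup=False,
) as dag:
    # Transform raw ingested tables into the normalized reporting layer with dbt.
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/reporting",
    )

    # Run dbt tests so data-quality issues surface before reports are consumed.
    test_models = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/reporting",
    )

    run_models >> test_models
```

In practice the run and test steps are kept as separate tasks so a failed test blocks downstream consumers without hiding which stage broke.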


Required Skill Profession: Computer Occupations



