Job Description
We are looking for a Senior Data Engineer with strong expertise in SQL, Python, Azure Synapse, Azure Data Factory, Snowflake, and Databricks.
The ideal candidate should have a solid understanding of SQL (DDL, DML, query optimization) and ETL pipelines, along with a learning mindset to adapt to evolving technologies.

Key Responsibilities:

- Collaborate with business and IT stakeholders to define business and functional requirements for data solutions.
- Design and implement scalable ETL/ELT pipelines using Azure Data Factory, Databricks, and Snowflake.
- Develop detailed technical designs, data flow diagrams, and future-state data architecture.
- Evangelize modern data modelling practices, including entity-relationship models, star schema, and the Kimball methodology.
- Ensure data governance, quality, and validation by working closely with quality engineering teams.
- Write, optimize, and troubleshoot complex SQL queries, including DDL, DML, and performance tuning.
- Work with Azure Synapse, Azure Data Lake, and Snowflake for large-scale data processing.
- Implement DevOps and CI/CD best practices for automated data pipeline deployments.
- Support real-time streaming data processing with Spark, Kafka, or similar technologies.
- Provide technical mentorship and guide team members on best practices in SQL, ETL, and cloud data solutions.
- Stay up to date with emerging cloud and data engineering technologies and demonstrate a continuous learning mindset.

Required Skills & Qualifications:

Primary Requirements:

- SQL Expertise: Strong hands-on experience with DDL, DML, query optimization, and performance tuning.
- Programming Languages: Proficiency in Python or Java for data processing and automation.
- Data Modelling: Good understanding of entity-relationship modelling, star schema, and the Kimball methodology.
- Cloud Data Engineering: Hands-on experience with Azure Synapse, Azure Data Factory, Azure Data Lake, Databricks, and Snowflake.
- ETL Development: Experience building scalable ETL/ELT pipelines and data ingestion workflows.
- Ability to learn and apply Snowflake concepts as needed.
- Communication Skills: Strong presentation and communication skills to engage both technical and business stakeholders in strategic discussions.
- Financial Services Domain (optional): Knowledge of the financial services domain.

Good to Have Skills:

- DevOps & CI/CD: Experience with Git, Jenkins, Docker, and automated deployments.
- Streaming Data Processing: Experience with Spark, Kafka, or real-time event-driven architectures.
- Data Governance & Security: Understanding of data security, compliance, and governance frameworks.
- AWS: Knowledge of AWS cloud data solutions (Glue, Redshift, Athena, etc.) is a plus.

Notice period: 30 days

(ref:hirist.tech)