Job Description
About the Opportunity:

We are looking for a skilled Azure Data Engineer with hands-on experience in Azure Data Factory, Snowflake, Databricks, and dbt to architect and implement large-scale data integration and transformation pipelines.

The ideal candidate will bring deep technical expertise in ETL/ELT design, data modeling, and big data processing, along with proficiency in Python and SQL for automating and optimizing complex data workflows.

You'll work in a high-impact role building and maintaining cloud-native data solutions that enable analytics, AI, and business intelligence initiatives across the organization.

This position is ideal for professionals passionate about designing high-performance, scalable, and secure data systems that serve as the backbone for enterprise decision-making.

What You'll Do:

- Design, develop, and maintain ETL/ELT pipelines using Azure Data Factory and Databricks for data ingestion, transformation, and orchestration (see the PySpark sketch at the end of this posting).
- Implement data integration and transformation frameworks in Snowflake, ensuring high performance and scalability (see the Snowflake load sketch at the end of this posting).
- Develop modular and reusable data pipelines with dbt for data transformation, lineage tracking, and testing.
- Build and manage data lake and data warehouse solutions leveraging Azure Data Lake Storage (ADLS) and Snowflake.
- Write optimized SQL and Python scripts for data processing, validation, and automation.
- Collaborate with data scientists, BI developers, and product teams to ensure data availability, quality, and reliability.
- Design and implement data models (star/snowflake schemas) for analytics and reporting workloads.
- Optimize Spark and Databricks jobs for cost, performance, and scalability.
- Establish data quality validation, error handling, and automated monitoring frameworks (see the validation sketch at the end of this posting).
- Integrate CI/CD pipelines for data workflows, ensuring repeatable, version-controlled deployments.
- Ensure compliance with data governance, security policies, and industry regulations.

What You Bring:

- 6 to 10 years of professional experience in data engineering or ETL development, with a focus on cloud data platforms.
- Hands-on experience with Azure Data Factory, Azure Databricks, and Snowflake.
- Proficiency in SQL and Python for developing transformation logic, validation, and automation scripts.
- Expertise in data warehousing concepts, data lake architectures, and big data processing.
- Experience with dbt (data build tool) for transformation management, modular pipelines, and testing.
- Strong understanding of Spark, Delta Lake, and Parquet for distributed data processing.
- Working knowledge of ETL/ELT pipeline orchestration, metadata management, and data lineage tracking.
- Experience implementing CI/CD pipelines and DevOps practices for data workflows.
- Familiarity with Azure cloud services, including ADLS, Synapse, and Key Vault.
- Strong analytical, troubleshooting, and performance optimization skills.
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

Preferred Skills:

- Experience with data governance frameworks, role-based security, and compliance standards.
- Familiarity with AWS Redshift or GCP BigQuery for multi-cloud data integration.
- Exposure to Airflow, Prefect, or other workflow orchestration tools.
- Understanding of ML pipelines and feature store management within data ecosystems.
- Azure Data Engineer, Snowflake, or Databricks certifications are a plus.
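By way of illustration, a minimal sketch of the kind of Databricks ingestion-and-transformation step described under "What You'll Do". All paths, container names, and column names here are hypothetical placeholders, not details from this posting:

```python
# Minimal PySpark sketch of an ingest-and-transform step a Databricks
# pipeline stage might run. All paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Read raw JSON landed in an ADLS "raw" zone (path is illustrative).
raw = spark.read.json("abfss://raw@exampleaccount.dfs.core.windows.net/orders/")

# Basic cleansing and typing: deduplicate, cast, derive a date column.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").isNotNull())
)

# Write to a curated Delta table (Delta is built into Databricks runtimes),
# partitioned by date for downstream analytics.
(orders.write.format("delta")
       .mode("append")
       .partitionBy("order_date")
       .save("abfss://curated@exampleaccount.dfs.core.windows.net/orders/"))
```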
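For the Snowflake integration work, one common pattern is bulk-loading staged files with COPY INTO via the snowflake-connector-python package. This is a hedged sketch; the account, credentials, stage, and table names are all assumed placeholders:

```python
# Sketch of loading staged Parquet files into a Snowflake table using
# COPY INTO. Account, credentials, stage, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",  # in practice, fetch from Azure Key Vault
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Load all Parquet files from an external stage into a staging table.
    cur.execute("""
        COPY INTO STAGING.ORDERS
        FROM @ADLS_ORDERS_STAGE
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print("Load results:", cur.fetchall())
finally:
    conn.close()
```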
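Finally, the data-quality validation mentioned above could start as simple rule checks in Python. A sketch assuming a pandas DataFrame with made-up columns and rules:

```python
# Simple data-quality checks of the kind a validation step might run
# before publishing a dataset. Column names and rules are illustrative.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable rule violations (empty means pass)."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("order_id contains duplicates")
    if df["amount"].isna().any():
        failures.append("amount contains nulls")
    if (df["amount"] < 0).any():
        failures.append("amount contains negative values")
    return failures

if __name__ == "__main__":
    sample = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, None, -5.0]})
    for problem in validate_orders(sample):
        print("FAILED:", problem)
```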