Job Description
Responsibilities:

- Architect and implement modular, test-driven ELT pipelines using dbt on Snowflake (see the illustrative dbt model sketch at the end of this description).
- Design layered data models (e.g., staging, intermediate, and mart layers / medallion architecture) aligned with dbt best practices.
- Lead ingestion of structured and semi-structured data from APIs, flat files, cloud storage (Azure Data Lake, AWS S3), and databases into Snowflake.
- Optimize Snowflake for performance and cost: warehouse sizing, clustering, materializations, query profiling, and credit monitoring.
- Apply advanced dbt capabilities, including macros, packages, custom tests, sources, exposures, and documentation using dbt docs (see the macro sketch at the end of this description).
- Orchestrate workflows using dbt Cloud, Airflow, or Azure Data Factory, integrated with CI/CD pipelines.
- Define and enforce data governance and compliance practices using Snowflake RBAC, secure data sharing, and encryption strategies.
- Collaborate with analysts, data scientists, architects, and business stakeholders to deliver validated, business-ready data assets.
- Mentor junior engineers, lead architectural and code reviews, and help establish reusable frameworks and standards.
- Engage with clients to gather requirements, present solutions, and manage end-to-end project delivery in a consulting environment.

Qualifications:

- 5 to 8 years of experience in data engineering roles, with 3+ years of hands-on experience working with Snowflake and dbt in production environments.

Skills:

Cloud Data Warehouse & Transformation Stack:

- Expert-level knowledge of SQL and Snowflake, including performance optimization, storage layers, query profiling, clustering, and cost management.
- Experience in dbt development: modular model design, macros, tests, documentation, and version control using Git.

Orchestration and Integration:

- Proficiency in orchestrating workflows using dbt Cloud, Airflow, or Azure Data Factory.
- Comfortable working with data ingestion from cloud storage (e.g., Azure Data Lake, AWS S3) and databases.

Data Modelling and Architecture:

- Dimensional modelling (star/snowflake schemas) and slowly changing dimensions.
- Knowledge of modern data warehousing principles.
- Experience implementing Medallion Architecture (Bronze/Silver/Gold layers).
- Experience working with Parquet, JSON, CSV, and other data formats.

Programming Languages:

- Python: data transformation, notebook development, and automation.
- SQL: strong grasp of SQL for querying and performance tuning.
- Jinja (nice to have): exposure to Jinja for advanced dbt macros.

Data Engineering & Analytical Skills:

- ETL/ELT pipeline design and optimization.
- Exposure to AI/ML data pipelines, feature stores, or MLflow for model tracking (good to have).
- Exposure to data quality and validation frameworks.

(ref:hirist.tech)
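
Illustrative dbt model sketch. As a minimal example of the layered, test-driven dbt-on-Snowflake pattern referenced in the responsibilities above, the two models below show a staging view feeding an incremental mart model. The model, source, and column names (stg_orders, fct_orders, a raw.orders source) are assumptions for illustration only, not part of this role's codebase.

-- models/staging/stg_orders.sql : hypothetical staging model over an assumed raw.orders source
{{ config(materialized='view') }}

select
    order_id,
    customer_id,
    cast(order_date as date) as order_date,
    amount_cents
from {{ source('raw', 'orders') }}

-- models/marts/fct_orders.sql : hypothetical incremental fact model built on the staging layer
{{ config(materialized='incremental', unique_key='order_id') }}

select
    order_id,
    customer_id,
    order_date,
    amount_cents
from {{ ref('stg_orders') }}
{% if is_incremental() %}
  -- only process rows newer than what is already in the target table
  where order_date > (select max(order_date) from {{ this }})
{% endif %}

In a typical project, generic tests such as unique and not_null on order_id would be declared in a schema.yml file, and `dbt build` would run the models and their tests together.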
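
Illustrative dbt macro sketch. To show the kind of Jinja macro work mentioned under the dbt and Jinja skill items, here is a minimal, hypothetical macro and its usage in a model; the macro name and columns are assumptions for illustration.

-- macros/cents_to_dollars.sql : hypothetical reusable Jinja macro
{% macro cents_to_dollars(column_name, precision=2) %}
    round({{ column_name }} / 100.0, {{ precision }})
{% endmacro %}

-- usage inside a model
select
    order_id,
    {{ cents_to_dollars('amount_cents') }} as amount_dollars
from {{ ref('stg_orders') }}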