Job Description
Key Responsibilities:
1. Design, develop, and optimize ETL processes, with a focus on Snowflake integration.
2. Collaborate with stakeholders to gather and analyze requirements, translating them into technical specifications.
3. Architect and implement efficient data pipelines to support various business needs using Snowflake and DBT.
4. Perform data profiling, cleansing, and transformation to ensure data accuracy and consistency.
5. Monitor and troubleshoot ETL jobs, identifying and resolving performance issues and data anomalies.
6. Implement best practices for data integration, storage, and retrieval within the Snowflake environment.
7. Work closely with data engineers, analysts, and business users to understand data requirements and deliver solutions that meet their needs.
8. Stay updated with the latest trends and advancements in ETL technologies, Matillion features, and AWS services.
9. Design, develop, and optimize complex data pipelines within the Snowflake data warehouse environment.
10. Implement scalable ETL processes to ingest, transform, and load data from various sources into Snowflake.
11. Collaborate with data architects and analysts to design and implement efficient data models within Snowflake.
12. Optimize SQL queries, database configurations, and data pipeline performance for enhanced efficiency and scalability.
13. Set up and maintain GitHub repositories for version control of data engineering code, configurations, and scripts.
14. Establish and enforce branching strategies, pull request workflows, and code review processes to ensure code quality and collaboration.
15. Develop and implement robust data quality checks and validation processes to ensure the accuracy and integrity of data within Snowflake.
16. Monitor data pipelines for anomalies, errors, and discrepancies, and implement proactive measures to maintain data quality.
17. Automate deployment, monitoring, and management of data pipelines using orchestration tools like Airflow or custom automation scripts.
18. Continuously enhance automation processes to streamline data engineering workflows and minimize manual interventions.
19. Document data engineering processes, pipeline configurations, and troubleshooting steps for knowledge sharing and reference.
20. Provide mentorship and training to junior team members on Snowflake best practices, GitHub usage, and data engineering techniques.
Required Skills and Qualifications:
- Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
- 3+ years of experience in ETL development.
- Proficiency in Snowflake, including design, development, and administration.
- Solid understanding of AWS services such as S3, Redshift, EC2, and Lambda.
- Strong SQL skills, with experience in complex query optimization and performance tuning.
- Experience with data modeling concepts and techniques.
- Proficiency in DBT (Data Build Tool) for data transformation and modeling.
- Extensive experience with version control systems, particularly GitHub, and proficiency in Git workflows and branching strategies.
- Solid understanding of data modeling principles, ETL processes, and data integration methodologies.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Proven track record of delivering high-quality solutions on time and within budget.
Preferred Qualifications:
- Snowflake certifications.
- Experience with other ETL tools and technologies.
- Familiarity with Agile development methodologies.
- Knowledge of data governance and compliance standards.
- Experience with Data Vault modeling and implementation.
- Familiarity with Python or other programming languages for data manipulation.