Job Overview
Category
Computer Occupations
Job Description
About the Role:

We are seeking a highly skilled Data Engineer to design, build, and manage robust data pipelines and frameworks on Google Cloud Platform (GCP). The ideal candidate will have hands-on experience with PySpark, Python, GCP services (BigQuery, Cloud Functions, Pub/Sub), and Terraform, along with strong skills in pipeline development, monitoring, and documentation (HLD and LLD).

Key Responsibilities:

Data Pipeline Development:
- Design, build, and optimize scalable ETL/ELT data pipelines using PySpark and Python (a minimal illustrative sketch follows the description below).
- Implement GCP-native solutions leveraging BigQuery, Cloud Functions, Pub/Sub, and related services.
- Use Terraform to automate infrastructure provisioning and deployments.

Pipeline Monitoring & Reliability:
- Implement monitoring, logging, and alerting mechanisms to ensure pipeline reliability and data quality.
- Troubleshoot pipeline issues and optimize performance.

Architecture & Documentation:
- Contribute to High-Level Design (HLD) and Low-Level Design (LLD) documents for data solutions.
- Collaborate with architects, data scientists, and business teams to translate requirements into technical specifications.

Collaboration & Best Practices:
- Work with cross-functional teams to integrate pipelines into broader data platforms.
- Follow best practices for code quality, version control, CI/CD, and security.

Required Skills & Experience:
- Strong proficiency in PySpark and Python for data processing.
- Hands-on experience with GCP services: BigQuery, Cloud Functions, Pub/Sub.
- Infrastructure-as-Code expertise with Terraform.
- Experience building, deploying, and monitoring large-scale data pipelines.
- Knowledge of data architecture and the ability to prepare HLD and LLD documentation.
- Strong problem-solving skills and the ability to work in agile environments.

Preferred Qualifications:
- Experience with technologies such as Hadoop, Hive, Kafka, Snowflake, Matillion, and AWS.
- Knowledge of CI/CD pipelines (Jenkins, GitLab, GitHub Actions, etc.).
- Familiarity with data governance, lineage, and security frameworks.
- Experience with containerization (Docker, Kubernetes) is a plus.

(ref: hirist.tech)
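As a rough illustration of the pipeline-development work described above, the sketch below shows a minimal PySpark batch job that reads raw CSV files from Cloud Storage, aggregates them, and writes the result to BigQuery. It is not part of the posting itself: all project, bucket, and table names are hypothetical placeholders, and it assumes a Dataproc-style environment where the spark-bigquery connector is on the classpath.

```python
# Illustrative sketch only; names and paths are placeholders, not real resources.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: raw CSV files landed in a GCS bucket (placeholder path).
raw = (
    spark.read
    .option("header", True)
    .csv("gs://example-raw-bucket/orders/2024-01-01/*.csv")
)

# Transform: basic cleansing plus a per-customer daily aggregate.
daily = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount").isNotNull())
       .groupBy(F.to_date("order_ts").alias("order_date"), "customer_id")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("*").alias("order_count"),
       )
)

# Load: write to BigQuery via the spark-bigquery connector (placeholder table).
(
    daily.write
    .format("bigquery")
    .option("table", "example_project.analytics.daily_orders")
    .option("temporaryGcsBucket", "example-temp-bucket")
    .mode("overwrite")
    .save()
)

spark.stop()
```

In a production setting, a job like this would typically be scheduled and parameterized by run date, with the surrounding infrastructure (buckets, datasets, service accounts) provisioned through Terraform and monitored through Cloud Logging and alerting, as the responsibilities above describe.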
Don't Miss This Opportunity!
TSI Triunity is actively hiring for this Triunitysoft - Data Engineer - Python/ETL position
Apply Now