Job Summary:  
We are looking for an experienced and motivated GCP Big Data Engineer to join our team in a leadership capacity.
The ideal candidate will have 8–10 years of relevant experience in data engineering, with a strong focus on Google Cloud Platform (GCP), SQL, PySpark, and ETL processes.
The role combines hands-on technical depth with the leadership skills to guide and mentor a team of engineers while ensuring high-quality data solutions.
Key Responsibilities:  
- Design, develop, and maintain scalable and efficient data pipelines on Google Cloud Platform (GCP).
- Work with PySpark to process large-scale datasets and optimize performance (see the pipeline sketch after this list).
- Write complex and efficient SQL queries for data extraction, transformation, and analysis.
- Lead the implementation of ETL workflows and ensure data accuracy, completeness, and integrity.
- Collaborate with cross-functional teams, including data analysts, architects, and product managers, to define data requirements.
- Provide technical leadership, mentorship, and code reviews for junior engineers.
- Drive best practices for data engineering and cloud-based data processing.
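
The responsibilities above come together in day-to-day pipeline work. As a hedged illustration (a sketch of the kind of work involved, not a prescribed design), here is a minimal PySpark batch job that reads raw events from Cloud Storage, rolls them up, and loads the result into BigQuery; the bucket, project, dataset, and table names are hypothetical, and the final step assumes the spark-bigquery connector is available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-events-rollup").getOrCreate()

# Extract: read raw event data from a (hypothetical) Cloud Storage bucket.
events = spark.read.parquet("gs://example-raw-events/date=2024-01-01/")

# Transform: drop malformed records and roll events up per user per day.
daily = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy("user_id", F.to_date("event_ts").alias("event_date"))
    .agg(F.count("*").alias("event_count"))
)

# Load: write to BigQuery; the connector's indirect write method stages
# data through a temporary Cloud Storage bucket.
(
    daily.write.format("bigquery")
    .option("table", "example-project.analytics.daily_user_events")
    .option("temporaryGcsBucket", "example-tmp-bucket")
    .mode("overwrite")
    .save()
)
```
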
Required Skills & Qualifications:  
- 8–10 years of experience in data engineering roles.
- Proven experience with Google Cloud Platform (GCP) and its data services (e.g., BigQuery, Dataflow, Cloud Storage).
- Strong programming skills in PySpark and Python.
- Advanced proficiency in SQL and experience working with large, complex datasets (see the query sketch after this list).
- Deep understanding of ETL frameworks and data pipeline orchestration.
- Experience leading or mentoring teams in a technical capacity.
- Excellent communication and problem-solving skills.
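
To make the SQL expectation concrete, the sketch below runs an analytic window query through the google-cloud-bigquery Python client: a rolling seven-day event count per user. The project, dataset, and table names are hypothetical (they match the pipeline sketch above).

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# A window-function query: each user's rolling 7-day event total,
# computed over the (hypothetical) daily rollup table.
query = """
SELECT
  user_id,
  event_date,
  SUM(event_count) OVER (
    PARTITION BY user_id
    ORDER BY UNIX_DATE(event_date)
    RANGE BETWEEN 6 PRECEDING AND CURRENT ROW
  ) AS rolling_7d_events
FROM `example-project.analytics.daily_user_events`
"""

for row in client.query(query).result():
    print(row.user_id, row.event_date, row.rolling_7d_events)
```
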
Preferred Qualifications:  
- GCP certification (e.g., Professional Data Engineer) is a plus.
- Experience with CI/CD pipelines and data pipeline automation.
- Familiarity with Agile/Scrum methodologies.