Job Title: Technical Lead - Azure Data Engineer  
Location: Bangalore  
Years of Experience: 10+ years  
Sigmoid works with a variety of clients, from start-ups to Fortune 500 companies.
We are looking for a detail-oriented self-starter to assist our engineering and analytics teams in various roles as a Software Development Engineer.
As a Technical Lead - Data Engineer, you will be responsible for building a highly scalable and extensible big data platform that provides the foundation for collecting, storing, modeling, and analyzing massive data sets from multiple channels.
We are looking for a skilled Data Engineer with 10+ years of experience in big data technologies, particularly Azure, Python, PySpark, SQL, and data lakehouse architectures.
The ideal candidate will have a strong background in building scalable data pipelines and experience with modern data storage formats, including Apache Iceberg.
You will work closely with cross-functional teams to design and implement efficient data solutions in a cloud-based environment.
Key Responsibilities:  
- Data Pipeline Development:
  - Design, build, and optimize scalable data pipelines using Apache Spark.
  - Implement and manage large-scale data processing solutions across data lakehouses.
- Data Lakehouse Management:
  - Work with modern data lakehouse platforms (e.g., Apache Iceberg) to handle large datasets.
  - Optimize data storage, partitioning, and versioning to ensure efficient access and querying.
- SQL & Data Management:
  - Write complex SQL queries to extract, manipulate, and transform data.
  - Develop performance-optimized queries for analytical and reporting purposes.
- Data Integration:
  - Integrate various structured and unstructured data sources into the lakehouse environment.
  - Work with stakeholders to define data needs and ensure data is available for downstream consumption.
- Data Governance and Quality:
  - Implement data quality checks and ensure the reliability and accuracy of data.
  - Contribute to metadata management and data cataloging efforts.
- Performance Tuning:
  - Monitor and optimize the performance of Spark jobs, SQL queries, and overall data infrastructure.
  - Work with cloud infrastructure teams to optimize costs and scale as needed.
 
Qualifications:  
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- 9+ years of experience in data engineering, with a focus on Java/Python, Spark, and SQL.
- Hands-on experience with Apache Iceberg, Snowflake, or similar technologies.
- Strong understanding of data lakehouse architectures and data warehousing principles.
- Proficiency in Azure data services.
 
 
- Experience with version control systems like Git and CI/CD pipelines.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
 
Nice to Have:  
- Experience with containerization (Docker, Kubernetes) and orchestration tools like Airflow.
- Certifications in Azure cloud technologies.