Role: Senior Data Engineer (Python, Spark/Databricks, SQL, AWS)
Experience: 6–12 years
Location: Hyderabad
Work Mode: Hybrid (3 days/week in-office)
Joining: Immediate
Must-Have Technical Skills:
- Strong programming skills in Python or Scala
- Hands-on experience with Apache Spark/Databricks for big data processing on AWS
- Proficiency with AWS services such as S3, Glue, Redshift, EMR, Lambda
- Strong SQL skills for data transformation and analytics
- Expertise in setting up and managing CI/CD pipelines with Jenkins; working knowledge of Terraform for automated deployments
Responsibilities:
- Design, build, and optimize scalable data pipelines on AWS
- Implement data ingestion, transformation, and integration solutions using Spark/Databricks, Glue, and SQL
- Manage and optimize cloud storage and compute environments
- Ensure robust, automated deployments with Terraform and Jenkins
- Collaborate with cross-functional teams to deliver high-quality data products
Nice to Have:
- Prior experience in the Healthcare / Life Sciences domain
- Familiarity with modern data lake and data mesh architectures
Why Join Us?
- Work on cutting-edge data engineering projects in healthcare analytics
- Hybrid work model for flexibility and collaboration
- Opportunity to grow in a fast-paced, innovation-driven environment
Apply Now!
Send your updated resume to