Job Title: Senior Data Engineer
Experience: 3+ years
Location: Gurgaon / Pune / Bangalore
Skills: PySpark, SQL, Databricks, AWS
Role Summary:
We are looking for 3–4 experienced Databricks Developers to support a fast-paced, high-impact data engineering initiative.
Ideal candidates will have hands-on expertise in building scalable data pipelines using Databricks and AWS, along with strong SQL and Python skills.
Required Skill Set:
- 3–4 years of experience in Data Engineering
- Strong hands-on experience with Databricks (Notebooks, Jobs, Workflows)
- Proficiency in PySpark and SQL
- Familiarity with AWS services (S3, Glue, Lambda, etc.)
- Experience with CI/CD tools and version control (e.g., Git)
- Good understanding of Delta Lake and performance tuning
Key Responsibilities:
- Design and develop robust ETL pipelines using Databricks (PySpark or SQL)
- Work with large-scale datasets in cloud environments (preferably AWS)
- Optimize data pipelines for performance and cost efficiency
- Integrate data from multiple structured and unstructured sources
- Collaborate with data architects, analysts, and business stakeholders to understand requirements
- Implement data validation and quality checks
- Maintain proper documentation and version control for data workflows