Position: Big Data PySpark Developer
Location: Bangalore
Experience: 5 to 9 Years
Joining: Immediate joiners preferred
Mandatory Skills:
Strong experience in Big Data, PySpark, Python, and Hive
Expertise in Spark optimization and performance tuning
Good to Have:
Exposure to GCP or other cloud platforms (AWS, Azure)
Role Responsibilities:
Build and optimize large-scale data processing pipelines using PySpark
Tune Spark jobs for improved efficiency and reduced cost
Collaborate with cross-functional teams to develop and deploy scalable solutions
Ensure best practices in coding, version control, and Agile delivery