Location: Chennai, Pune, Noida, Kochi, Hyderabad, Trivandrum.

Job Description:
- Proficient in SQL, Spark, Scala, and AWS, with a strong command of these technologies.
- Minimum 6-7 years of relevant experience in Spark and SQL, plus 2-3 years of hands-on practice in AWS.
- Demonstrated track record of performance optimization, with the ability to streamline processes and deliver high-quality outcomes.
- Adept at leveraging these tools to enhance efficiency and productivity within projects.

Key Responsibilities:
- Design, implement, and maintain large-scale data processing systems using Spark and Scala.
- Develop and optimize SQL queries to extract and analyze data from various sources.
- Deploy and manage data pipelines in AWS, ensuring reliability, scalability, and performance.
- Collaborate with cross-functional teams to identify opportunities for data-driven improvements.
- Monitor system performance, troubleshoot issues, and implement solutions to enhance efficiency.
- Ensure data quality and integrity through rigorous testing and validation procedures.
- Stay updated with the latest trends and advancements in big data technologies to drive innovation.

(ref:hirist.tech)