
Big Data Engineer - Scala/Spark/Python Job Opening in Bengaluru at Idyllic Services Pvt Ltd

Big Data Engineer Scala/Spark/Python



Job description

Job Title : Big Data Engineer - Scala

Experience : 7 to 10 years (minimum 3+ years in Scala)

Notice Period : Immediate to 30 days

Role Overview :

We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience with distributed data systems and cloud-based solutions.

Key Responsibilities :

- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.
- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.
- Develop and integrate streaming solutions using Apache Kafka and orchestration tools such as NiFi / Airflow.
- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.
- Collaborate with cross-functional teams to design scalable cloud-based data architectures.
- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.
- Build monitoring and alerting systems leveraging Splunk or equivalent tools.
- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.
- Contribute to product development with a focus on scalability, maintainability, and performance.

Mandatory Skills :

- Scala: minimum 3+ years of hands-on experience.
- Strong expertise in Spark (PySpark) and Python.
- Hands-on experience with Apache Kafka.
- Knowledge of NiFi / Airflow for orchestration.
- Strong experience with distributed data systems (5+ years).
- Proficiency in SQL and query optimization.
- Good understanding of cloud architecture.

Preferred Skills :

- Exposure to messaging technologies such as Apache Kafka or equivalent.
- Experience designing intuitive, responsive UIs for data analytics visualization.
- Familiarity with Splunk or other monitoring/alerting solutions.
- Hands-on experience with CI/CD tools (Git, Jenkins).
- Strong grasp of software engineering concepts, data modeling, and optimization techniques.

(ref:hirist.tech)
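The "filter, transform, aggregate" pipeline work the responsibilities describe can be sketched in miniature. The example below uses only the Python standard library to mirror the shape of a typical PySpark aggregation job; the record fields, function name, and the 1000 ms threshold are illustrative assumptions, not taken from the posting.

```python
# Minimal sketch of the filter/aggregate pattern common in Spark pipelines,
# written with the Python stdlib so it runs without a cluster.
# Record fields and the 1000 ms threshold are illustrative assumptions.
from collections import defaultdict

def dwell_time_per_user(events):
    """Sum dwell time per user, dropping sub-second 'bounce' events.

    The equivalent PySpark DataFrame chain would be roughly:
        df.filter(df.duration_ms >= 1000).groupBy("user_id").sum("duration_ms")
    """
    totals = defaultdict(int)
    for event in events:
        if event["duration_ms"] >= 1000:                      # filter stage
            totals[event["user_id"]] += event["duration_ms"]  # aggregate stage
    return dict(totals)

sample = [
    {"user_id": "u1", "page": "/home", "duration_ms": 5000},
    {"user_id": "u1", "page": "/docs", "duration_ms": 300},   # filtered out
    {"user_id": "u2", "page": "/home", "duration_ms": 2000},
]
print(dwell_time_per_user(sample))  # {'u1': 5000, 'u2': 2000}
```

In a real Spark job the same combinators run distributed over partitions, with the groupBy inducing a shuffle; the per-key accumulation above is the single-machine analogue of that stage.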


Required Skill Profession

Computer Occupations




