
Spark/Scala Developer – Data Analytics
UNIFY TECHNOLOGIES PVT LTD – Hyderabad



Job description

We are looking for a candidate with strong experience in Spark with Scala, along with Machine Learning and AI. The candidate should have hands-on programming experience; a good understanding of data structures, algorithms, data transformation, data ingestion, and optimization techniques; and a good understanding of big data technologies (Hadoop, MapReduce, Kafka, Cassandra).

We deal with huge amounts of data at massive scale, so we are looking for engineers who love solving challenging problems through independent research and collaboration with teams across our product lines to improve the overall product experience.

Key Responsibilities:

- Develop, optimize, and maintain large-scale distributed data processing systems using Apache Spark with Scala.
- Design and implement complex data transformation and ingestion pipelines for structured and unstructured data.
- Collaborate with data scientists and AI/ML engineers to integrate machine learning models into data workflows.
- Optimize Spark jobs for performance, scalability, and reliability.
- Work with large-scale data platforms involving Hadoop, Kafka, Cassandra, MapReduce, and related big data technologies.

Required Skills and Qualifications:

- 5 to 10 years of hands-on experience in data engineering with a focus on Spark and Scala.
- Strong knowledge of data structures, algorithms, and software engineering principles.
- Proficiency in building data ingestion and ETL pipelines from diverse data sources.
- Experience with machine learning frameworks and integrating ML/AI models into data pipelines.
- Solid understanding of big data ecosystems including Hadoop, Kafka, Cassandra, and MapReduce.
- Proven experience with performance tuning, optimization techniques, and distributed systems.
- Familiarity with CI/CD pipelines, containerization (Docker), and orchestration tools (Kubernetes) is a plus.

(ref:hirist.tech)
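To illustrate the kind of transformation work the responsibilities above describe: Spark's RDD and Dataset APIs deliberately mirror the Scala collections API (map, filter, groupBy-style aggregation), so a candidate comfortable with one transfers readily to the other. The sketch below uses plain Scala collections as a stand-in for a Spark stage; the `Event` case class, field names, and threshold are illustrative assumptions, not part of the job posting. In actual Spark, the same shape would typically be expressed with `reduceByKey` plus `filter` on an RDD, or `groupBy`/`agg` on a DataFrame.

```scala
object TransformSketch {
  // Hypothetical record type for illustration only.
  case class Event(userId: String, bytes: Long)

  // Total bytes per user, keeping only users at or above a threshold.
  // In Spark this is the shape of a reduceByKey + filter stage:
  //   events.map(e => (e.userId, e.bytes)).reduceByKey(_ + _)
  //         .filter { case (_, total) => total >= minBytes }
  def bytesPerUser(events: Seq[Event], minBytes: Long): Map[String, Long] =
    events
      .groupBy(_.userId)                                   // shuffle-like grouping
      .map { case (user, evs) => user -> evs.map(_.bytes).sum } // per-key aggregation
      .filter { case (_, total) => total >= minBytes }     // post-aggregation filter
}
```

One design note relevant to the "optimize Spark jobs" responsibility: in real Spark, `reduceByKey` is preferred over `groupByKey`-then-sum because it combines values map-side before the shuffle, reducing network traffic; that distinction does not exist for local collections but matters at cluster scale.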


Required Skill Profession

Computer Occupations


