
Data Engineer (Spark, Scala, Python, Cassandra, Elasticsearch, AWS, Airflow, SQL) at The Nielsen Company, Bengaluru

Job description

At Nielsen, we believe that career growth is a partnership. You ultimately own, fuel, and set the journey. By joining our team of nearly 14,000 associates, you will become part of a community that will help you succeed. We champion you because when you succeed, we do too. Embark on a new initiative, explore a fresh approach, and take license to think big, so we can all continuously improve. We enable your best to power our future.

Responsibilities

  • Work closely with team leads and backend developers to design and develop functional, robust pipelines that support internal and customer needs
  • Write unit, integration, and data quality tests, and develop automation tools for daily tasks
  • Develop high-quality, well-documented, and efficient code
  • Manage and optimize scalable pipelines in the cloud
  • Optimize internal and external applications for performance and scalability
  • Communicate regularly with stakeholders, project managers, quality assurance teams, and other developers regarding progress on the long-term technology roadmap
  • Recommend systems solutions by comparing the advantages and disadvantages of custom development and purchased alternatives
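
The testing responsibilities above can be sketched in plain Python. The following is a minimal, hypothetical example of a pipeline step guarded by a unit test and a data quality rule; the function and field names (`clean_events`, `is_valid_event`, `id`, `value`) are illustrative and not taken from the posting:

```python
def is_valid_event(event: dict) -> bool:
    """Data quality rule: an event needs a non-empty id and a positive value."""
    return bool(event.get("id")) and event.get("value", 0) > 0

def clean_events(events: list[dict]) -> list[dict]:
    """Drop invalid events and normalize ids to lowercase."""
    return [
        {**e, "id": e["id"].lower()}
        for e in events
        if is_valid_event(e)
    ]

# Unit test: valid rows are kept and normalized, invalid rows are dropped.
raw = [
    {"id": "A1", "value": 10},
    {"id": "", "value": 5},      # fails quality rule: empty id
    {"id": "B2", "value": -3},   # fails quality rule: non-positive value
]
assert clean_events(raw) == [{"id": "a1", "value": 10}]
```

In a real pipeline the same rule would typically run both as a unit test in CI and as a data quality check on production batches.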
Key Skills

Domain Expertise
  • 2+ years of experience as a software/data engineer
  • Bachelor’s degree in Computer Science, MIS, or Engineering

Technical Skills
  • Experience in software development using the following languages, tools, and services: Java or Scala; big data (Hadoop, Spark, Spark SQL, Presto, Hive); cloud (preferably AWS); Docker; RDBMS (such as Postgres and/or Oracle); Linux; shell scripting; GitLab; Airflow; Cassandra; and Elasticsearch.

  • Experience in big data processing using Apache Spark with Scala.

  • Experience with orchestration tools such as Apache Airflow.

  • Strong knowledge of Unix/Linux, shell commands and scripting, Python, JSON, and YAML.

  • Agile Scrum experience in application development is required.

  • Strong knowledge of AWS S3 and of PostgreSQL or MySQL.

  • Strong knowledge of AWS compute services: EC2, EMR, and AWS Lambda.

  • Strong knowledge of GitLab/Bitbucket.

  • AWS certification is a plus
  • Big data systems and analysis
  • Experience with data warehouses or data lakes

Mindset and Attributes
  • Strong communication skills, with the ability to explain complex technical concepts and align the organization on decisions
  • Sound problem-solving skills, with the ability to quickly process complex information and present it clearly and simply
  • Uses team collaboration to create innovative solutions efficiently
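
Several of the technical skills listed above (Python, JSON, shell-style automation) often meet in small glue scripts around a pipeline. Below is a minimal, hypothetical sketch that loads a JSON pipeline configuration and fails fast on missing keys; the config shape and key names are assumptions for illustration, not from the posting:

```python
import json

# Hypothetical pipeline config; keys and values are illustrative only.
CONFIG_TEXT = """
{
  "pipeline": "events_daily",
  "schedule": "0 6 * * *",
  "source": {"bucket": "s3://example-bucket/raw"},
  "sink": {"table": "analytics.events"}
}
"""

REQUIRED_KEYS = {"pipeline", "schedule", "source", "sink"}

def load_config(text: str) -> dict:
    """Parse the JSON config and raise early if required keys are missing."""
    cfg = json.loads(text)
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"config missing keys: {sorted(missing)}")
    return cfg

cfg = load_config(CONFIG_TEXT)
assert cfg["pipeline"] == "events_daily"
```

Validating configuration at load time, rather than deep inside a scheduled run, keeps orchestration failures cheap and easy to diagnose.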





