
Data Engineer - Python/SQL/ETL | DigitalCube Consultancy, Noida




Job description

Job Title: Data Engineer
Experience Required: 3-6 Years

Responsibilities:

- Design, develop, and maintain scalable data pipelines and ETL processes to collect, process, and store data from various sources.
- Work with Apache Spark to process large datasets in a distributed environment, ensuring optimal performance and scalability.
- Develop and optimize Spark jobs and data transformations using Scala for large-scale data processing.
- Collaborate with data analysts and other stakeholders to ensure data pipelines meet business and technical requirements.
- Integrate data from different sources (databases, APIs, cloud storage, etc.) into a unified data platform.
- Ensure data quality, consistency, and accuracy by building robust data validation and cleansing mechanisms.
- Use cloud platforms (AWS, Azure, or GCP) to deploy and manage data processing and storage solutions.
- Automate data workflows and tasks using appropriate tools and frameworks.
- Monitor and troubleshoot data pipeline performance, optimizing for efficiency and cost-effectiveness.
- Implement data security best practices, ensuring data privacy and compliance with industry standards.
- Stay updated with new data engineering tools and technologies to continuously improve the data infrastructure.

Requirements:

- 4 to 6 years of experience as a Data Engineer or in an equivalent role.
- Strong experience with Apache Spark and Scala for distributed data processing and big-data handling.
- Basic knowledge of Python and its application in Spark for writing efficient data transformations and processing jobs.
- Proficiency in SQL for querying and manipulating large datasets.
- Experience with cloud data platforms, preferably AWS (e.g., S3, EC2, EMR, Redshift) or other cloud-based solutions.
- Strong knowledge of data modeling, ETL processes, and data pipeline orchestration.
- Familiarity with containerization (Docker) and cloud-native tools for deploying data solutions.
- Knowledge of data warehousing concepts and experience with tools like AWS Redshift, Google BigQuery, or Snowflake is a plus.
- Experience with version control systems such as Git.
- Strong problem-solving abilities and a proactive approach to resolving technical challenges.
- Excellent communication skills and the ability to work collaboratively within cross-functional teams.

Preferred Qualifications:

- Experience with additional programming languages like Python, Java, or Scala for data engineering tasks.
- Familiarity with orchestration tools like Apache Airflow, Luigi, or similar frameworks.
- Basic understanding of data governance, security practices, and compliance regulations.

(ref:hirist.tech)
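The validation-and-cleansing responsibility above can be illustrated with a minimal extract-transform-load sketch in plain Python (the scripting language named in the role). The dataset, column names, and cleansing rules here are illustrative assumptions, not part of the posting; a production pipeline would run such transformations in Spark rather than the standard library.

```python
import csv
import io
import sqlite3

# Illustrative raw input: one row has a missing id, one has a non-numeric amount.
RAW = """id,name,amount
1, Alice ,100
,Bob,50
2,Carol,notanumber
3,Dave,75
"""

def extract(text):
    """Read CSV text into a list of dicts (the 'extract' step)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Validate and cleanse: drop bad rows, normalize whitespace, coerce types."""
    clean = []
    for r in rows:
        if not r["id"].strip():
            continue  # validation: reject rows missing a primary key
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # validation: reject rows with a non-numeric amount
        clean.append((int(r["id"]), r["name"].strip(), amount))
    return clean

def load(rows):
    """Load cleansed rows into a (here in-memory) SQL store."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")
    con.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    return con

con = load(transform(extract(RAW)))
print(con.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone())  # → (2, 175.0)
```

The same three-stage shape (extract, validate/transform, load) carries over directly to Spark jobs, where `transform` becomes DataFrame operations and `load` writes to S3, Redshift, or a warehouse table.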


Required Skill Profession

Computer Occupations




