
Data Engineer - Python/SQL/Spark
SDOD TECHNOLOGIES PRIVATE LIMITED – India



Job description

Requirements:

- Strong proficiency in writing complex, optimized SQL queries (especially for Amazon Redshift).
- Experience with Apache Spark (preferably on AWS EMR) for big data processing.
- Proven experience using AWS Glue for ETL pipelines (working with RDS, S3, etc.).
- Strong understanding of data ingestion techniques from diverse sources (files, APIs, relational DBs).
- Solid hands-on experience with Amazon Redshift: data modeling, optimization, and query tuning.
- Familiarity with AWS QuickSight for building dashboards and visual analytics.
- Proficient in Python or PySpark for scripting and data transformation.
- Understanding of data pipeline orchestration, version control, and basic DevOps.

Good-to-have Skills:

- Knowledge of other AWS services (Lambda, Step Functions, Athena, CloudWatch).
- Experience with workflow orchestration tools like Apache Airflow.
- Exposure to real-time streaming tools (Kafka, Kinesis, etc.).
- Familiarity with data security, compliance, and governance best practices.
- Experience with infrastructure as code (e.g., Terraform).

Responsibilities:

- Develop, maintain, and optimize complex SQL queries, primarily for Amazon Redshift, ensuring high performance and scalability.
- Build and manage ETL pipelines using AWS Glue, processing data from various sources including RDS, S3, APIs, and relational databases.
- Utilize Apache Spark (preferably on AWS EMR) for large-scale data processing and transformation tasks.
- Design efficient data models and optimize Redshift clusters for storage, query performance, and cost-effectiveness.
- Create and maintain data ingestion workflows from diverse sources such as files, APIs, and databases.
- Develop scripts and data transformations using Python or PySpark.
- Implement and monitor data pipeline orchestration with version control and adhere to DevOps best practices.
- Collaborate with analytics and BI teams, leveraging AWS QuickSight for dashboarding and visualization.
- Ensure data quality, security, and compliance throughout the data lifecycle.

(ref:hirist.tech)
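To make the "data ingestion and transformation" duties concrete, here is a minimal, hedged sketch of the kind of scripting the role describes: parsing raw file records, validating fields, and emitting clean rows ready for loading into a warehouse table. It uses only the Python standard library; the function name `clean_orders` and the sample columns are invented for illustration and are not part of this listing.

```python
# Hypothetical example: ingest raw CSV records, validate and normalize
# each field, and keep only rows fit for loading into a warehouse table.
# All names and sample data here are illustrative, not from the employer.
import csv
import io
from datetime import datetime

RAW_CSV = """order_id,amount,ordered_at
1001, 250.00 ,2024-03-01
1002,,2024-03-02
1003, 99.50 ,not-a-date
"""

def clean_orders(raw_text):
    """Yield validated, typed rows; silently skip malformed records."""
    for row in csv.DictReader(io.StringIO(raw_text)):
        try:
            amount = float(row["amount"].strip())
            ordered_at = datetime.strptime(row["ordered_at"].strip(), "%Y-%m-%d")
        except ValueError:
            # In a production pipeline, bad records would be routed to a
            # dead-letter location for inspection rather than dropped.
            continue
        yield {
            "order_id": int(row["order_id"]),
            "amount": round(amount, 2),
            "ordered_date": ordered_at.date().isoformat(),
        }

rows = list(clean_orders(RAW_CSV))
# Only order 1001 survives: 1002 has no amount, 1003 has an invalid date.
```

In practice the same validate-then-load shape would be expressed as a PySpark job on EMR or an AWS Glue script, with S3 paths in place of the in-memory string.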


Required Skill Profession: Computer Occupations


