
Sr. Software Engineer

IntraEdge – New Delhi (Remote)



Job description

Job Title: Senior Data Engineer – PySpark | AWS Lambda | AWS Glue | Snowflake

Experience Required: 5 to 9 years

Location: Remote

Joining Timeline: Immediate to 15 days

Job Summary:

We are seeking a highly skilled and experienced Senior Data Engineer with strong expertise in PySpark, AWS Lambda, AWS Glue, and Snowflake.

The ideal candidate will be responsible for building and maintaining scalable data pipelines, ensuring seamless data integration and transformation across platforms, and delivering high-performance solutions in a cloud-native environment.

This is a remote position, open only to candidates who can join immediately or within 15 days.

Key Responsibilities:

  • Design, build, and maintain robust and scalable ETL/ELT pipelines using AWS Glue and AWS Lambda (a minimal Glue job sketch follows this list).

  • Integrate and manage PySpark-based solutions, ensuring secure and compliant data flow.

  • Work with large datasets and implement complex data transformation logic within AWS and Snowflake environments.

  • Optimize Snowflake queries, warehouse performance, and data architecture.

  • Automate data ingestion, cleansing, transformation, and loading processes.

  • Monitor and troubleshoot data pipeline failures and performance issues.

  • Collaborate with cross-functional teams including data analysts, data scientists, and DevOps.

  • Ensure adherence to data governance, quality, and security standards.

  • Provide technical leadership and mentorship to junior data engineers as needed.
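
For illustration only (not part of the original posting): a minimal sketch of the kind of AWS Glue PySpark ETL job these responsibilities describe. It reads a table registered by a Glue Crawler, applies a simple transformation, and writes partitioned Parquet for downstream loading into Snowflake. The database, table, column, and bucket names are hypothetical placeholders.

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    # Standard Glue job bootstrap: resolve arguments, build contexts.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read raw records registered in the Glue Data Catalog (hypothetical names).
    raw = glue_context.create_dynamic_frame.from_catalog(
        database="raw_db", table_name="orders"
    )

    # Example transformation: drop rows missing the key, normalize a timestamp.
    df = raw.toDF().dropna(subset=["order_id"])
    df = df.withColumn("order_ts", df["order_ts"].cast("timestamp"))

    # Write curated, partitioned Parquet that Snowflake (via COPY INTO or
    # Snowpipe) can ingest from S3.
    df.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-curated-bucket/orders/"
    )
    job.commit()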

Required Skills & Qualifications:

  • 5 to 9 years of overall experience in data engineering and cloud-based data solutions.

  • Hands-on expertise in PySpark and its platform integrations.

  • Strong experience with AWS Glue (job creation, crawlers, the Data Catalog, and PySpark/Scala scripts).

  • Solid knowledge of AWS Lambda for serverless compute and data automation (see the Lambda sketch after this list).

  • Proven expertise in Snowflake (data modeling, performance tuning, secure data sharing, etc.).

  • Proficient in SQL and Python, with working knowledge of PySpark or Scala.

  • Experience with CI/CD practices for data pipelines (e.g., using Git, CodePipeline, or similar tools).

  • Excellent problem-solving skills and the ability to work independently in a remote setup.
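
As a concrete example of the Lambda-driven automation listed above, here is a hedged sketch (not from the posting) of a handler that starts a Glue job whenever a new object lands in S3. The Glue job name and the --input_path argument key are hypothetical.

    import json
    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        """Triggered by an S3 put-event notification; starts a Glue job run."""
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # Kick off the (hypothetical) curation job with the new object as input.
            response = glue.start_job_run(
                JobName="orders-curation-job",
                Arguments={"--input_path": f"s3://{bucket}/{key}"},
            )
            print(json.dumps({"job_run_id": response["JobRunId"], "key": key}))
        return {"statusCode": 200}

In practice, both the handler and the Glue scripts would live in Git and deploy through a CI/CD pipeline such as AWS CodePipeline, per the bullet above.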

Good to Have:

  • Exposure to AWS services such as S3, Athena, Redshift, CloudWatch, and Step Functions.

  • Familiarity with data security and compliance best practices (e.g., GDPR, HIPAA).

  • Experience with Agile/Scrum methodologies.

Employment Type:

Full-time (Remote)

Notice Period: Immediate joiners preferred; maximum 15 days.

How to Apply: Share your CV/résumé to


Required Skill Profession: Computer Occupations


