
2892 – Databricks Engineer

EXL | Pune



Job Description

About the Role

We are seeking a highly skilled Databricks Engineer with 5–7 years of experience in data engineering and analytics.

The ideal candidate will have strong expertise in designing, developing, and optimizing large-scale data pipelines and solutions using Databricks, Spark, and cloud platforms.

This role requires a combination of strong technical skills, problem-solving abilities, and hands-on experience with modern data architectures.


Key Responsibilities

  • Design, develop, and maintain data pipelines and ETL/ELT processes using Databricks and Apache Spark (a minimal sketch follows this list).

  • Build and optimize Delta Lake-based data architectures for scalable and reliable analytics.

  • Implement data ingestion, transformation, and processing frameworks from structured and unstructured sources.

  • Collaborate with data scientists, analysts, and business stakeholders to deliver data-driven solutions.

  • Ensure data quality, governance, and security across all platforms and solutions.

  • Optimize performance, cost-efficiency, and scalability of Databricks workloads.

  • Work with cloud platforms for data storage, compute, and orchestration.

  • Implement CI/CD pipelines, testing frameworks, and MLOps practices for Databricks projects.

  • Troubleshoot issues and provide production support for Databricks environments.
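
For illustration only, here is a minimal sketch of the kind of pipeline work described above. It assumes a Databricks runtime where a `spark` session is already defined and Delta Lake is available; the paths and column names (event_id, event_ts) are hypothetical placeholders, not taken from this posting.

    # Minimal PySpark ETL sketch: ingest raw JSON, deduplicate, and append to a
    # partitioned Delta table. `spark` is the ambient Databricks session; all
    # paths and column names below are hypothetical placeholders.
    from pyspark.sql import functions as F

    raw = spark.read.format("json").load("/mnt/raw/events/")  # hypothetical landing zone

    cleaned = (
        raw.dropDuplicates(["event_id"])                      # hypothetical business key
           .withColumn("event_date", F.to_date("event_ts"))   # derive partition column
           .filter(F.col("event_date").isNotNull())
    )

    (cleaned.write.format("delta")
            .mode("append")
            .partitionBy("event_date")
            .save("/mnt/curated/events/"))                    # hypothetical Delta path

In practice, logic like this would typically run as a scheduled Databricks job or orchestrated workflow rather than an ad hoc notebook cell.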


Required Skills & Experience

  • 5–7 years of experience in data engineering, including at least 2–3 years of hands-on Databricks experience.

  • Strong expertise in Apache Spark (PySpark/Scala/SQL) and distributed data processing.

  • Solid experience with Delta Lake, Lakehouse architecture, and data modeling (see the upsert sketch after this list).

  • Hands-on experience with at least one cloud platform: Azure Data Lake, AWS S3, or GCP BigQuery/Storage.

  • Strong proficiency in SQL for data manipulation and performance tuning.

  • Experience with ETL frameworks and workflow orchestration tools (Airflow, ADF, Databricks Workflows).

  • Good understanding of CI/CD, Git-based workflows, and DevOps practices.

  • Exposure to MLOps and MLflow is a strong plus.

  • Knowledge of data governance, cataloging, and security frameworks.
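
As a hedged illustration of the Delta Lake/Lakehouse skills listed above, here is a sketch of an idempotent upsert (MERGE) into a Delta table. It assumes Databricks (or the open-source delta-spark package) so that DeltaTable is importable and a `spark` session exists; the table paths and the customer_id key are hypothetical.

    # Sketch of an idempotent upsert (MERGE) into a Delta table, a common
    # Lakehouse pattern. Paths and the join key are hypothetical placeholders.
    from delta.tables import DeltaTable

    target = DeltaTable.forPath(spark, "/mnt/curated/customers/")         # hypothetical target
    updates = spark.read.format("delta").load("/mnt/staging/customers/")  # hypothetical staging

    (target.alias("t")
           .merge(updates.alias("u"), "t.customer_id = u.customer_id")    # hypothetical key
           .whenMatchedUpdateAll()      # overwrite rows that changed
           .whenNotMatchedInsertAll()   # insert rows that are new
           .execute())

Because MERGE is transactional in Delta Lake, a pattern like this can be re-run safely, which is what makes it a staple of reliable ingestion pipelines.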


Profession Category

Computer Occupations


