
Senior Data Engineer - Medallion Architecture - Mumbai (Confidential)



Job description

Description:

We are building a next-generation Customer Data Platform (CDP) powered by the Databricks Lakehouse architecture and the Lakehouse Engine framework.

We're looking for a skilled Data Engineer with 4-9 years of experience to help us build metadata-driven pipelines, enable real-time data processing, and support marketing campaign orchestration capabilities at scale.

The core responsibilities for the role include the following:

Lakehouse Engine Implementation:

- Configure and extend the Lakehouse Engine framework for batch and streaming pipelines.
- Implement the medallion architecture (Bronze -> Silver -> Gold) using Delta Lake.
- Develop metadata-driven ingestion patterns from various customer data sources.
- Build reusable transformers for PII handling, data standardization, and data quality enforcement.

Real-Time CDP Enablement:

- Build Spark Structured Streaming pipelines for customer behavior and event tracking.
- Set up Debezium + Kafka for Change Data Capture (CDC) from CRM systems.
- Design and develop identity resolution logic across both streaming and batch datasets.

DataOps and Governance:

- Use Unity Catalog for managing RBAC, data lineage, and auditability.
- Integrate Great Expectations or similar tools for continuous data quality monitoring.
- Set up CI/CD pipelines for deploying Databricks notebooks, jobs, and DLT pipelines.

Requirements:

- 4-9 years of hands-on experience in data engineering.
- Expertise in the Databricks Lakehouse platform, Delta Lake, and Unity Catalog.
- Advanced PySpark skills, including Structured Streaming.
- Experience implementing Kafka + Debezium CDC pipelines.
- Strong in SQL transformations, data modeling, and analytical querying.
- Familiarity with metadata-driven architecture and parameterized pipelines.
- Understanding of data governance: PII masking, access controls, and lineage tracking.
- Proficiency in working with AWS, MongoDB, and PostgreSQL.

Nice to Have:

- Experience working on Customer 360 or Martech CDP platforms.
- Familiarity with Martech tools like Segment, Braze, or other CDPs.
- Exposure to ML pipelines for segmentation, scoring, or personalization.
- Knowledge of CI/CD for data workflows using GitHub Actions, Terraform, or the Databricks CLI.

(ref:hirist.tech)
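To illustrate the "metadata-driven ingestion" and "reusable PII transformer" ideas the role calls for, here is a minimal Python sketch. It is not the Lakehouse Engine API: the names (`DatasetSpec`, `DATASETS`, `mask_pii`, `promote_to_silver`) and the S3 paths are illustrative assumptions showing the pattern of driving a Bronze -> Silver promotion from configuration rather than per-source code.

```python
# Illustrative sketch only: a metadata-driven Bronze -> Silver step
# with a reusable PII-masking transformer. All names and paths are
# hypothetical, not part of any specific framework.
from dataclasses import dataclass, field

@dataclass
class DatasetSpec:
    """Metadata describing one source feeding the Bronze layer."""
    name: str
    source_path: str
    pii_columns: list = field(default_factory=list)

# Pipeline behavior is driven entirely by metadata like this, so
# onboarding a new source is a config change, not a code change.
DATASETS = [
    DatasetSpec("crm_contacts", "s3://raw/crm/contacts",
                pii_columns=["email", "phone"]),
    DatasetSpec("web_events", "s3://raw/web/events"),
]

def mask_pii(record: dict, pii_columns: list) -> dict:
    """Reusable transformer: blank out configured PII fields."""
    return {k: ("***" if k in pii_columns else v) for k, v in record.items()}

def promote_to_silver(records: list, spec: DatasetSpec) -> list:
    """Apply PII masking when promoting records from Bronze to Silver."""
    return [mask_pii(r, spec.pii_columns) for r in records]

bronze_rows = [{"email": "a@b.com", "phone": "123", "country": "IN"}]
silver_rows = promote_to_silver(bronze_rows, DATASETS[0])
print(silver_rows)  # PII columns masked, non-PII columns untouched
```

In a real Databricks pipeline the same spec objects would parameterize Delta Lake reads and writes (or Structured Streaming sources); the point of the pattern is that the transformation logic stays generic while the metadata carries the per-dataset details.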


Required Skill Profession

Computer Occupations





