India Jobs Expertini

Urgent! Lead Data Engineer Job Opening In Gurugram – Now Hiring Confidential

Lead Data Engineer



Job description

  • Design and Develop Data Pipelines: Hands-on development and optimisation of scalable, reusable data pipelines in Azure Microsoft Fabric (Synapse Data Engineering), leveraging both batch and real-time processing techniques.

    Ensure smooth integration with Azure Data Factory for orchestration and workflow management.
  • Cloud Data Architecture: Collaborate with the Data Architecture team to design and implement robust data architectures in the Azure environment, ensuring they align with business needs while optimising performance, scalability, and cost-efficiency.
  • Pipeline Optimisation: Continuously monitor and optimise the performance, cost, and reliability of data pipelines, ensuring efficient processing, storage, and management of large datasets.
  • Cross-functional Collaboration: Work closely with data engineering teams, analysts, and business stakeholders to understand data requirements, developing solutions that enable self-service analytics and support decision-making.
  • Documentation & Knowledge Sharing: Contribute to internal documentation, fostering a culture of knowledge-sharing.

    Provide mentorship and guidance to junior engineers, helping to elevate team skills and improve overall team performance.
  • Microsoft Fabric Experience: Apply your knowledge of the Azure data engineering stack (or your willingness to learn Fabric-based development) to manage end-to-end data orchestration, governance, and security across cloud and on-premises systems, ensuring seamless data movement and integration across hybrid environments.
  • Data Modelling Expertise: Leverage your deep expertise in Azure to design and implement data models, create processing pipelines, and integrate with other Azure services such as Data Lake and Synapse to support data storage and analytics needs.
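As a hedged illustration of the batch side of the pipeline work described above, here is a minimal incremental-load step in plain Python (a stand-in for what would normally be a PySpark job in a Fabric/Synapse notebook; the record shape, the `updated_at` watermark column, and the transformation are all illustrative assumptions, not details from this posting):

```python
from datetime import datetime, timezone

def incremental_load(records, last_watermark):
    """Keep only records newer than the watermark, then transform them.

    Illustrative sketch: in a real Fabric/Synapse pipeline this filtering
    and transformation would run as a distributed PySpark job.
    """
    fresh = [r for r in records if r["updated_at"] > last_watermark]
    # Stand-in business logic: normalise the customer name.
    return [{**r, "customer": r["customer"].upper()} for r in fresh]

records = [
    {"customer": "acme", "updated_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
    {"customer": "globex", "updated_at": datetime(2024, 1, 5, tzinfo=timezone.utc)},
]
watermark = datetime(2024, 1, 3, tzinfo=timezone.utc)
out = incremental_load(records, watermark)
print(out)  # only the record updated after the watermark survives
```

The watermark pattern keeps reruns cheap and idempotent, which is the usual goal of the "optimise performance, cost, and reliability" responsibility listed above.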

Required Skills and Qualifications:

  • Experience with Azure Ecosystem (Preferably Synapse): 5+ years of hands-on experience with the Azure ecosystem, including Synapse, Spark, OneLake, and other Fabric tools.

    Expertise in optimising Fabric notebooks and efficiently managing large-scale data workloads.
  • Proficiency in Azure Data Factory: Strong experience with designing and orchestrating complex data pipelines using Azure Data Factory, with an emphasis on seamless data flow integration across various Azure services.
  • Familiarity with Microsoft Fabric: A working knowledge of, or eagerness to learn, Microsoft Fabric, focusing on cross-platform data orchestration, governance, and security.
  • Advanced Data Engineering Skills: Extensive experience in data engineering, including the design and implementation of ETL processes and working with large datasets.

    Proven expertise in data quality, monitoring, and testing practices.
  • Cloud Architecture Design Expertise: Experience designing and implementing data architectures in the Azure ecosystem, including tools such as Data Lake, Synapse, and Azure Storage.
  • SQL and Data Modelling Expertise: Strong skills in SQL and data modelling, with the ability to design optimised data structures, tables, and views.

    Knowledge of both transactional and analytical data modelling.
  • Collaboration and Communication Skills: Strong ability to work cross-functionally with teams from various domains.

    Ability to communicate complex technical concepts to both technical and non-technical stakeholders.
  • Cost Optimisation: Proven experience optimising data engineering processes and Azure resources for both performance and cost, particularly in large-scale cloud environments.
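The SQL and data modelling requirement above (optimised tables and views, transactional vs. analytical shapes) can be sketched with Python's built-in sqlite3 module; the table, view, and column names here are invented purely for illustration:

```python
import sqlite3

# A tiny sketch: a transactional orders table plus an analytical view
# that pre-aggregates revenue per customer for reporting queries.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        amount   REAL NOT NULL
    );
    -- Analytical view: a read-optimised shape over the transactional table.
    CREATE VIEW revenue_by_customer AS
        SELECT customer, SUM(amount) AS revenue
        FROM orders
        GROUP BY customer;
""")
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("acme", 100.0), ("acme", 50.0), ("globex", 75.0)],
)
rows = conn.execute(
    "SELECT customer, revenue FROM revenue_by_customer ORDER BY customer"
).fetchall()
print(rows)  # [('acme', 150.0), ('globex', 75.0)]
```

The same separation of write-optimised tables and read-optimised views carries over to the Azure services named in the posting, at a much larger scale.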

Preferred Skills:

  • Data Lakehouse Experience: Familiarity with Data Lakehouse architectures, particularly with tools such as Delta Lake, OneLake, etc.
  • Azure Ecosystem Familiarity: Knowledge of Azure's full ecosystem for end-to-end data integration and ETL processes.
  • Proficiency in PySpark and Python: Expertise in PySpark for data processing tasks, with a solid foundation in Python.
  • Fabric Integration: Familiarity with Fabric and how it integrates with other services within the Azure ecosystem.
  • Databricks Experience: Experience with Databricks is a plus.
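The PySpark proficiency listed above typically means aggregations of the form `df.groupBy("category").agg(F.sum("value"))`; a hedged plain-Python equivalent (so it runs without a Spark cluster; the data and field names are invented for illustration) looks like:

```python
from collections import defaultdict

def group_sum(rows, key, value):
    """Plain-Python analogue of a PySpark groupBy + sum aggregation."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)

rows = [
    {"category": "books", "value": 12.0},
    {"category": "books", "value": 8.0},
    {"category": "games", "value": 30.0},
]
totals = group_sum(rows, "category", "value")
print(totals)  # {'books': 20.0, 'games': 30.0}
```

In PySpark the same logic is expressed declaratively and executed in parallel across partitions, which is what makes it suitable for the large-scale workloads this role describes.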


Skills Required
PySpark, Data Engineering, Databricks, Python, Azure


Required Skill Profession

Computer Occupations


