
Urgent! Data Pipeline Architect Job Opening In Bengaluru – Now Hiring ThoughtFocus

Data Pipeline Architect



Job description

Looking for immediate joiners only

Shift time: 1:00 PM to 10:00 PM

Location: Bangalore

Skills: Python, PySpark, Databricks

Exp: 5+ Years


Job Summary:

The Data Engineer is responsible for implementing and managing the operational aspects of cloud-native and hybrid data platform solutions built with Azure Databricks.

They ensure the efficient and effective functioning of the Azure Databricks environment, including monitoring and troubleshooting data pipelines, managing data storage and access, and optimizing performance.

They work closely with data engineers, data scientists, and other stakeholders to understand data requirements, design solutions, and implement data integration and transformation processes.


Key Responsibilities:

  • Provide expertise and ownership of Azure Databricks development tasks within the scrum team.
  • Interact effectively with clients and leadership, adapting communication to the audience.
  • Read and comprehend software requirements, assisting with the development of agile user stories and tasks.
  • Assist with troubleshooting configuration and performance issues.
  • Assist with Azure Databricks deployments, testing, configuration, and installation.
  • Ensure security is a priority and understand the various areas where security vulnerabilities arise with database technologies.
  • Ensure database resiliency and disaster recovery capabilities.


Required Skills & Qualifications:

  • 4+ years of proven experience with Azure Databricks analytics capabilities and other relational database technologies supported in Azure.
  • 5+ years of proven experience with Azure Data Lake Storage Gen 2, Azure Databricks, Azure Data Explorer, Azure Event Hubs, Spark Pools, Python, PySpark, SQL, Azure Landing Zone, Azure Networking Services, and Microsoft Entra ID.
  • 5+ years of proven experience with Azure geo-redundancy and HA/failover technologies.
  • 5+ years of proven experience designing and implementing data pipelines using Azure Databricks for data cleaning, transformation, and loading into a Data Lakehouse.
  • 5+ years of proven experience with Infrastructure as Code (IaC) tools such as Terraform.
  • 5+ years of proven experience with programming languages such as Python and PySpark, and data formats such as JSON or XML.
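The cleaning, transformation, and loading pattern these requirements describe can be sketched as below. This is an illustrative plain-Python example (all record and field names are hypothetical, not from the posting); on Azure Databricks the same steps would typically be expressed as PySpark DataFrame operations writing to a Delta Lakehouse table.

```python
import json

# Hypothetical raw JSON-lines input; in Databricks this would usually be
# read from Azure Data Lake Storage into a PySpark DataFrame.
RAW_RECORDS = [
    '{"id": 1, "amount": "42.50", "region": "South"}',
    '{"id": 2, "amount": null, "region": "West"}',   # incomplete -> dropped
    'not-valid-json',                                # malformed -> dropped
    '{"id": 3, "amount": "7.25", "region": "East"}',
]

def clean(raw_lines):
    """Parse JSON lines, dropping malformed or incomplete records."""
    rows = []
    for line in raw_lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed input rather than failing the run
        if rec.get("amount") is None:
            continue  # skip records missing a required field
        rows.append(rec)
    return rows

def transform(rows):
    """Normalize types and casing so downstream queries are consistent."""
    return [
        {"id": r["id"], "amount": float(r["amount"]), "region": r["region"].lower()}
        for r in rows
    ]

def load(rows, table):
    """Stand-in for a Lakehouse write (in PySpark, roughly
    df.write.format("delta").mode("append").saveAsTable(...))."""
    table.extend(rows)
    return len(rows)

lakehouse_table = []
loaded = load(transform(clean(RAW_RECORDS)), lakehouse_table)
print(loaded)  # 2 of the 4 raw records survive cleaning
```

The design point the role calls for is the same regardless of scale: isolate cleaning, transformation, and loading into separate stages so each can be monitored and retried independently.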


Required Skill Profession: Computer Occupations


