
Staff Data Platform Engineer (DevOps) – Zendesk, Pune, India

Staff Data Platform Engineer (DevOps)



Job Description

Staff Data Platform Engineer (DevOps)

Our Enterprise Data & Analytics (EDA) team is looking for an experienced Staff Data Platform Engineer to join our growing Platform Engineering team.

You’ll work in a collaborative Agile environment using the latest engineering best practices with involvement in all aspects of the software development lifecycle.

As a Staff Data Platform Engineer, you will shape the strategy, architecture, and execution of Zendesk’s data platform that powers next‑generation reporting and analytics across the product portfolio.

You’ll lead complex, multi-team initiatives; set technical standards for scalable, reliable data systems; and mentor senior engineers while partnering closely with Product, Security, and Analytics to deliver high-quality, compliant, and cost-efficient data products at scale.

Data is at the heart of Zendesk’s business—this is a high-impact, high-ownership role with broad technical and organizational influence.

What you get to do every single day

  • Lead architecture and roadmap

    • Define and evolve the end-to-end data platform architecture across ingestion, transformation, storage and governance.

    • Establish standardized data contracts, schemas, documentation, and tooling that improve consistency and reduce time-to-data for analytics and product teams.

    • Lead build-vs-buy evaluations and pilot new technologies to improve reliability, speed, or cost.
       

  • Deliver platform capabilities at scale

    • Design and deliver secure, highly available data services and pipelines that handle large-scale, mission-critical workloads.

    • Establish SLOs/SLIs for data pipelines, lineage, and serving layers; implement robust observability (metrics, tracing, alerting) and incident response.

    • Partner with Analytics, Product, and Security to translate business needs into robust, scalable data solutions and SLAs.

    • Tune query and pipeline performance and enforce FinOps guardrails to reduce Snowflake storage and compute spend without compromising reliability.
       

  • Raise the engineering bar

    • Define standards for data modeling, testing (unit/integration/contract), CI/CD, IaC, and code quality; champion reproducibility and reliability.

    • Advance data quality and governance (DQ checks, metadata, lineage, PII handling, RBAC) with “privacy-by-design” practices.

    • Conduct deep root cause analyses; drive systemic fixes that improve resilience and developer experience.
       

What you bring to the role

  • 8+ years of data engineering experience building and maintaining scalable data infrastructure (data pipelines and ETL processes in big data environments).

  • 3+ years leading complex, cross-team initiatives at Senior/Staff level.

  • 3+ years of experience with Cloud columnar databases (Snowflake)

  • Proven experience as a CI/CD Engineer or DevOps Engineer, with a focus on data platforms and analytics (Terraform, Docker, Kubernetes, GitHub Actions).

  • Experience with at least one cloud platform (AWS, Google Cloud).

  • Proficiency in query authoring (SQL) and data processing (batch and streaming)

  • Intermediate experience with at least one programming language (Python, Go, Java, or Scala); we primarily use Python.

  • Experience with ETL schedulers such as Apache Airflow or AWS Glue.

  • Experience integrating with third-party SaaS APIs such as Salesforce and Zuora.

  • Experience ensuring data integrity and accuracy by conducting regular data audits, identifying and resolving data quality issues, and implementing data governance best practices.

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

  • Excellent collaboration and communication skills.

  • Ability to work closely with data scientists, analysts, and business stakeholders to translate business requirements into technical solutions.

  • Strong documentation skills for pipeline design and data flow diagrams.

What our data stack looks like

  • ELT (Fivetran, Snowflake, dbt, Airflow)

  • Infrastructure (GCP, AWS, Kubernetes, Terraform, GitHub Actions)

  • Monitoring and Observability (Datadog, Monte Carlo)

  • BI (Tableau, Looker)

Please note that Zendesk can only hire candidates who are physically located and plan to work from Karnataka or Maharashtra.

Please refer to the location posted on the requisition for where this role is based.

Hybrid: In this role, our hybrid experience is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration - while also giving you flexibility to work remotely for part of the week.

This role must attend our local office for part of the week.

The specific in-office schedule is to be determined by the hiring manager.




