
Cognite Senior Data Platform Engineer (Bengaluru)



Job description

Function: Software Engineering (Backend Development)

Skills: Spark, Spark Streaming

Cognite is revolutionising industrial data management through our flagship product, Cognite Data Fusion, a state-of-the-art SaaS platform that transforms how industrial companies leverage their data.

We're seeking a Senior Data Platform Engineer who excels at building high-performance distributed systems and thrives in a fast-paced startup environment.

You'll be working on cutting-edge data infrastructure challenges that directly impact how Fortune 500 industrial companies manage their most critical operational data.

Responsibilities:

High-Performance Data Systems:
- Design and implement robust data processing pipelines using Apache Spark, Flink, and Kafka for terabyte-scale industrial datasets.
- Build efficient APIs and services that serve thousands of concurrent users with sub-second response times.
- Optimise data storage and retrieval patterns for time-series, sensor, and operational data.
- Implement advanced caching strategies using Redis and in-memory data structures.

Distributed Processing Excellence:
- Engineer Spark applications with a deep understanding of the Catalyst optimiser, partitioning strategies, and performance tuning (sketched below).
- Develop real-time streaming solutions processing millions of events per second with Kafka and Flink (sketched below).
- Design efficient data lake architectures on S3/GCS with optimised partitioning and file formats (Parquet, ORC).
- Implement query optimisation techniques for OLAP datastores such as ClickHouse, Pinot, or Druid.

Scalability and Performance:
- Scale systems to 10K+ QPS while maintaining high availability and data consistency.
- Optimise JVM performance through garbage collection tuning and memory management.
- Implement comprehensive monitoring using Prometheus, Grafana, and distributed tracing.
- Design fault-tolerant architectures with proper circuit breakers and retry mechanisms (sketched below).

Technical Innovation:
- Contribute to open-source projects in the big data ecosystem (Spark, Kafka, Airflow).
- Research and prototype new technologies for industrial data challenges.
- Collaborate with product teams to translate complex requirements into scalable technical solutions.
- Participate in architectural reviews and technical design discussions.

Requirements:

Distributed Systems Experience (4-6 years):
- Production Spark experience: built and optimised large-scale Spark applications with an understanding of their internals.
- Streaming systems proficiency: implemented real-time data processing using Kafka, Flink, or Spark Streaming.
- JVM language expertise: strong programming skills in Java, Scala, or Kotlin, with performance-optimisation experience.

Data Platform Foundations (3+ years):
- Big data storage systems: hands-on experience with data lakes, columnar formats, and table formats (Iceberg, Delta Lake).
- OLAP query engines: worked with Presto/Trino, ClickHouse, Pinot, or similar high-performance analytical databases.
- ETL/ELT pipeline development: built robust data transformation pipelines using tools like dbt, Airflow, or custom frameworks.

Infrastructure and Operations:
- Kubernetes production experience: deployed and operated containerised applications in production environments.
- Cloud platform proficiency: hands-on experience with AWS, Azure, or GCP data services.
- Monitoring and observability: implemented comprehensive logging, metrics, and alerting for data systems.

Technical Depth Indicators:

Performance Engineering:
- System optimisation experience: delivered measurable performance improvements (2x+ throughput gains).
- Resource efficiency: optimised systems for cost while maintaining performance requirements.
- Concurrency expertise: designed thread-safe, high-concurrency data processing systems.

Data Engineering Best Practices:
- Data quality frameworks: implemented validation, testing, and monitoring for data pipelines (sketched below).
- Schema evolution: managed backwards-compatible schema changes in production systems (sketched below).
- Data modelling expertise: designed efficient schemas for analytical workloads.

(ref: hirist.tech)
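For illustration, a minimal Scala sketch of the partition-and-tune work the Spark bullets describe. The dataset, column names, and output path are hypothetical; a real job would read terabyte-scale Parquet from object storage rather than an inline Seq.

```scala
// Assumed dependency: "org.apache.spark" %% "spark-sql" % "3.5.x"
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object PartitionTuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("partition-tuning-sketch")
      .master("local[*]") // local only; a real job sets the master via spark-submit
      .getOrCreate()
    import spark.implicits._

    // Hypothetical sensor readings; a real pipeline would read Parquet from S3/GCS.
    val readings = Seq(
      ("pump-1", "2024-01-01", 42.1),
      ("pump-2", "2024-01-01", 17.3),
      ("pump-1", "2024-01-02", 40.8)
    ).toDF("sensor_id", "day", "value")

    // Pre-partition on the grouping key so the aggregation can reuse the
    // exchange instead of shuffling a second time.
    val perSensor = readings
      .repartition(col("sensor_id"))
      .groupBy($"sensor_id")
      .agg(avg($"value").as("avg_value"))

    // explain(true) prints Catalyst's parsed, analysed, optimised, and physical
    // plans, which is how the effect of a tuning change is actually verified.
    perSensor.explain(true)

    // Day-partitioned Parquet keeps time-range scans over sensor data cheap.
    readings.write.mode("overwrite").partitionBy("day").parquet("/tmp/readings_by_day")

    spark.stop()
  }
}
```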
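Similarly, a minimal Structured Streaming sketch for the Kafka consumption side, assuming the spark-sql-kafka-0-10 connector is on the classpath; the broker address and topic name are placeholders.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-stream-sketch")
      .master("local[*]")
      .getOrCreate()

    // Kafka source; requires the spark-sql-kafka-0-10 connector.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
      .option("subscribe", "sensor-events")                // hypothetical topic
      .load()

    // Kafka rows carry key/value as binary; cast the payload and count
    // events per one-minute window, bounding state with a watermark.
    val counts = events
      .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
      .withWatermark("timestamp", "1 minute")
      .groupBy(window(col("timestamp"), "1 minute"))
      .count()

    val query = counts.writeStream
      .outputMode("update") // emit updated window counts each trigger
      .format("console")
      .option("checkpointLocation", "/tmp/kafka-sketch-checkpoint")
      .start()

    query.awaitTermination()
  }
}
```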
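The fault-tolerance bullet calls for retries and circuit breakers; below is a minimal retry-with-exponential-backoff helper in plain Scala. A production service would more likely use a library such as resilience4j, and a full circuit breaker would additionally track failure rates to stop calling a failing dependency altogether.

```scala
import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

object RetrySketch {
  // Retry `op` up to `attempts` times, doubling the delay after each failure.
  @tailrec
  def retry[T](attempts: Int, delayMs: Long)(op: () => T): Try[T] =
    Try(op()) match {
      case s @ Success(_) => s
      case Failure(_) if attempts > 1 =>
        Thread.sleep(delayMs)
        retry(attempts - 1, delayMs * 2)(op) // exponential backoff
      case f @ Failure(_) => f
    }

  def main(args: Array[String]): Unit = {
    var calls = 0
    // Simulate a flaky dependency that succeeds on the third call.
    val result = retry(attempts = 4, delayMs = 100) { () =>
      calls += 1
      if (calls < 3) throw new RuntimeException(s"transient failure #$calls")
      "ok"
    }
    println(s"result=$result after $calls calls")
  }
}
```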
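For the data-quality bullet, a toy validation gate over a Spark DataFrame; dedicated frameworks such as Deequ or Great Expectations cover this ground far more thoroughly. Column names and thresholds are hypothetical.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object QualityGateSketch {
  // Fail fast when required columns contain nulls or the batch is implausibly small.
  def validate(df: DataFrame, requiredCols: Seq[String], minRows: Long): Unit = {
    val total = df.count()
    require(total >= minRows, s"expected at least $minRows rows, got $total")
    requiredCols.foreach { c =>
      val nulls = df.filter(col(c).isNull).count()
      require(nulls == 0, s"column '$c' has $nulls null values")
    }
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("quality-gate-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val batch = Seq(("pump-1", 42.1), ("pump-2", 17.3)).toDF("sensor_id", "value")
    validate(batch, requiredCols = Seq("sensor_id", "value"), minRows = 1)
    println("batch passed validation")

    spark.stop()
  }
}
```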
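Finally, a sketch of backwards-compatible schema evolution using Parquet schema merging: a later batch adds a nullable column, and mergeSchema reconciles old and new files, surfacing missing values as null. Paths and columns are placeholders; table formats such as Iceberg or Delta Lake handle the same problem with richer guarantees.

```scala
import org.apache.spark.sql.SparkSession

object SchemaEvolutionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("schema-evolution-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val base = "/tmp/evolution-demo"

    // v1 writers only know sensor_id and value.
    Seq(("pump-1", 42.1)).toDF("sensor_id", "value")
      .write.mode("overwrite").parquet(s"$base/batch=1")

    // v2 adds a nullable 'unit' column; old readers can simply ignore it.
    Seq(("pump-2", 17.3, "bar")).toDF("sensor_id", "value", "unit")
      .write.mode("overwrite").parquet(s"$base/batch=2")

    // mergeSchema reconciles both file schemas across the batch= partitions;
    // rows from v1 files get null for the new column.
    val merged = spark.read.option("mergeSchema", "true").parquet(base)
    merged.printSchema()
    merged.show()

    spark.stop()
  }
}
```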


Required Skill Profession

Computer Occupations


