About the Role
As part of Workday’s Prism Analytics Data Processing team, you will be responsible for building, enhancing, and extending our large-scale distributed data processing engines.
You will work with a top-notch team to architect and build features in our distributed ingestion, transaction, and query processing engines.
You will develop algorithms and techniques that support fast, efficient transactions and queries over large-scale data in a multi-tenanted cloud environment.
About You
You are an engineer who is passionate about software development and takes pride in your work.
You think and code in terms of well-defined abstractions.
You enjoy devising novel solutions and can clearly articulate their value to stakeholders.
You are excited about working as part of a team of engineers.
You understand the importance of doing what is right for the customer.
You have a strong interest in learning about and developing data management and distributed data processing frameworks and algorithms.
You are a fast learner and enjoy learning new technical areas such as languages and frameworks.
You can make all of this happen using Scala and Java, while gaining exposure to Spark and related Hadoop technologies.
Basic Qualifications
7+ years of experience with any of the following programming languages: Java, Scala, C++, Go, or Rust
3+ years of development experience in relevant domain areas (e.g., database internals, distributed systems applications, JVM performance tuning, cloud/SaaS services)
Other Qualifications
Background in data warehousing, database system internals and/or distributed systems
Expertise in distributed data processing engines or data management systems
A strong understanding of SQL
Knowledge of Apache Spark and Spark SQL internals
Expertise in one or more of: Hadoop YARN, Kubernetes, MapReduce, or Mesos