- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4 to 5 years of overall experience, including 3+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Experience with containerization technologies such as Docker and Kubernetes is a plus.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Mandatory Skills (Primary): Databricks, SQL, Python
Skills Required: AWS, Azure, Databricks, SQL, Python, GCP, Scala