
Databricks with Python Developer – Chennai (Employer: Confidential)



Job description

Teamware Solutions is seeking a skilled Databricks with Python Developer to build, optimize, and manage our big data processing and analytics solutions.

This role centres on keeping data operations running smoothly and delivering on business objectives through expert analysis, development, implementation, and troubleshooting of PySpark workloads on the Databricks platform.

Roles and Responsibilities:

  • Big Data Solution Development: Design, develop, and implement scalable data processing pipelines and analytics solutions using PySpark within the Databricks platform.
  • Data Ingestion & Transformation: Write efficient PySpark code to extract, transform, and load (ETL/ELT) large volumes of data from various sources into Databricks (a minimal ETL sketch follows this list).
  • Data Lake/Warehouse Management: Work with data lake and data warehouse concepts, ensuring data quality, consistency, and efficient storage within Databricks (e.g., Delta Lake).
  • Performance Optimization: Optimize PySpark jobs and Databricks notebooks for performance, cost-efficiency, and scalability, addressing bottlenecks in data processing (see the tuning sketch after this list).
  • Analysis & Insights: Perform complex data analysis using PySpark to uncover insights, build data models, and support business intelligence and machine learning initiatives.
  • Troubleshooting: Perform in-depth troubleshooting, debugging, and issue resolution for PySpark jobs, Databricks environments, and data pipeline failures.
  • Collaboration: Work closely with data engineers, data scientists, business analysts, and other stakeholders to understand data requirements and deliver robust solutions.
  • Code Quality & Best Practices: Write clean, modular, and well-documented PySpark code; participate in code reviews and adhere to best practices for big data development.
  • Automation: Implement automation for data pipelines, job scheduling, and monitoring within the Databricks ecosystem (a scheduling sketch using the Jobs REST API follows).
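
As a hedged illustration of the ingestion and Delta Lake work described above, here is a minimal PySpark ETL sketch: read raw files, cleanse and type them, and write a partitioned Delta table. The storage path, column names, and target table (/mnt/raw/orders/, order_id, analytics.orders_clean) are hypothetical placeholders, not details from this posting.

```python
# Minimal ETL sketch (assumed schema and paths; Delta Lake is bundled
# with Databricks runtimes, so no extra package is needed for it).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV files from cloud storage (placeholder path).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/raw/orders/"))

# Transform: deduplicate, fix types, derive a partition column, drop bad rows.
clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("order_date", F.to_date("order_ts"))
         .withColumn("amount", F.col("amount").cast("double"))
         .filter(F.col("amount") > 0))

# Load: write a partitioned Delta table registered in the metastore.
(clean.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .saveAsTable("analytics.orders_clean"))
```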
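
For the performance-optimization responsibility, the sketch below shows two routine PySpark tuning moves, again with assumed table and column names (analytics.dim_regions, region_id): broadcasting a small dimension table so the join happens map-side instead of shuffling the large table, and repartitioning before a partitioned write to limit small files.

```python
# Tuning sketch (assumed tables and columns; not tied to this posting).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

orders = spark.table("analytics.orders_clean")   # large fact table (assumed)
regions = spark.table("analytics.dim_regions")   # small dimension table (assumed)

# Broadcast hint: ship the small table to every executor so the join
# avoids shuffling the large fact table across the cluster.
joined = orders.join(F.broadcast(regions), on="region_id", how="left")

# Cache only if the result is reused several times within the same job.
joined.cache()

# Align in-memory partitioning with the on-disk partition column to
# avoid writing many small files per partition.
(joined
 .repartition("order_date")
 .write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .save("/mnt/curated/orders_by_region"))
```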
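
For the automation bullet, one common approach (a sketch of a general technique, not a detail of this role) is to register a notebook as a scheduled job through the Databricks Jobs 2.1 REST API. The workspace host, token, notebook path, and cluster ID below are placeholders to be supplied from your own environment.

```python
# Scheduling sketch using the Databricks Jobs 2.1 REST API.
# All identifiers below are placeholders.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]  # a personal access token

payload = {
    "name": "nightly-orders-etl",
    "tasks": [{
        "task_key": "etl",
        "notebook_task": {"notebook_path": "/Repos/data/orders_etl"},
        "existing_cluster_id": "1234-567890-abcde123",
    }],
    "schedule": {
        "quartz_cron_expression": "0 0 2 * * ?",  # run daily at 02:00
        "timezone_id": "UTC",
    },
}

resp = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])
```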

Preferred Candidate Profile:

  • PySpark Expertise: Strong hands-on development experience with PySpark for big data processing and analytics.
  • Databricks Platform: Proven experience working with the Databricks platform, including Databricks notebooks, clusters, Delta Lake, and related services.
  • SQL Proficiency: Excellent proficiency in SQL for data manipulation and querying.
  • Python Programming: Strong programming skills in Python for data engineering tasks.
  • Big Data Concepts: Solid understanding of big data concepts, distributed computing, and data warehousing principles.
  • Cloud Platforms (Plus): Familiarity with cloud services from AWS, Azure, or GCP, particularly those related to data storage and processing, is a plus.
  • Problem-Solving: Excellent analytical and problem-solving skills with a methodical approach to complex data challenges.
  • Communication: Strong verbal and written communication skills to articulate technical solutions and collaborate effectively within a team.
  • Education: Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related technical field.

Skills Required
PySpark, Databricks, SQL, Python, Big Data, AWS


Required Skill Profession

Computer Occupations




