
Data Engineer - PySpark/Azure Databricks

Job opening in Pune (SpeedMart)



Job description

Company Profile:

Our client is a global IT services company with offices in India and the United States that helps businesses with digital transformation. It provides IT collaboration and uses technology, innovation, and enterprise to have a positive impact on the world of business. With expertise in Data, IoT, AI, Cloud Infrastructure, and SAP, it helps accelerate digital transformation through its key practice areas of IT staffing on demand, innovation, and growth, focusing on cost and problem solving.

Job Profile: Data Engineer
Location: Pune
Employment Type: Full-time, WFO, regular shift
Preferred Experience: 6+ years

The Role:

Responsible for day-to-day tasks related to data engineering, data modeling, ETL (Extract, Transform, Load), data warehousing, and data analytics. Work with AWS and Databricks to design, develop, and maintain data pipelines and data platforms. Build operational and automated dashboards and reports with Power BI and other reporting tools.

Responsibilities:

- Work extensively on Databricks and its modules, using PySpark for data processing (an illustrative sketch follows the job description below).
- Design, develop, and optimize scalable ETL pipelines using the Databricks platform and cloud services to transform raw data into actionable insights.
- Design and implement data storage solutions on AWS.
- Develop and maintain data models and schemas optimized for specific use cases.
- Build and maintain data pipelines for data integration and processing.
- Optimize data processing performance through tuning and monitoring.
- Ensure data security and privacy requirements are followed.
- Design and develop BI reports and dashboards, including operational and functional reports.
- Build automated reports and dashboards with Power BI and other reporting tools.
- Understand business requirements to set functional specifications for reporting applications.

Must-Have Qualifications:

- 6+ years of proven experience in data engineering roles.
- Strong expertise in Databricks and PySpark for building scalable data solutions.
- Advanced proficiency in SQL (optimization, complex transformations, performance tuning).
- Solid hands-on experience with the AWS cloud data stack (S3, Glue, Lambda, Redshift, Step Functions, EMR, etc.).
- Strong understanding of data modeling, warehousing, and ETL/ELT best practices.
- Pharma/healthcare domain knowledge, with the ability to work with clinical, commercial, and regulatory datasets.
- Experience with data governance, security, and compliance in a regulated industry.

Preferred Qualifications:

- Excellent communication, problem-solving, and leadership skills.

(ref: hirist.tech)
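For illustration only, here is a minimal PySpark sketch of the kind of Databricks ETL step described in the responsibilities above: read raw files from S3, apply basic cleansing and aggregation, and write a curated table that a BI tool such as Power BI can query. All paths, column names, and the Delta output format are hypothetical assumptions, not details taken from this posting.

# Minimal illustrative PySpark ETL sketch; names and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Extract: read raw CSV files landed in S3 (on Databricks the bucket would
# typically be mounted or reached via an instance profile).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-bucket/orders/")  # hypothetical path
)

# Transform: basic cleansing plus a daily revenue aggregate.
clean = (
    raw.dropDuplicates(["order_id"])  # hypothetical key column
       .filter(F.col("order_status") == "COMPLETED")
       .withColumn("order_date", F.to_date("order_ts"))
)

daily_revenue = (
    clean.groupBy("order_date")
         .agg(F.sum("amount").alias("total_revenue"),
              F.countDistinct("customer_id").alias("unique_customers"))
)

# Load: write a partitioned table for downstream reporting.
(
    daily_revenue.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("s3://example-curated-bucket/marts/daily_revenue/")  # hypothetical path
)

On a Databricks cluster the spark session already exists and the builder call simply returns it; running this outside Databricks would additionally require the Delta Lake package to be configured for the "delta" format.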


Required Skill Profession: Computer Occupations


