
Senior Data Engineer
Braves Technologies, Pune



Job description

We are looking for a Senior Data Engineer to join our data engineering team and lead the design and implementation of scalable, high-performance data solutions.

The ideal candidate will have extensive experience with Python, Databricks, and AWS, along with strong hands-on skills with Docker.

You will play a key role in integrating data with the client’s vendors; the current integrations use flat-file exchange, and new vendors will be integrated via APIs.
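As a rough illustration of the flat-file side of this work, here is a minimal PySpark sketch of a vendor file ingestion; the S3 paths, delimiter, and column handling are illustrative assumptions rather than the actual vendor specification.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession is provided as `spark`; creating one here
# keeps the sketch self-contained.
spark = SparkSession.builder.appName("vendor-flat-file-ingest").getOrCreate()

# Read a pipe-delimited vendor extract dropped into S3 (hypothetical path).
raw = (
    spark.read
    .option("header", "true")
    .option("sep", "|")
    .csv("s3://example-bucket/vendor-x/extract_2024-01-01.txt")
)

# Light standardisation before landing: trim string values and stamp the
# load time so downstream jobs can identify new records.
cleaned = (
    raw.select([F.trim(F.col(c)).alias(c) for c in raw.columns])
       .withColumn("ingested_at", F.current_timestamp())
)

# Land the extract as Parquet; Delta would be the natural choice on
# Databricks, but Parquet keeps the sketch dependency-free.
cleaned.write.mode("append").parquet("s3://example-bucket/landing/vendor_x/")
```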

Our Company:
Founded in 2003, Braves Technologies helps global technology companies incubate dedicated offshore software development teams in India.

For the past 15+ years, Braves has been building Software Engineering, Game Development, and Customer Success teams for clients across the US and Australia.


For more information, you can visit our website.

Our Culture:
We are a team focused on high performance, consistent delivery, diverse thinking, and a collaborative culture at all levels.

We value and encourage learning throughout the organization.

Every employee at Braves takes ownership and delivers on what is required.

We support a healthy work-life balance.


Key Responsibilities
  • Design and implement scalable ETL/ELT pipelines using Databricks and Apache Spark (a brief sketch of such a pipeline follows this list).
  • Develop, maintain, and optimise SQL and PL/SQL scripts for data extraction, transformation, and loading from Oracle and other relational databases.
  • Write robust, reusable, and optimised Python and Shell scripts for data automation and workflow orchestration.
  • Manage data ingestion and integration processes across structured and unstructured sources.
  • Deploy and manage code using GitHub for version control and collaboration.
  • Monitor, troubleshoot, and improve performance of data pipelines and jobs.
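As a small, concrete example of the pipeline work above, the following PySpark sketch reads a landed dataset, applies typing and aggregation, and writes a curated output; the table paths and column names are assumptions for illustration only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-vendor-etl").getOrCreate()

# Extract: read the landed vendor records (on Databricks this would more
# commonly be a Delta table registered in a catalog).
orders = spark.read.parquet("s3://example-bucket/landing/vendor_x/")

# Transform: cast the raw string columns and roll up to one row per
# customer per day.
daily = (
    orders
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("customer_id", "order_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Load: write the curated output partitioned by date for downstream reads.
(
    daily.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/daily_vendor_orders/")
)
```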

Required Skills & Experience
  • 6+ years of experience in data engineering with large-scale data systems.
  • Strong proficiency in Databricks, Python, AWS S3, and Docker.
  • Solid experience with SQL and PL/SQL.
  • Advanced programming in Python for data transformation and automation.
  • Hands-on experience with GitHub for code versioning, branching, and collaboration.
  • Familiarity with data quality frameworks and best practices in data architecture (see the example check after this list).
  • Experience implementing lakehouse architectures.
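To give a flavour of the data quality checks mentioned above, here is a plain PySpark sketch of a simple quality gate run before publishing a table; the dataset path and the specific rules are hypothetical, and in practice a dedicated framework could replace the hand-rolled checks.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Read the curated dataset produced by the pipeline (hypothetical path).
df = spark.read.parquet("s3://example-bucket/curated/daily_vendor_orders/")

# Rule 1: key columns must never be null.
null_keys = df.filter(
    F.col("customer_id").isNull() | F.col("order_date").isNull()
).count()

# Rule 2: aggregated amounts must be non-negative.
negative_amounts = df.filter(F.col("total_amount") < 0).count()

failures = {"null_keys": null_keys, "negative_amounts": negative_amounts}
failed = {name: count for name, count in failures.items() if count > 0}

# Fail loudly so the orchestrator (for example, a Databricks Workflow)
# stops downstream tasks instead of publishing bad data.
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")

print("All data quality checks passed.")
```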

What’s in it for you / Benefits of working with us:
  • Competitive salary
  • Remote/hybrid work culture
  • Flexible work timings
  • Family group medical health insurance
  • Group accidental insurance
  • Leave encashment calculated on gross salary, not just base salary
  • Regular fun and sports activities
  • Birthday and anniversary celebrations




