
Big Data Engineer – Python/SQL/ETL

CURATAL – Bengaluru, India



Job description

Key Responsibilities:

- Design, develop, and support robust ETL pipelines to extract, transform, and load data into analytical products that drive strategic organizational goals.
- Develop and maintain data workflows on platforms such as Databricks and Apache Spark using Python and Scala.
- Create and support data visualizations using tools such as MicroStrategy, Power BI, or Tableau, with a preference for MicroStrategy.
- Implement streaming data solutions using frameworks such as Kafka for real-time data processing.
- Collaborate with cross-functional teams to gather requirements, design solutions, and ensure smooth data operations.
- Manage data storage and processing in cloud environments, with strong experience in AWS cloud services.
- Apply knowledge of data warehousing, data modeling, and SQL to optimize data flow and accessibility.
- Develop scripts and automation tools using Linux shell scripting and other languages as needed.
- Follow continuous integration and continuous delivery (CI/CD) practices for data pipeline deployments using containerization and orchestration technologies.
- Troubleshoot production issues, optimize system performance, and ensure data accuracy and integrity.
- Work effectively within Agile development teams and contribute to sprint planning and reviews.

Skills & Experience:

- 7+ years of experience in technology with a focus on application development and production support.
- At least 5 years of experience in developing ETL pipelines and data engineering workflows.
- Minimum 3 years of hands-on experience in ETL development and support using Python/Scala on Databricks/Spark platforms.
- Strong experience with data visualization tools, preferably MicroStrategy, Power BI, or Tableau.
- Proficiency in Python, Apache Spark, Hive, and SQL.
- Solid understanding of data warehousing concepts, data modeling techniques, and analytics tools.
- Experience working with streaming data frameworks such as Kafka.
- Working knowledge of Core Java, Linux, SQL, and at least one scripting language.
- Experience with relational databases, preferably Oracle.
- Hands-on experience with AWS cloud platform services related to data engineering.
- Familiarity with CI/CD pipelines, containerization, and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to Agile development methodologies.
- Strong interpersonal, communication, and collaboration skills.
- Ability and eagerness to quickly learn and adapt to new technologies.

Qualifications:

- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Experience working in large-scale, enterprise data environments.
- Prior experience with cloud-native big data solutions and data governance best practices.

(ref:hirist.tech)
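To illustrate the extract-transform-load pattern at the core of this role, here is a minimal, self-contained sketch using only Python's standard library. It is an assumption-laden toy, not CURATAL's stack: the in-memory `sqlite3` database stands in for the warehouse, and the `raw` list stands in for a source system; in practice this work would run on Spark/Databricks against AWS storage.

```python
# Minimal ETL sketch (hypothetical example; sqlite3 stands in for the warehouse).
import sqlite3

def run_etl(raw_rows):
    """Extract raw event rows, transform (validate + aggregate), load into SQL."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE daily_revenue (day TEXT PRIMARY KEY, revenue REAL)"
    )

    # Transform: aggregate cents per day, dropping malformed records.
    totals = {}
    for row in raw_rows:
        if row.get("amount_cents") is None:
            continue  # skip records that fail validation
        totals[row["day"]] = totals.get(row["day"], 0) + row["amount_cents"]

    # Load: write the aggregated result set in one batch.
    conn.executemany(
        "INSERT INTO daily_revenue (day, revenue) VALUES (?, ?)",
        [(day, cents / 100.0) for day, cents in sorted(totals.items())],
    )
    conn.commit()
    return conn

raw = [
    {"day": "2024-01-01", "amount_cents": 1250},
    {"day": "2024-01-01", "amount_cents": 750},
    {"day": "2024-01-02", "amount_cents": None},  # malformed record
    {"day": "2024-01-02", "amount_cents": 500},
]
conn = run_etl(raw)
print(conn.execute("SELECT day, revenue FROM daily_revenue ORDER BY day").fetchall())
# → [('2024-01-01', 20.0), ('2024-01-02', 5.0)]
```

The same extract → validate → aggregate → load shape carries over to a PySpark job; only the engine and storage layer change.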


Required Skill Profession: Computer Occupations




