Position Description:
Your future duties and responsibilities:
• Design, develop, and maintain ETL solutions for large-scale data pipelines and migration projects.
• Work with Java, SQL, and Oracle Database to build and optimize data integration workflows.
• Develop scripts using Python, Groovy, or Shell to automate and enhance ETL processes.
• Collaborate with cross-functional teams to gather requirements and deliver effective data solutions.
• Ensure data quality and accuracy, and tune the performance of ETL jobs and SQL queries.
• Troubleshoot and resolve issues in data processing and pipeline workflows.
• Participate in code reviews, best practice implementation, and process improvements.
• Contribute to documentation, deployment, and support of ETL processes in production environments.
Required qualifications to be successful in this role:
• Experience with cloud platforms such as AWS, Azure, or GCP for data engineering and ETL.
• Knowledge of Big Data technologies such as Hadoop, Spark, or Kafka.
• Familiarity with CI/CD pipelines, version control (Git/Bitbucket), and DevOps practices.
• Exposure to data modeling, data warehousing, and data governance frameworks.
• Experience working in Agile/Scrum development environments.
• Strong understanding of performance tuning for SQL queries and ETL jobs.
• Excellent problem-solving, analytical, and communication skills to work with diverse stakeholders.
Skills: