- Maintain ETL/ELT pipelines using modern data engineering tools and frameworks   
- Provide on-call support for data pipeline health, performance, and SLA compliance
- Document data processes, schemas, and best-practice SOPs
- Implement data quality checks, monitoring, and alerting systems to ensure data reliability
- Optimize data pipeline performance and troubleshoot production issues
     
 
 Requirements 
• 3+ years of experience in data engineering, software engineering, or a related role
  
• Proven experience building and maintaining production data pipelines
  
• Strong proficiency in Spark SQL and hands-on experience with real-time streaming technologies such as Kafka and Flink
  
• Databases: Strong knowledge of relational databases (Oracle, MySQL) and NoSQL systems
  
• Proficiency with version control (Git), CI/CD practices, and collaborative development workflows
  
• Strong operations management and stakeholder communication skills 
  
• Flexibility to work across time zones
  
• Cross-cultural communication mindset
  
• Experience working in cross-functional teams
  
• Continuous learning mindset and adaptability to new technologies 
 
 
• Programming: the role involves developing and maintaining streaming jobs and supporting services, so a good grasp of at least one or two programming languages such as Python, Java, or Scala is preferred
   
• Good understanding of streaming technologies such as Kafka and Flink, in addition to broader big data knowledge
       
 
 Benefits 
- Competitive salary and performance-based bonuses.
- Comprehensive insurance plans.
- Collaborative and supportive work environment.
- Chance to learn and grow with a talented team.
- A positive and fun work environment.