Job Description
            
Senior Data Engineer:

The Senior Data Engineer will design, develop, and maintain scalable data pipelines and infrastructure to support data-driven decision-making and advanced analytics. This role requires deep expertise in data engineering, strong problem-solving skills, and the ability to collaborate with cross-functional teams to deliver robust data solutions.

Key Responsibilities:

- Data Pipeline Development: Design, build, and optimize scalable, secure, and reliable data pipelines to ingest, process, and transform large volumes of structured and unstructured data.
- Data Architecture: Architect and maintain data storage solutions, including data lakes, data warehouses, and databases, ensuring performance, scalability, and cost-efficiency.
- Data Integration: Integrate data from diverse sources, including APIs, third-party systems, and streaming platforms, ensuring data quality and consistency.
- Performance Optimization: Monitor and optimize data systems for performance, scalability, and cost, implementing best practices for partitioning, indexing, and caching.
- Collaboration: Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions that enable advanced analytics, machine learning, and reporting.
- Data Governance: Implement data governance policies, ensuring compliance with data security, privacy regulations (e.g., GDPR, CCPA), and internal standards.
- Automation: Develop automated processes for data ingestion, transformation, and validation to improve efficiency and reduce manual intervention.
- Mentorship: Guide and mentor junior data engineers, fostering a culture of technical excellence and continuous learning.
- Troubleshooting: Diagnose and resolve complex data-related issues, ensuring high availability and reliability of data systems.

Qualifications:

- Education: Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- Experience: 3+ years of experience in data engineering or a related role, with a proven track record of building scalable data pipelines and infrastructure.

Technical Skills:

- Proficiency in programming languages such as Python, Java, or Scala.
- Expertise in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., Redshift, BigQuery, Snowflake).
- Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, Talend, Informatica) and data integration frameworks.
- Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) and distributed systems.
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.

Soft Skills:

- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Ability to work in a fast-paced, dynamic environment and manage multiple priorities.

Certifications (optional but preferred): Cloud certifications (e.g., AWS Certified Data Analytics, Google Professional Data Engineer) or relevant data engineering certifications.

Preferred Qualifications:

- Experience with real-time data processing and streaming architectures.
- Familiarity with machine learning pipelines and MLOps practices.
- Knowledge of data visualization tools (e.g., Tableau, Power BI) and their integration with data pipelines.
- Experience in industries with high data complexity, such as finance, healthcare, or e-commerce.

(ref:hirist.tech)