Job Description:
Data Engineer – MEAN Stack
About Us  
At Codvo, we are committed to building scalable, future-ready data platforms that power business impact.
We believe in a culture of innovation, collaboration, and growth, where engineers can experiment, learn, and thrive.
Join us to be part of a team that solves complex data challenges with creativity and cutting-edge technology.
Role Overview
We are looking for a highly skilled Full Stack Data Engineer with expertise in the MEAN stack to design, develop, and optimize end-to-end applications and analytics solutions.
This role combines strong cloud platform expertise and software engineering skills to deliver scalable, production-grade solutions.
Key Responsibilities
- Design and develop applications on the MEAN stack.
- Mentor and assist team members toward common objectives.
- Implement CI/CD pipelines for MEAN stack workflows using GitHub Actions, cloud DevOps tooling, or similar.
- Build and maintain APIs, dashboards, and applications that consume processed data (the full-stack aspect of the role).
- Collaborate with data scientists, analysts, and business stakeholders to deliver solutions.
- Ensure data quality, lineage, governance, and security compliance.
- Deploy solutions across cloud environments (Azure/AWS/GCP).
Required Skills & Qualifications
Core MEAN Stack Skills:
- Strong command of the MEAN stack and SQL.
- Front end (AngularJS), back end (Node.js, Express.js), HTML, CSS, Bootstrap.
- Data layer (MongoDB/Postgres/MySQL); cloud (Azure).
- Programming languages: JavaScript, TypeScript.
- Frameworks/design patterns: MVC/MVVM.
Programming & Full Stack:
- Python (mandatory), SQL (expert).
- Exposure to Java/Scala (for Spark jobs).
- Knowledge of APIs, microservices (FastAPI/Flask), or basic front-end (React/Angular) is a plus.
DevOps & CI/CD:
- Git, CI/CD tools (GitHub Actions, Azure DevOps, Jenkins).
- Containerization with Docker (Kubernetes is a plus).
Data Engineering Foundations:
- Data modeling (OLTP/OLAP).
- Batch & streaming data processing (Kafka, Event Hubs, Kinesis).
- Data governance & compliance (Unity Catalog, Lakehouse security).
Nice-to-Have
- Experience with machine learning pipelines (MLflow, Feature Store).
- Knowledge of data visualization tools (Power BI, Tableau, Looker).
- Exposure to graph databases (Neo4j) or RAG/LLM pipelines.
Qualifications
- Bachelor’s or Master’s in Computer Science, Data Engineering, or a related field.
- 4–7 years of experience in data engineering, with deep expertise in the MEAN stack.
Soft Skills
- Strong problem-solving and analytical skills.
- Ability to work in fusion teams (business + engineering + AI/ML).
- Clear communication and documentation abilities.