Job Description
Job Title: Principal Engineer, Test AI Core Services Development

Location: Bangalore (WFO)

Time: 9:30 AM to 6:30 PM

Company Overview:

At Codvo, software and people transformations go hand-in-hand.
We are a global empathy-led technology services company.
Product innovation and mature software engineering are part of our core DNA.

Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day.

We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.

Role Requirement:

We are looking for a detail-oriented and forward-thinking Test Engineer to ensure the quality, performance, and security of our Core AI Services.

You will help validate distributed, cloud-native services and public APIs that form the foundation for enterprise AI capabilities.

This role demands deep technical skill and a passion for delivering robust, secure, and ethical AI services at scale.

You'll be part of a Scrum team and work closely with developers and architects to design effective validation strategies, automated testing frameworks, and AI-specific evaluation tools with a builder mindset: rapid prototyping and continuous improvement with the agility of a start-up.

Key Responsibilities:

- Perform functional, performance, and security testing on cloud-native services deployed on Microsoft Azure.
- Design and implement automated test suites for APIs, service components, and AI pipelines.
- Automate the evaluation of AI system outputs to ensure accuracy, consistency, and safety of responses.
- Collaborate with developers and data scientists to establish service-level quality metrics and observability hooks.
- Validate services against AI regulatory frameworks and ensure traceability, fairness, and robustness in outcomes.
- Participate in threat modelling and security validation of exposed APIs and AI services.
- Provide feedback early in the lifecycle to reduce defects and improve design.
- Mentor junior testers, encourage continuous learning, and contribute to a culture of innovation.

AI & Cloud Expertise:

- Familiarity with LLM evaluation techniques, output scoring, and validation frameworks.
- Understanding of key concepts such as prompt engineering, RAG, model orchestration, and hallucination detection.
- Experience in testing for accuracy, relevance, and consistency of AI model predictions/generations.
- Experience defining performance metrics for AI services and testing against them.
- Awareness of AI safety, bias detection, and explainability techniques.
- Experience ensuring compliance with AI regulations and standards (e.g., NIST AI RMF, EU AI Act).
- Strong belief in ethical AI practices, transparency, and end-user trust.

Core Skills and Qualifications:

- 12+ years of experience in software testing, QA, or validation roles for cloud-native applications using Microsoft and .NET technologies.
- Proficiency in designing automated testing frameworks.
- Hands-on experience with Azure DevOps, CI/CD pipelines, and containerized test environments.
- Strong understanding of API testing, performance profiling, and security testing (including the OWASP Top 10).
- Excellent problem-solving skills, with the ability to analyse complex technical challenges and propose scalable solutions.
- Experience working in Agile teams and collaborating across global R&D locations.
- Demonstrated ability to mentor junior team members, fostering a culture of continuous learning and innovation.

(ref:hirist.tech)