Job Description
About Reckonsys Tech Labs
Reckonsys is a boutique software product development services firm specializing in creating uncommon solutions for common problems by using the right technologies and best practices.
Reckonsys works with startup founders to build their MVPs and with existing enterprises to solve interesting problems. Established in 2015, we have grown to 65+ associates over the last 10 years.
Of the 40 products we have built for our clients, 20 have been funded, acquired, or become profitable!

Role Overview
We are seeking Python Developers who are hands-on with Generative AI (Gen AI), the Model Context Protocol (MCP), Agent-to-Agent (A2A) workflows, and Retrieval-Augmented Generation (RAG).
This is not a research role: it is about shipping production-grade AI systems that work reliably in the wild. You will architect, implement, and optimize AI backends that combine Python engineering discipline with Gen AI capabilities such as tool orchestration, retrieval pipelines, and observability.

Key Responsibilities

Python Engineering
- Build clean, modular, and scalable Python codebases using FastAPI/Django.
- Implement APIs, microservices, and data pipelines to support AI use cases.

Generative AI & RAG
- Implement RAG pipelines: text preprocessing, embeddings, chunking strategies, retrieval, re-ranking, and evaluation.
- Integrate with LLM APIs (OpenAI, Anthropic, Gemini, Mistral) and open-source models (Llama, MPT, Falcon).
- Handle context-window optimization and fallback strategies for production workloads.

MCP (Model Context Protocol)
- Develop MCP servers to expose tools, resources, and APIs to LLMs.
- Work with the FastMCP SDK and design proper tool/resource decorators.
- Ensure MCP servers follow best practices for discoverability, schema compliance, and security.

Agent-to-Agent (A2A) Workflows
- Design and implement multi-agent orchestration (e.g., AutoGen, CrewAI, LangGraph).
- Build pipelines for agents to delegate tasks, exchange structured context, and collaborate.
- Add observability, replay, and guardrails to A2A interactions.

Production & Observability
- Deploy solutions using Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Add tracing, logging, and evaluation metrics (PromptFoo, LangSmith, Ragas).
- Optimize for latency, cost, and accuracy in real-world deployments.

Required Skills & Qualifications
- 3–7 years of professional experience with Python (3.9+).
- Strong knowledge of OOP, async programming, and REST API design.
- Proven hands-on experience with RAG implementations and vector databases (Pinecone, Weaviate, FAISS, Qdrant, Milvus).
- Familiarity with MCP (Model Context Protocol) concepts and hands-on experience with MCP server implementations.
- Understanding of multi-agent workflows and orchestration libraries (LangGraph, AutoGen, CrewAI).
- Proficiency with FastAPI/Django for backend development.
- Comfort with Docker, GitHub Actions, and CI/CD pipelines.
- Practical experience with cloud infrastructure (AWS/GCP/Azure).

Nice-to-Have
- Exposure to AI observability and evaluation tooling (LangSmith, PromptFoo, Ragas).
- Contributions to open-source AI/ML or MCP projects.
- Understanding of compliance/security frameworks (SOC 2, GDPR, HIPAA).
- Prior work with custom embeddings, fine-tuning, or LLMOps stacks.

What We Offer
- Opportunity to own core AI modules (MCP servers, RAG frameworks, A2A orchestration).
- End-to-end involvement from architecture → MVP → production rollout.
- A fast-moving, engineering-first culture where experimentation is encouraged.
- Competitive compensation, flexible work setup, and strong career growth.

Location: Bangalore (Hybrid) / Remote
Experience Level: 3–7 years
Compensation: Competitive, based on expertise
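
To give candidates a feel for the RAG work described above (chunking, embedding, retrieval, ranking), here is a minimal end-to-end sketch in plain Python. Everything is a toy illustration: `chunk_sentences`, `embed`, and `retrieve` are hypothetical names, and a bag-of-words vector stands in for a real embedding model and vector database such as Pinecone or Qdrant.

```python
import math
import re

def tokenize(text: str) -> list[str]:
    return re.findall(r"\w+", text.lower())

def chunk_sentences(text: str) -> list[str]:
    # A simple chunking strategy: one chunk per sentence.
    return [s.strip() for s in text.split(".") if s.strip()]

def embed(text: str, vocab: list[str]) -> list[float]:
    # Toy bag-of-words vector standing in for a real embedding model.
    tokens = tokenize(text)
    return [float(tokens.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # The retrieval step of RAG: embed query and chunks,
    # rank by cosine similarity, return the top-k.
    vocab = sorted({t for c in chunks + [query] for t in tokenize(c)})
    qv = embed(query, vocab)
    return sorted(chunks, key=lambda c: cosine(embed(c, vocab), qv), reverse=True)[:k]

corpus = ("Pinecone and Qdrant are vector databases. "
          "FastAPI serves the retrieval endpoint. "
          "Kubernetes handles the deployment.")
chunks = chunk_sentences(corpus)
top = retrieve("which vector databases are supported", chunks, k=1)
# top[0] is the sentence about vector databases
```

In production the same shape holds, with the toy vectors replaced by a real embedding API, the in-memory sort replaced by a vector-database query, and a re-ranking and evaluation stage added after retrieval.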
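
The MCP responsibilities above center on registering tools with discoverable names, descriptions, and parameter schemas. The sketch below imitates that decorator pattern with an invented `ToyMCPServer` class; it is not the real FastMCP SDK, only an illustration of the registration-and-discovery idea that the role involves.

```python
import inspect
from typing import Any, Callable

class ToyMCPServer:
    # Hypothetical stand-in for an MCP server: registers tools so an
    # LLM client can discover their names, descriptions, and parameters.
    def __init__(self, name: str):
        self.name = name
        self.tools: dict[str, Callable[..., Any]] = {}

    def tool(self, func: Callable[..., Any]) -> Callable[..., Any]:
        # Decorator that registers a function as a callable tool.
        self.tools[func.__name__] = func
        return func

    def list_tools(self) -> list[dict[str, Any]]:
        # Discovery: expose each tool's name, description, and parameter names.
        return [
            {
                "name": name,
                "description": (f.__doc__ or "").strip(),
                "parameters": list(inspect.signature(f).parameters),
            }
            for name, f in self.tools.items()
        ]

    def call(self, name: str, **kwargs: Any) -> Any:
        return self.tools[name](**kwargs)

mcp = ToyMCPServer("demo")

@mcp.tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

schema = mcp.list_tools()
result = mcp.call("add", a=2, b=3)  # 5
```

The real protocol adds JSON Schema validation, resource URIs, and transport concerns, but the core pattern the role asks for (decorate, describe, discover, invoke) is the one shown here.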
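
The A2A bullets describe agents delegating tasks, exchanging structured context, and staying observable and replayable. A minimal, hypothetical sketch of that message-passing shape (all class names invented; frameworks like AutoGen or LangGraph provide production versions of this):

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    # Structured context passed between agents (hypothetical schema).
    sender: str
    task: str
    payload: dict

@dataclass
class TraceLog:
    # Minimal observability: record every message for replay and audit.
    events: list = field(default_factory=list)

    def record(self, msg: Message) -> None:
        self.events.append(msg)

class WorkerAgent:
    name = "worker"

    def handle(self, msg: Message) -> Message:
        total = sum(msg.payload["numbers"])
        return Message(self.name, "result", {"total": total})

class PlannerAgent:
    name = "planner"

    def __init__(self, worker: WorkerAgent, log: TraceLog):
        self.worker, self.log = worker, log

    def run(self, numbers: list) -> int:
        # Delegate a task to the worker via a structured message.
        request = Message(self.name, "sum", {"numbers": numbers})
        self.log.record(request)
        reply = self.worker.handle(request)
        self.log.record(reply)
        # Guardrail: validate the reply shape before trusting it.
        assert reply.task == "result" and "total" in reply.payload
        return reply.payload["total"]

log = TraceLog()
planner = PlannerAgent(WorkerAgent(), log)
answer = planner.run([1, 2, 3])  # 6; log.events holds both messages for replay
```

Swapping the direct method call for an async transport and the `assert` for schema validation gives the production version of the same pattern.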