Job Description
<p><p><b>Position Name</b> : Software Engineer : Mumbai (Wadala) - Work from Office<br/><br/><b>Experience Range</b> : 4+ years<br/><br/><b>Mandatory Requirements :</b><br/><br/>- Strong experience in Python with the Flask/FastAPI frameworks.<br/><br/>- Experience in microservices development using AWS Lambda.<br/><br/>- Experience in data processing pipelines using PySpark in AWS Glue.<br/><br/>- Strong knowledge of relational databases like PostgreSQL or MySQL.<br/><br/>- Experience with NumPy and Pandas for data processing.<br/><br/>- Knowledge of Celery and Redis/RabbitMQ/AWS SQS message queues for asynchronous task processing.<br/><br/><b>General DevOps Skills :</b><br/><br/>- Experience with CI/CD pipelines.<br/><br/>- Good understanding of Git workflows and version control.<br/><br/>- Knowledge of API documentation tools like Swagger/OpenAPI.<br/><br/>- Familiarity with Agile methodologies like Scrum/Kanban and the Jira project management tool.<br/><br/><b>About the Client :</b><br/><br/>Our client is a leading insurance broking company. As one of India's leading insurance brokers, they bring clarity to the complex world of insurance. With a pan-India presence across 1,000+ cities and decades of collective experience, they navigate intricate risk landscapes with expertise and precision.<br/><br/><b>Job Roles and Responsibilities :</b><br/><br/>- Backend Development : Design, develop, and maintain RESTful APIs using Flask or FastAPI.<br/><br/>- Develop microservices using AWS Lambda functions and ETL jobs using AWS Glue & PySpark.<br/><br/>- Cleanse, transform, and analyze complex datasets with PySpark to support business insights and analytics.<br/><br/>- Optimize PySpark jobs for performance and scalability.<br/><br/>- Work with Pandas & NumPy for data transformation and analytics.<br/><br/><b>Serverless Application Development (AWS SAM & Lambda) :</b><br/><br/>- Design and deploy serverless applications using AWS SAM (Serverless Application Model) to automate 
infrastructure provisioning.<br/><br/>- Develop, test, and maintain AWS Lambda functions for real-time data processing, microservices, and backend automation.<br/><br/><b>Data Engineering with AWS Glue :</b><br/><br/>- Create ETL pipelines with AWS Glue to transform, clean, and catalog structured and semi-structured data.<br/><br/>- Develop Glue jobs using PySpark and monitor performance, scaling, and job triggers.<br/><br/>- Integrate Glue with data lakes and other AWS data sources such as S3 and Aurora.<br/><br/><b>Authentication and Access Control (AWS Cognito) :</b><br/><br/>- Implement secure user authentication and authorization using AWS Cognito (user pools and identity pools).<br/><br/>- Customize token policies, integrate social logins (OAuth2, SAML), and manage identity federation.<br/><br/><b>AWS Bedrock & LLMs :</b><br/><br/>- Utilize AWS Bedrock to build, test, and fine-tune LLM-powered applications using models like Anthropic Claude, Meta Llama, or Amazon Titan.<br/><br/>- Design prompt engineering strategies, fine-tuning workflows, and RAG (Retrieval-Augmented Generation) architectures.<br/><br/><b>GraphQL API Design (AWS AppSync) :</b><br/><br/>- Design scalable GraphQL APIs with AWS AppSync to simplify frontend/backend integration.<br/><br/>- Implement resolvers using Lambda, DynamoDB, and Aurora Serverless data sources.<br/><br/>- Handle schema stitching, caching, real-time subscriptions, and access control.<br/><br/><b>Frontend Integration & DevOps (AWS Amplify) :</b><br/><br/>- Integrate frontend apps (React) with Amplify for CI/CD, hosting, and backend service integration.<br/><br/>- Configure Amplify with GraphQL endpoints (AppSync), Cognito auth, and storage modules.<br/><br/>- Manage deployment pipelines and environment-specific builds.<br/><br/><b>Vector Store Design & Search :</b><br/><br/>- Design schemas for storing dense vector embeddings from LLMs or NLP pipelines.<br/><br/>- Integrate vector DBs with LLMs using frameworks like LangChain or 
custom RAG workflows.<br/><br/><b>Deployment & Performance Optimization :</b><br/><br/>- Optimize APIs and database queries for high performance.<br/><br/>- Deploy and manage applications using Docker and Kubernetes (EKS/ECS).<br/><br/>- Implement unit tests and integration tests, and maintain code quality.<br/><br/><b>Technical Leadership :</b><br/><br/>- Contribute to architectural decisions and collaborate with stakeholders to gather and analyze requirements.<br/><br/>- Mentor junior engineers and contribute to code reviews to ensure high-quality code.<br/><br/><b>Problem Solving :</b><br/><br/>- Debug and resolve technical issues and performance bottlenecks in a timely manner.<br/><br/>- Provide innovative solutions to complex technical challenges.<br/><br/><b>Continuous Improvement :</b><br/><br/>- Stay updated with emerging technologies and incorporate them into existing systems when beneficial.<br/><br/>- Optimize application performance and scalability through regular refactoring and tuning.<br/><br/><b>Collaboration & Best Practices :</b><br/><br/>- Work with product managers, business teams, and data engineers.<br/><br/>- Participate in code reviews, sprint planning, and architecture discussions.<br/><br/>- Ensure security best practices in both the backend and the frontend.<br/><br/>- Maintain technical documentation and API specifications.<br/><br/><b>Qualifications and Experience :</b><br/><br/>- Bachelor's or Master's degree in Computer Science or a related field.<br/><br/>- 4+ years of experience in software engineering/development.<br/><br/>- 3+ years of experience in Python.<br/><br/>- 1+ years of experience in ReactJS.<br/><br/>- 2+ years of experience with AWS SAM and AWS services - Lambda, Glue, Cognito, Bedrock, AppSync & Amplify.<br/><br/>- 1+ years of experience with vector databases & LLMs.<br/></p><br/></p> (ref:hirist.tech)