This is a highly specialized, hands-on role focused on building and deploying generative AI solutions.
It requires someone who can not only work with AI models but also integrate them seamlessly and securely into existing enterprise software.
- Generative AI & LLM Integration: The core responsibility is integrating Gemini models into corporate platforms like Slack and Confluence. This involves hands-on development, prompt engineering, and the deployment of large language models (LLMs) in a production environment.
- AI Orchestration & MLOps: A key part of the job is building the infrastructure that makes the AI work. This includes managing orchestration logic, setting up embedding pipelines, and ensuring all components, from the prompt to the data retrieval, work together smoothly.
- Vector Databases & Data Engineering: You must be proficient with vector databases (like Pinecone or Weaviate) and understand the process of creating embeddings from structured and unstructured data. This is crucial for enabling the AI to retrieve relevant information from a company's internal documentation.
- API & System Integration: The role requires strong technical skills to connect various platforms. You'll need to set up API authentication and role-based access controls to ensure the AI assistants can securely access data from systems like Looker and Confluence.
- Agile Development: You will be working in a sprint-based Agile environment, so familiarity with concepts like daily standups, sprint demos, and user acceptance testing is essential for managing projects and meeting deadlines.
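The embedding and retrieval responsibilities above follow a standard pattern: embed documents and queries into vectors, then rank documents by similarity. The sketch below illustrates the idea with a toy bag-of-words "embedding" and cosine similarity; a real pipeline would call an embedding model and query a vector database such as Pinecone or Weaviate, and all names here (the sample documents, the vocabulary) are illustrative.

```python
import math
from collections import Counter

def embed(text: str, vocab: list[str]) -> list[float]:
    # Toy stand-in for a real embedding model: bag-of-words counts
    # over a fixed vocabulary.
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: dict[str, str], vocab: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query vector and return the
    # top k; this is what a vector database does at scale.
    qv = embed(query, vocab)
    ranked = sorted(docs, key=lambda d: cosine(qv, embed(docs[d], vocab)), reverse=True)
    return ranked[:k]

# Illustrative internal documents and vocabulary.
vocab = ["deploy", "rollback", "vacation", "policy", "api", "token"]
docs = {
    "runbook.md": "deploy rollback deploy api token",
    "hr-faq.md": "vacation policy vacation",
}
print(retrieve("how do I deploy and rollback", docs, vocab))  # → ['runbook.md']
```

The same interface (embed, store, rank by similarity) carries over when the toy pieces are swapped for a production embedding model and a managed vector store.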
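The orchestration responsibility, wiring the pieces "from the prompt to the data retrieval" together, can be sketched as a small chain: look up context, assemble a grounded prompt, and pass it to the model. Everything here is a stub: `retrieve_context` stands in for a vector-database lookup, and `call_model` stands in for a real Gemini API invocation.

```python
def retrieve_context(question: str) -> list[str]:
    # Stand-in for a vector-database lookup over internal docs;
    # the knowledge base below is illustrative.
    kb = {
        "deploy": "Deploys go through the CI pipeline 'release'.",
        "vacation": "Vacation requests are filed through Workday.",
    }
    return [text for key, text in kb.items() if key in question.lower()]

def build_prompt(question: str, context: list[str]) -> str:
    # Ground the model's answer in retrieved documents.
    ctx = "\n".join(f"- {c}" for c in context) or "- (no relevant documents found)"
    return (
        "Answer using only the context below.\n"
        f"Context:\n{ctx}\n"
        f"Question: {question}"
    )

def call_model(prompt: str) -> str:
    # Placeholder for a real LLM API call.
    return f"(model response to {len(prompt)}-char prompt)"

def answer(question: str) -> str:
    return call_model(build_prompt(question, retrieve_context(question)))

print(answer("How do I deploy a new service?"))
```

Keeping each stage behind its own function is what makes the pipeline testable and lets individual components (retriever, prompt template, model) be swapped independently.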
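The secure-access requirement above amounts to a role-based gate in front of every backend call. The sketch below shows the shape of such a check; the role names, system names, and grant table are all hypothetical, and a real deployment would validate an OAuth token and resolve roles through an identity provider rather than a hard-coded dict.

```python
# Illustrative role-to-system grant table.
ROLE_GRANTS = {
    "analyst": {"looker"},
    "engineer": {"looker", "confluence"},
}

def authorize(role: str, system: str) -> bool:
    # Unknown roles get no access by default (fail closed).
    return system in ROLE_GRANTS.get(role, set())

def fetch(role: str, system: str, query: str) -> str:
    # Gate every data access on the caller's role before touching
    # the backend; the return value is a placeholder for a real
    # authenticated API call.
    if not authorize(role, system):
        raise PermissionError(f"role {role!r} may not access {system!r}")
    return f"[{system}] results for {query!r}"

print(fetch("engineer", "confluence", "deployment runbook"))
```

Failing closed (deny unless explicitly granted) is the important design choice: an assistant that can reach Looker and Confluence on a user's behalf should never be able to exceed that user's own permissions.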
Skills Required
Python, API Integration, MLOps, Agile Methodology