Job Description
<p>We are seeking an experienced NLP & LLM Specialist to join our team.<br/><br/>The ideal candidate will have deep expertise in transformer-based models, including GPT, BERT, T5, RoBERTa, and similar architectures.<br/><br/>This role requires experience in fine-tuning these pre-trained models on domain-specific tasks, as well as crafting and optimizing prompts for natural language processing tasks such as text generation, summarization, question answering, classification, and translation.<br/><br/>The candidate should be proficient in Python and familiar with NLP libraries such as Hugging Face, SpaCy, and NLTK, with a solid understanding of model evaluation metrics.<br/><br/><b>Roles and Responsibilities :</b><br/><br/>- Model Expertise : Work with transformer models such as GPT, BERT, T5, and RoBERTa for a variety of NLP tasks, including text generation, summarization, classification, and translation.<br/><br/>- Model Fine-Tuning : Fine-tune pre-trained models on domain-specific datasets to improve performance on applications such as summarization, text generation, and question answering.<br/><br/>- Prompt Engineering : Craft clear, concise, and contextually relevant prompts to guide transformer-based models toward the desired outputs for specific tasks; iterate on prompts to optimize model performance.<br/><br/>- Instruction-Based Prompting : Implement instruction-based prompting to guide the model toward specific goals, ensuring that outputs are contextually accurate and aligned with task objectives.<br/><br/>- Zero-shot, Few-shot, and Many-shot Learning : Apply zero-shot, few-shot, and many-shot learning techniques to improve model performance without full retraining.<br/><br/>- Chain-of-Thought (CoT) Prompting : Implement CoT prompting to guide models through complex reasoning tasks, ensuring that outputs are logically structured and provide step-by-step explanations.<br/><br/>- Model Evaluation : Use metrics such as BLEU, ROUGE, and other relevant measures to assess and improve model performance across NLP tasks.<br/><br/>- Model Deployment : Support the deployment of trained models into production environments and integrate them into existing systems for real-time applications.<br/><br/>- Bias Awareness : Identify and mitigate issues related to bias, hallucinations, and knowledge cutoffs in LLMs, ensuring high-quality and reliable outputs.<br/><br/>- Collaboration : Work with cross-functional teams, including engineers, data scientists, and product managers, to deliver efficient and scalable NLP solutions.<br/><br/><b>Must-Have Skills :</b><br/><br/>- 7+ years of overall experience, including at least 2 years working with transformer-based models and NLP tasks, with a focus on text generation, summarization, question answering, and classification.<br/><br/>- 4+ years of relevant experience with transformer models such as GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), T5 (Text-to-Text Transfer Transformer), and RoBERTa.<br/><br/>- Familiarity with model architectures, attention mechanisms, and self-attention layers that enable LLMs to generate human-like text.<br/><br/>- Experience in fine-tuning pre-trained models on domain-specific datasets for tasks such as text generation, summarization, question answering, classification, and translation.<br/><br/>- Familiarity with concepts such as attention mechanisms, context windows, tokenization, and embedding layers.<br/><br/>- Awareness of biases, hallucinations, and knowledge cutoffs that can affect LLM performance and output quality.<br/><br/>- Expertise in crafting clear, concise, and contextually relevant prompts to guide LLMs toward desired outputs.<br/><br/>- Experience with instruction-based prompting.<br/><br/>- Use of zero-shot, few-shot, and many-shot learning techniques to maximize model performance without retraining.<br/><br/>- Experience iterating on prompts to refine outputs, test model performance, and ensure consistent results.<br/><br/>- Ability to craft prompt templates for repetitive tasks, ensuring prompts are adaptable to different contexts and inputs.<br/><br/>- Expertise in chain-of-thought (CoT) prompting to guide LLMs through complex reasoning tasks by encouraging step-by-step breakdowns.<br/><br/>- Proficiency in Python and experience with NLP libraries (e.g., Hugging Face, SpaCy, NLTK).<br/><br/>- Experience with transformer-based models (e.g., GPT, BERT, T5) for text generation tasks.<br/><br/>- Experience in training, fine-tuning, and deploying machine learning models in an NLP context.<br/><br/>- Understanding of model evaluation metrics (e.g., BLEU, ROUGE).<br/><br/><b>Qualifications :</b><br/><br/>- BE/B.Tech or equivalent degree in Computer Science or a related field.<br/><br/>- Excellent communication skills in English, both verbal and written.</p> (ref:hirist.tech)