Job Description
<p><b>RazerTech Consulting</b> has been mandated to hire a <b>Sr. Technical AI Data Engineer</b> for a US-based strategy consulting and investment banking advisory firm.</p>
<p><b>Location:</b> Hyderabad | Remote for the initial 3-4 months, later transitioning to a hybrid setup</p>
<p><b>Position Summary:</b></p>
<p>We are looking for a skilled and highly motivated Technical Data Engineer to join our fast-growing data team at a pivotal moment. In this role, you will have the opportunity to build and shape critical components of our data infrastructure - not entirely from scratch, but close to it.</p>
<p><b>Responsibilities:</b></p>
<ul>
<li>Gain a comprehensive understanding of current data sources, pipelines, and storage systems.</li>
<li>Design, build, and maintain scalable ETL/ELT pipelines to automate the movement of data from diverse sources to a centralized data warehouse.</li>
<li>Optimize data pipelines for performance, reliability, and maintainability.</li>
<li>Ensure data is validated, transformed, and stored in formats that meet analytical needs.</li>
<li>Analyze existing data sources (internal databases, third-party APIs, web-based systems) to assess structure, quality, and reliability.</li>
<li>Define and document data architecture, recommending improvements to support current and future data needs.</li>
<li>Collaborate with stakeholders to align technical solutions with business requirements.</li>
<li>Apply data wrangling techniques to prepare raw data for analysis, including handling missing values, deduplicating records, and standardizing schemas.</li>
<li>Ensure data integrity and implement logging, alerting, and monitoring for all data workflows.</li>
<li>Partner with data analysts and business stakeholders to support A/B testing frameworks and provide infrastructure for running experiments.</li>
<li>Enable self-service reporting and analysis by ensuring datasets are well documented and accessible.</li>
<li>Assist in the development of dashboards and reports using tools such as Tableau or Power BI.</li>
<li>Support the data team in presenting key metrics and insights in a visually compelling way.</li>
<li>Continuously identify and integrate new data sources - internal or external - to enhance business insights and competitive edge.</li>
<li>Deploy systems to proactively monitor data quality, pipeline health, and job failures.</li>
<li>Design and implement automated pipelines to ingest and process data from key sources.</li>
<li>Stay current with advancements in AI technologies, frameworks (e.g., TensorFlow, PyTorch), and large language models (LLMs) to inform architectural decisions and promote innovation.</li>
<li>Ensure data is clean, validated, and ready for use by analysts and stakeholders.</li>
<li>Document existing workflows and identify quick wins for optimization.</li>
<li>Take initiative and ownership of projects from concept to deployment, demonstrating a builder's mindset.</li>
</ul>
<p><b>Qualifications:</b></p>
<ul>
<li>A bachelor's degree in Engineering, Data Science, or a related field is required; a master's degree is highly preferred.</li>
<li>5+ years of experience in a relevant technical data engineering, machine learning, deep learning, or AI systems engineering role.</li>
<li>Proficiency in SQL and at least one programming language (e.g., Python).</li>
<li>Experience with data pipeline tools (e.g., Airflow, dbt, Apache Beam).</li>
<li>Familiarity with cloud platforms (AWS, GCP, Azure) and data warehouses (e.g., Snowflake, BigQuery, Redshift).</li>
<li>Experience with web-scraping frameworks such as BeautifulSoup, Scrapy, or Selenium.</li>
<li>Experience in training, fine-tuning, and deploying large language models (LLMs) or transformer-based architectures (e.g., BERT, GPT, LLaMA).</li>
<li>Knowledge of A/B testing frameworks and visualization tools (e.g., Tableau, Power BI).</li>
<li>Experience designing and implementing scalable ML pipelines using tools such as MLflow, Kubeflow, Airflow, and CI/CD pipelines for model deployment.</li>
<li>Strong problem-solving skills and the ability to work in a fast-paced environment.</li>
<li>Experience in an entrepreneurial or start-up environment preferred.</li>
<li>Demonstrated leadership capabilities; able to communicate and collaborate effectively with colleagues at all levels of the company.</li>
</ul>
<p>(ref:hirist.tech)</p>