
Data Engineer - Java/Scala/Python Job Opening in Bengaluru – TOOKITAKI TECHNOLOGIES PRIVATE LIMITED

Data Engineer Java/Scala/Python



Job description

About the job

Position Overview

Job Title: Software Development Engineer 2
Department: Technology
Location: Bangalore, India
Reporting To: Senior Research Manager - Data

Position Purpose

The Research Engineer - Data will play a pivotal role in advancing Tookitaki's AI-driven compliance and financial crime prevention platforms through applied research, experimentation, and data innovation. This role is ideal for professionals who thrive at the intersection of research and engineering, turning cutting-edge data science concepts into production-ready capabilities that strengthen Tookitaki's competitive edge in fraud prevention, AML compliance, and data intelligence.

The role exists to bridge research and engineering by:

- Designing and executing experiments on large, complex datasets.
- Prototyping new data-driven algorithms for financial crime detection and compliance automation.
- Collaborating across product, data science, and engineering teams to transition research outcomes into scalable, real-world solutions.
- Ensuring the robustness, fairness, and explainability of AI models within Tookitaki's compliance platform.

Key Responsibilities:

Applied Research & Prototyping:

- Conduct literature reviews and competitive analysis to identify innovative approaches to data processing, analytics, and model development.
- Build experimental frameworks to test hypotheses using real-world financial datasets.
- Prototype algorithms in areas such as anomaly detection, graph-based analytics, and natural language processing for compliance workflows.

Data Engineering for Research:

- Develop data ingestion, transformation, and exploration pipelines to support experimentation.
- Work with structured, semi-structured, and unstructured datasets at scale.
- Ensure reproducibility and traceability of experiments.

Algorithm Evaluation & Optimization:

- Evaluate research prototypes using statistical, ML, and domain-specific metrics.
- Optimize algorithms for accuracy, latency, and scalability.
- Conduct robustness, fairness, and bias evaluations on models.

Collaboration & Integration:

- Partner with data scientists to transition validated research outcomes into production-ready code.
- Work closely with product managers to align research priorities with business goals.
- Collaborate with cloud engineering teams to deploy research pipelines in hybrid environments.

Documentation & Knowledge Sharing:

- Document experimental designs, results, and lessons learned.
- Share best practices across engineering and data science teams to accelerate innovation.

Qualifications and Skills:

- Education (Required): Bachelor's degree in Computer Science, Data Science, Applied Mathematics, or a related field.
- Preferred: Master's or PhD in Machine Learning, Data Engineering, or a related research-intensive field.

Experience:

- Minimum 4-7 years in data-centric engineering or applied research roles.
- Proven track record of developing and validating algorithms for large-scale data processing or machine learning applications.
- Experience in financial services, compliance, or fraud detection is a strong plus.

Technical Expertise:

- Programming: Proficiency in Scala, Java, or Python.
- Data Processing: Experience with Spark, Hadoop, and Flink.
- ML/Research Frameworks: Hands-on experience with TensorFlow, PyTorch, or Scikit-learn.
- Databases: Experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, Elasticsearch).
- Cloud Platforms: Experience with AWS (preferred) or GCP for research and data pipelines.
- Tools: Familiarity with experiment tracking tools such as MLflow or Weights & Biases.
- Application Deployment: Strong experience with CI/CD practices and containerized deployments through Kubernetes, Docker, etc.
- Streaming Frameworks: Strong experience creating highly performant, scalable real-time streaming applications with Kafka at the core.
- Data Lakehouse: Experience with a modern data lakehouse platform/format such as Apache Hudi, Iceberg, or Paimon is a very strong plus.

Soft Skills:

- Strong analytical and problem-solving abilities.
- Clear, concise communication skills for cross-functional collaboration.
- Adaptability in fast-paced, evolving environments.
- Curiosity-driven, with a bias towards experimentation and iteration.

Key Competencies:

- Innovation Mindset: Ability to explore and test novel approaches that push boundaries in data analytics.
- Collaboration: Works effectively with researchers, engineers, and business stakeholders.
- Technical Depth: Strong grasp of advanced algorithms and data engineering principles.
- Problem Solving: Dives deep into logs, metrics, and code to identify problems and opportunities for performance tuning and optimization.
- Ownership: Drives research projects from concept to prototype to production.
- Adaptability: Thrives in ambiguity and rapidly changing priorities.
- Preferred: Certifications in AWS Big Data, Apache Spark, or similar technologies.
- Preferred: Experience in compliance or financial services domains.

Success Metrics:

- Research-to-Production Conversion: Percentage of validated research projects integrated into Tookitaki's platform.
- Model Performance Gains: Documented improvements in accuracy, speed, or robustness from research initiatives.
- Efficiency of Research Pipelines: Reduced time from ideation to prototype completion.
- Collaboration Impact: Positive feedback from cross-functional teams on research.

Benefits:

- Competitive Salary: Aligned with industry standards and experience.
- Professional Development: Access to training in big data, cloud computing, and data integration tools.
- Comprehensive Benefits: Health insurance and flexible working options.
- Growth Opportunities: Career progression within Tookitaki's rapidly expanding Services Delivery team.

(ref:hirist.tech)
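The description above calls for prototyping anomaly-detection algorithms on financial data. As a rough illustration of the kind of experiment a candidate might sketch, here is a toy z-score outlier detector over transaction amounts, in Python (one of the posting's named languages). This is a hypothetical example, not Tookitaki's actual method; a real prototype would use richer features and models such as isolation forests or graph-based analytics.

```python
import statistics

def zscore_anomalies(amounts, threshold=2.0):
    """Return indices of amounts whose z-score exceeds the threshold.

    A toy stand-in for an anomaly-detection prototype: flag values
    far from the mean in population-standard-deviation units. With
    small samples the maximum attainable z-score is bounded, so a
    modest threshold (here 2.0) is used.
    """
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all values identical: nothing can be an outlier
    return [i for i, x in enumerate(amounts)
            if abs(x - mean) / stdev > threshold]

# Mostly routine payment amounts, plus one outsized transfer.
txns = [120.0, 99.5, 130.2, 110.0, 125.4, 118.3, 9800.0, 105.7]
print(zscore_anomalies(txns))  # → [6], the outsized transfer
```

In a production setting, the same flag-and-review shape would sit behind a streaming pipeline (e.g. Kafka plus Spark or Flink, as the Technical Expertise section lists), with experiments tracked in a tool like MLflow for the reproducibility the role emphasizes.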


Required Skill Profession

Computer Occupations





