Job Description
Reinforcement Learning & Deep Learning for Robotic Arms
Location: Bharat Forge, Mundhwa, Pune
Job Type: Full-time
Experience Level: 3-6 Years
Industry: AI-Driven Robotics, Neural Network-Based Manipulation, Autonomous Dexterity Systems
Job Overview
We are looking for a highly technical Robotics Simulation Engineer specializing in reinforcement learning (RL) and deep learning for robotic arms.
The ideal candidate should have strong expertise in learning-based motion planning, grasp adaptation, real-time trajectory optimization, and AI-powered dexterous manipulation.
This role requires proficiency in ROS2 MoveIt!, Orocos, NVIDIA Isaac Sim, Omniverse, Groot, RL-based control models, policy optimization, and Sim2Real AI adaptation strategies to push the boundaries of robotic autonomy and self-learning dexterity.
Key Responsibilities
Reinforcement Learning for Robotic Manipulation:
• Develop Deep Reinforcement Learning (DRL)-based policies for robotic arm dexterity, training models on high-dimensional action spaces for grasping and force adaptation.
• Implement self-learning policy gradient optimization for improving robotic decision-making in dynamic task execution.
• Train RL-based inverse kinematics (IK) models, enabling real-time trajectory correction and adaptive grasping strategies.
Deep Learning-Based Motor Control & Dexterity Optimization:
• Work on neural network-driven control models, allowing robotic arms to self-learn adaptive grip forces, compliance, and motion refinement.
• Apply transformer-based movement prediction algorithms, ensuring fluid and intelligent robotic motion sequencing.
• Implement Sim2Real adaptation pipelines, transferring RL and deep learning policies trained in simulation to real-world robotic execution.
Advanced Motion Planning & Collision Detection:
• Optimize collision-aware trajectory generation algorithms using Hybrid-A*, RRT*, PRM, and TEB, ensuring real-time corrective motion strategies.
• Develop ROS2 MoveIt! and Orocos-integrated policy models, refining trajectory optimization for tool handling and dexterous motion control.
• Train robotic arms on force-modulated grasp strategies, ensuring precision pick-and-place execution in dynamic environments.
Multi-Sensor Fusion & Neural SLAM in Robotics Simulation:
• Model sensor fusion techniques, integrating LiDAR, depth cameras, IMUs, and force-torque sensors for neural SLAM-driven manipulation.
• Utilize vision-language models (VLM) for task-driven robotic autonomy, enhancing semantic reasoning for self-correcting grasping strategies.
AI Simulation, Testing, Benchmarking & Deployment:
• Conduct benchmark testing, validating RL-trained models for consistent real-world execution.
• Debug AI-driven robotic control inconsistencies, ensuring seamless synchronization across real-time hardware execution pipelines.
• Develop Sim2Real AI validation workflows, refining robotic arms' adaptability across varied manipulation tasks.
Research & Technical Documentation:
• Maintain technical documentation, detailing RL-based motion planning architectures, deep learning policy training methodologies, and neural SLAM integration techniques.
• Stay updated on cutting-edge advancements in AI-driven manipulation, RL-powered autonomous dexterity, and self-learning humanoid robotics.
Required Skill Profession: Engineers