
LLM Systems Performance Engineer (CUDA)

Phinity · Gurugram, India



Job description

We look forward to the day when AI can discover the next quantum AI accelerator, or make RL dramatically more compute-efficient.

We want to enable AI to bootstrap its own intelligence, to discover new computational paradigms.

Just as AlphaEvolve discovered a 23% speedup in Gemini's critical kernels and achieved a 32.5% improvement in FlashAttention, we're building the infrastructure that will enable every AI model to optimize its own compute stack.

Of course, to automate algorithm and hardware discovery, we need to break the data barrier.

CUDA is a low-resource language, and kernel optimization depends heavily on context and hardware details that models simply are not trained on.


Phinity is building the canonical training data infrastructure that will enable agentic hardware engineering and optimization, which will fuel algorithmic discovery.

We are building environments in which agents learn to write kernels from a spec, optimize them for specific hardware, and eventually discover new hardware breakthroughs.

Our customers include one of the largest frontier model labs.


We're seeking top engineers for a contractor role: engineers who can optimize hardware for model training and inference workloads, and who can bake their industry experience into a model.

This is a hybrid systems engineering/AI research role in which you will read and debug model reasoning traces and design the optimal CUDA problems that teach unreleased models to automate your industry work.

Please do not apply unless you have optimized kernels before.


Skill requirements:

Languages: CUDA, C++, Python

Frameworks: JAX/XLA, PyTorch, TensorFlow (at the C++ level), Pallas

Libraries: cuBLAS, cuDNN, CUTLASS, CUB, Thrust

Compiler Tools: NVCC, PTX assembly, MLIR/XLA understanding

Hardware Knowledge: SM architecture, tensor cores, memory hierarchies (HBM, L2, shared, registers)
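To make the hardware-knowledge bullet concrete, here is an illustrative sketch (not part of the posting itself) of the classic memory-hierarchy optimization: a shared-memory-tiled SGEMM that stages tiles from HBM into shared memory and accumulates in registers. The tile size, row-major square layout, and all names are assumptions for illustration only.

```cuda
// Illustrative sketch only: shared-memory-tiled C = A * B for square,
// row-major N x N matrices. TILE and all identifiers are assumed names.
#define TILE 16

__global__ void sgemm_tiled(const float* A, const float* B, float* C, int N) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;  // accumulate in a register, not memory

    for (int t = 0; t < N; t += TILE) {
        // Stage one tile each of A and B from global memory (HBM) into shared memory.
        As[threadIdx.y][threadIdx.x] = (row < N && t + threadIdx.x < N)
                                     ? A[row * N + t + threadIdx.x] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] = (t + threadIdx.y < N && col < N)
                                     ? B[(t + threadIdx.y) * N + col] : 0.0f;
        __syncthreads();

        // Each global load is now reused TILE times out of shared memory.
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    if (row < N && col < N)
        C[row * N + col] = acc;
}
```

Production work in this role goes much further (tensor-core MMA instructions, swizzled shared-memory layouts, cp.async pipelining), but this is the baseline pattern the listed hardware knowledge builds on.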


Apply if you have:

  • Achieved >10x speedups on production ML workloads
  • Written kernels that outperform vendor libraries
  • Optimized attention, GEMM, or convolution at the assembly level
  • Built custom fusions that beat XLA/Triton compiler output
  • Published papers or open-source kernels used in production
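As context for the "custom fusions" bullet, a minimal sketch (assumed names and layout, not from the posting) of the simplest possible fusion: folding a bias-add and a ReLU into one kernel so the intermediate tensor never round-trips through HBM between the two ops.

```cuda
// Illustrative sketch only: bias-add + ReLU fused into a single pass.
// One global read of x, one write of y; the intermediate stays in a register.
__global__ void bias_relu_fused(const float* __restrict__ x,
                                const float* __restrict__ bias,
                                float* __restrict__ y,
                                int rows, int cols) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < rows * cols) {
        float v = x[i] + bias[i % cols];  // broadcast bias along rows
        y[i] = v > 0.0f ? v : 0.0f;       // ReLU applied in-register
    }
}
```

Beating XLA/Triton compiler output means producing fusions like this (and far more elaborate ones) with better scheduling and memory access than the compiler emits.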




