Job description
 
Namaskaram!
Ajnalens is looking for an Embedded Software Engineer – Camera Software Developer to join our team in Thane.
The ideal candidate should have experience in camera software development and integration.
We’re also proud to share that Lenskart is now our strategic investor, a milestone that reflects the impact, potential, and purpose of the path we’re walking.
Read more here: LinkedIn Post
Minimum work experience required:
6+ years of experience in firmware development, with the skills below:
Low-Level Camera Driver Development:
Development and customization of V4L2 (Video4Linux2) drivers for Linux-based systems.
Integration of camera sensor drivers (e.g., OV5640, IMX219, GC2145) over MIPI CSI-2, DVP, or parallel interfaces.
Handling sensor initialization sequences, register-level control, and I2C communication.
Support for features like exposure, gain, white balance, flip/mirror, and frame size control.
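To give a flavour of this driver-level work, below is a minimal, illustrative sketch of a V4L2 sub-device driver for a hypothetical I2C sensor; the "cam" naming, the 0x10 exposure register and the control range are placeholders rather than values from any real sensor datasheet.

/* Hedged, illustrative sketch only: a stripped-down V4L2 sub-device driver
 * for a hypothetical I2C camera sensor. */
#include <linux/i2c.h>
#include <linux/module.h>
#include <media/v4l2-common.h>
#include <media/v4l2-ctrls.h>
#include <media/v4l2-subdev.h>

#define CAM_REG_EXPOSURE 0x10 /* hypothetical register address */

struct cam_sensor {
        struct v4l2_subdev sd;
        struct v4l2_ctrl_handler ctrls;
        struct i2c_client *client;
};

/* Push a changed control (here: exposure) to the sensor over I2C. */
static int cam_s_ctrl(struct v4l2_ctrl *ctrl)
{
        struct cam_sensor *cam =
                container_of(ctrl->handler, struct cam_sensor, ctrls);

        switch (ctrl->id) {
        case V4L2_CID_EXPOSURE:
                return i2c_smbus_write_byte_data(cam->client, CAM_REG_EXPOSURE,
                                                 ctrl->val & 0xff);
        default:
                return -EINVAL;
        }
}

static const struct v4l2_ctrl_ops cam_ctrl_ops = {
        .s_ctrl = cam_s_ctrl,
};

static const struct v4l2_subdev_ops cam_subdev_ops = { };

static int cam_probe(struct i2c_client *client)
{
        struct cam_sensor *cam;

        cam = devm_kzalloc(&client->dev, sizeof(*cam), GFP_KERNEL);
        if (!cam)
                return -ENOMEM;

        cam->client = client;
        v4l2_i2c_subdev_init(&cam->sd, client, &cam_subdev_ops);

        /* Expose an exposure control; gain, white balance and flip/mirror
         * follow the same pattern with further v4l2_ctrl_new_std() calls. */
        v4l2_ctrl_handler_init(&cam->ctrls, 1);
        v4l2_ctrl_new_std(&cam->ctrls, &cam_ctrl_ops,
                          V4L2_CID_EXPOSURE, 0, 255, 1, 16);
        cam->sd.ctrl_handler = &cam->ctrls;

        /* A full driver would also parse the DT endpoint, set up media pads,
         * register with v4l2_async_register_subdev(), and hook this probe
         * into an i2c_driver. */
        return cam->ctrls.error;
}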
Camera Interface & Communication:
Experience with I2C (register control) + MIPI CSI-2 (video stream).
Working with GPIO-based power/reset/enable pins.
Handling camera frame synchronization (VSYNC/HSYNC/PCLK) on parallel interfaces.
Using DMA and interrupt handling for image data transfer.
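As an illustration of the bring-up side, the hedged sketch below shows a typical power-up sequence (releasing GPIO power-down and reset lines, then reading back a chip ID over I2C); the pin names, delays and ID register address are assumptions for illustration only.

/* Hedged sketch: sensor power-up sequencing with GPIO lines followed by an
 * I2C chip-ID readback; pin names, delays and the 0x0a register are
 * illustrative, not taken from a real datasheet. */
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>
#include <linux/i2c.h>

static int cam_power_on_and_identify(struct i2c_client *client)
{
        struct gpio_desc *pwdn, *reset;
        int id;

        pwdn = devm_gpiod_get(&client->dev, "powerdown", GPIOD_OUT_HIGH);
        reset = devm_gpiod_get(&client->dev, "reset", GPIOD_OUT_HIGH);
        if (IS_ERR(pwdn) || IS_ERR(reset))
                return -ENODEV;

        gpiod_set_value_cansleep(pwdn, 0);      /* release power-down */
        usleep_range(5000, 6000);               /* datasheet-dependent delay */
        gpiod_set_value_cansleep(reset, 0);     /* release reset */
        usleep_range(20000, 25000);             /* wait for internal boot */

        id = i2c_smbus_read_byte_data(client, 0x0a); /* hypothetical ID reg */
        if (id < 0)
                return id;

        dev_info(&client->dev, "sensor chip id: 0x%02x\n", id);
        return 0;
}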
Camera Stack & Middleware Integration:
Linux media controller framework: configuring subdev, pipeline graph, and media entities.
Integration with the SoC's ISP (Image Signal Processor) or external ISPs.
Configuration of Device Tree nodes for camera sensors and pipelines.
Familiarity with porting to frameworks like GStreamer, OpenCV, or MMAL (on Raspberry Pi).
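On the middleware side, a minimal GStreamer application of the kind this role touches might look like the sketch below; the device path, caps and autovideosink choice are assumptions to be adapted per platform (an embedded target would more likely use waylandsink, kmssink or appsink). The same pipeline string can be prototyped with gst-launch-1.0 before being embedded in C.

/* Hedged sketch: open a V4L2 camera with GStreamer and render it. */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
        GstElement *pipeline;
        GstBus *bus;
        GstMessage *msg;

        gst_init(&argc, &argv);

        /* v4l2src feeds the sensor/ISP output into a display sink. */
        pipeline = gst_parse_launch(
                "v4l2src device=/dev/video0 ! "
                "video/x-raw,width=1280,height=720 ! videoconvert ! autovideosink",
                NULL);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        /* Block until an error or end-of-stream is posted on the bus. */
        bus = gst_element_get_bus(pipeline);
        msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                         GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
        if (msg)
                gst_message_unref(msg);

        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
}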
Testing & Validation:
Camera bring-up, signal validation (oscilloscope, logic analyzer)
Debugging with v4l2-ctl, media-ctl, dmesg, and I2C dumps
Image quality verification (color balance, resolution, noise levels, focus)
Tuning pipeline stages:
Black level correction
Lens shading correction (LSC)
Auto exposure (AE), auto white balance (AWB), auto focus (AF)
Demosaicing
Noise reduction (NR)
Color correction matrix (CCM)
Gamma correction
Edge enhancement
Use of tuning tools: Working with OEM-specific or third-party tuning tools (e.g., from Qualcomm, MediaTek, Sony).
Lab testing under various lighting conditions.
Adjusting ISP parameters to achieve desired IQ metrics: sharpness, noise level, color accuracy, dynamic range.
Scene-based tuning: Supporting different use cases like HDR, low-light, bokeh, or fast motion.
Frame capture tools: Capture raw/YUV image buffers for analysis.
IQ evaluation tools: Check color charts, resolution charts, noise charts, etc.
ISP performance tests: Latency, frame drop, throughput, and thermal profiling.
Long-run and stress testing: Continuous camera operation to test robustness.
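Much of the frame-capture and IQ work above relies on small grabbers along the lines of the illustrative sketch below, which captures a single YUYV frame from /dev/video0 through the V4L2 mmap streaming API and writes it out for offline analysis; the resolution and pixel format are assumptions. The resulting frame.yuv can then be fed to raw viewers or IQ analysis scripts.

/* Hedged sketch: capture one raw/YUV frame with the V4L2 mmap streaming API
 * and dump it to disk; device path, format and size are assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* Negotiate a YUYV format; real tooling would query the pipeline first. */
        struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
        fmt.fmt.pix.width = 1280;
        fmt.fmt.pix.height = 720;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

        /* Ask the driver for one mmap-able buffer and map it. */
        struct v4l2_requestbuffers req = { .count = 1,
                                           .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
                                           .memory = V4L2_MEMORY_MMAP };
        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

        struct v4l2_buffer buf = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
                                   .memory = V4L2_MEMORY_MMAP, .index = 0 };
        if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) { perror("VIDIOC_QUERYBUF"); return 1; }

        void *data = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, buf.m.offset);

        /* Queue the buffer, start streaming, and wait for one filled frame. */
        ioctl(fd, VIDIOC_QBUF, &buf);
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ioctl(fd, VIDIOC_STREAMON, &type);
        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) { perror("VIDIOC_DQBUF"); return 1; }

        /* Dump the raw payload so it can be inspected offline. */
        FILE *out = fopen("frame.yuv", "wb");
        fwrite(data, 1, buf.bytesused, out);
        fclose(out);

        ioctl(fd, VIDIOC_STREAMOFF, &type);
        munmap(data, buf.length);
        close(fd);
        return 0;
}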
System-Level Integration:
Power management and suspend/resume handling.
Thermal impact of continuous streaming and optimization.
Handling multi-camera setups and synchronization.
Integration with ISP tuning tools (e.g., for lens shading, color correction matrices).
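For the suspend/resume and power-management point above, the hedged sketch below shows the common pattern of wiring a sensor driver's power sequencing into runtime PM and taking the PM reference from the streaming callback; cam_power_on()/cam_power_off() are hypothetical board-specific helpers (regulators, clocks, reset GPIO).

/* Hedged sketch: runtime PM hooks for a camera sensor driver; the power
 * helpers are hypothetical, and cam_pm_ops would be assigned to the
 * i2c_driver's .driver.pm in the full driver. */
#include <linux/i2c.h>
#include <linux/pm_runtime.h>
#include <media/v4l2-subdev.h>

static int cam_power_on(struct i2c_client *client);   /* hypothetical helper */
static int cam_power_off(struct i2c_client *client);  /* hypothetical helper */

static int cam_runtime_suspend(struct device *dev)
{
        /* Drop lanes to LP-11, assert reset, disable clocks and rails. */
        return cam_power_off(to_i2c_client(dev));
}

static int cam_runtime_resume(struct device *dev)
{
        /* Re-enable rails and clocks, release reset, restore register state. */
        return cam_power_on(to_i2c_client(dev));
}

static const struct dev_pm_ops cam_pm_ops = {
        SET_RUNTIME_PM_OPS(cam_runtime_suspend, cam_runtime_resume, NULL)
        SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
};

/* Streaming start/stop is the natural place to take/drop the PM reference. */
static int cam_s_stream(struct v4l2_subdev *sd, int enable)
{
        struct i2c_client *client = v4l2_get_subdevdata(sd);

        if (enable)
                return pm_runtime_resume_and_get(&client->dev);

        pm_runtime_put(&client->dev);
        return 0;
}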
Advanced Integration (Optional):
AI camera pipelines: integration with NN accelerators or edge inference engines.
Low-latency streaming or frame capture for vision-based triggers (e.g., head pose, object detection).
Integration with SLAM / AR engines using synchronized camera + IMU data.
Compressed video streaming (H.264/H.265) over USB, Wi-Fi, or BLE.
Tools & Frameworks:
v4l2-ctl, media-ctl, i2cdetect, i2cget/i2cset, cheese, GStreamer, FFmpeg.
Logic analyzer or I²C sniffer for camera debugging.
Use of test patterns and virtual sensors for simulation.
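As a small illustration of what a tool like v4l2-ctl does under the hood, the sketch below enumerates a capture device's V4L2 controls, roughly what v4l2-ctl -l prints; the /dev/video0 path is an assumption.

/* Hedged sketch: walk every control a V4L2 driver exposes. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct v4l2_queryctrl qc;
        memset(&qc, 0, sizeof(qc));

        /* V4L2_CTRL_FLAG_NEXT_CTRL advances to the next exposed control. */
        qc.id = V4L2_CTRL_FLAG_NEXT_CTRL;
        while (ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0) {
                if (!(qc.flags & V4L2_CTRL_FLAG_DISABLED))
                        printf("%-32s min=%d max=%d step=%d default=%d\n",
                               (const char *)qc.name, qc.minimum, qc.maximum,
                               qc.step, qc.default_value);
                qc.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
        }

        close(fd);
        return 0;
}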
 
                    
                    
Required Skill Profession
Computer Occupations