
Rippling – Bengaluru, India

Senior Software Engineer Data Infrastructure



Job description

About Rippling

Rippling gives businesses one place to run HR, IT, and Finance.

It brings together all of the workforce systems that are normally scattered across a company, like payroll, expenses, benefits, and computers.

For the first time ever, you can manage and automate every part of the employee lifecycle in a single system.

Take onboarding, for example.

With Rippling, you can hire a new employee anywhere in the world and set up their payroll, corporate card, computer, benefits, and even third-party apps like Slack and Microsoft 365, all within 90 seconds.

Based in San Francisco, CA, Rippling has raised $1.4B+ from the world's top investors, including Kleiner Perkins, Founders Fund, Sequoia, Greenoaks, and Bedrock, and was named one of America's best startup employers by Forbes.

We prioritize candidate safety.

Please be aware that all official communication will only be sent from @Rippling.com addresses.



About the role:

Rippling is the system of record for employee data - a complete Employee Management System.

To solve this broad problem, a variety of applications and datasets need to come together as a graph connected through the employee record at its center.


We need a data platform that makes all forms of data accessible for different use cases, performs various transformations, and supports efficient querying across a variety of online and offline workloads.

You will work on building this distributed data platform: defining key APIs, designing for scale and high availability, and handling online, streaming, and batch scenarios.


At Rippling, we use Redis, MongoDB, and Postgres to serve APIs, Kafka for streaming, Apache Pinot and Apache Presto for OLAP, and S3 and Snowflake for our data lake and warehouse.

What You'll Do:

  • Work on distributed processing engines and distributed databases.
  • Create data platforms, data lakes, and data ingestion systems that work at scale.
  • Write core libraries (in Python and Go) to interact with various internal data stores.
  • Define and support internal SLAs for common data infrastructure.
  • Design, develop, code, and test software systems, improvements, products, and user-facing experiences.
  • Leverage big data technologies like Postgres, Kafka, Presto, Pinot, Flink, Airflow, MongoDB, Redis, and Spark.
  • Explore new and upcoming data technologies to support Rippling's exponential growth.
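As a rough illustration of the "core libraries" bullet above, a shared library might expose one uniform interface that products use regardless of which internal store backs it. This is a hypothetical sketch, not Rippling code: names like `DataStore` and `EmployeeRecords` are illustrative assumptions, and an in-memory stub stands in for a real database.

```python
# Hypothetical sketch only -- not Rippling's actual code.
# Shows a small uniform facade over interchangeable data stores.
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional


class DataStore(ABC):
    """Uniform key-value facade over a concrete backing store."""

    @abstractmethod
    def put(self, key: str, value: Any) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[Any]: ...


class InMemoryStore(DataStore):
    """Stand-in backend so the sketch runs without a real database."""

    def __init__(self) -> None:
        self._data: Dict[str, Any] = {}

    def put(self, key: str, value: Any) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[Any]:
        return self._data.get(key)


class EmployeeRecords:
    """Product-facing library that depends only on the DataStore
    interface, so the backing store can be swapped (e.g. Redis for
    caching, Postgres as the system of record) without touching callers."""

    def __init__(self, store: DataStore) -> None:
        self._store = store

    def save(self, employee_id: str, record: Dict[str, Any]) -> None:
        self._store.put(f"employee:{employee_id}", record)

    def load(self, employee_id: str) -> Optional[Dict[str, Any]]:
        return self._store.get(f"employee:{employee_id}")


records = EmployeeRecords(InMemoryStore())
records.save("e42", {"name": "Ada", "team": "Data Infrastructure"})
```

Coding against the interface rather than a concrete client is what lets one library serve "many products," as the role describes.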

Qualifications:

  • 6+ years of professional work experience.
  • Experience working in a fast-paced, dynamic environment.
  • Experience in building projects with good abstractions and architecture.
  • Comfortable at developing scalable and extendable core services used in many products.

If you don't meet all of the requirements listed here, we still encourage you to apply.

No job description is perfect, and we might find an even more suitable opportunity that matches your skills and experience.








Required Skill Profession: Computer Occupations


