Job Overview
Category
Computer Occupations
Ready to Apply?
Take the Next Step in Your Career
Join One Click AI and advance your career in Computer Occupations
Apply for This Position
Click the button above to apply on our website
Job Description
<p><b>Job Opening:</b> Principal/Senior Data Engineer, Product Development<br/><br/><b>Company:</b> Agivant Technologies<br/><br/><b>About Agivant Technologies:</b><br/><br/>Agivant is a new-age, AI-first digital and cloud engineering services company focused on building cutting-edge products and solutions.<br/><br/><b>Roles & Responsibilities:</b><br/><br/>- Design, develop, and optimize data ingestion pipelines for distributed systems.<br/><br/>- Implement parallel processing for large-scale, high-volume data.<br/><br/>- Work with multiple data sources: cloud storage (AWS S3, Azure, GCP), Snowflake, BigQuery, PostgreSQL, Kafka, and data lakehouses (Iceberg).<br/><br/>- Ensure high availability (HA), cross-region replication, and system reliability.<br/><br/>- Build monitoring and error-reporting mechanisms for data pipelines.<br/><br/>- Develop and maintain Spark connectors and integrate third-party systems (Kafka, Kafka Connect).<br/><br/>- Apply event-driven architecture and distributed-computing principles in product development.<br/><br/>- Collaborate with cross-functional teams in an Agile, CI/CD environment.<br/><br/>- Contribute to product innovation with modern cloud-native and data engineering practices.<br/><br/><b>Job Description:</b><br/><br/>- Understand and implement design specifications.<br/><br/>- Develop core product modules following best practices.<br/><br/>- Build ingestion pipelines for distributed systems using Golang, C++, and Java.<br/><br/>- Manage and optimize third-party integrations.<br/><br/><b>Requirements:</b><br/><br/>- Proven experience building distributed systems with parallel processing.<br/><br/>- Hands-on experience with Kafka, ZooKeeper, Spark, and stream processing.<br/><br/>- Proficiency in Kafka Connect, Kafka Streams, and Kafka security and customization.<br/><br/>- Strong knowledge of Spark connectors and event-driven architectures.<br/><br/>- Practical experience with Agile and CI/CD development workflows.<br/><br/><b>Nice to Have:</b><br/><br/>- gRPC protocol and multi-threading.<br/><br/>- Knowledge of ZooKeeper / etcd / Consul.<br/><br/>- Familiarity with distributed consensus algorithms (Paxos / Raft).<br/><br/>- Exposure to Docker and Kubernetes.<br/><br/><b>Expertise Required:</b><br/><br/>Java (all versions): 4+ years (Advanced)<br/><br/>C++: 3+ years (Advanced)<br/><br/>GoLang: 3+ years (Advanced)<br/><br/>PostgreSQL: 4+ years (Advanced)<br/><br/>Apache Kafka: 4+ years (Advanced)<br/><br/>Apache Spark: 3+ years (Intermediate)<br/><br/>Snowflake: 3+ years (Advanced)<br/><br/>BigQuery: 3+ years (Intermediate)<br/><br/>AWS S3 / Azure / Google Cloud: 4+ years<br/><br/>Distributed Computing: 2+ years<br/><br/>Agile Methodology: 3+ years (Intermediate)<br/><br/>CI/CD: 4+ years (Advanced)<br/></p> (ref: hirist.tech)
Don't Miss This Opportunity!
One Click AI is actively hiring for this Principal/Senior Data Engineer (Distributed Systems) position
Apply Now