
Data Pipeline Engineer (Taiyō.AI, India)
Job description

About Us

Taiyo.AI is the world's first infrastructure intelligence platform. We are building the largest universal, industry-standard database of opportunities (tenders, projects, news) and threats (economy, climate, geopolitics, finance, logistics, etc.) for real assets. Taiyo.AI has been instrumental in shaping how infrastructure companies (infra investors; engineering, procurement, and construction firms; and infra insurers) benchmark new project development opportunities, get a panoramic and dynamic view of external risks, predict prices, identify drivers, and mitigate supply-side disruptions.

We are seeking a candidate who is willing to learn and contribute to emerging technology and policy.



Responsibilities:

1. Work on data sourcing.
2. Use web scrapers (BeautifulSoup, Selenium, etc.).
3. Manage data normalization and standards validation.
4. Parametrize and automate the scrapers.
5. Develop and execute processes for monitoring data sanity and checking data availability and reliability.
6. Understand the business drivers and build insights through data.
7. Work with stakeholders at all levels to establish current and ongoing data support and reporting needs.
8. Ensure continuous data accuracy and recognize data discrepancies in systems that require immediate attention/escalation.
9. Become an expert in the company's data warehouse and other data storage tools, understanding the definition, context, and proper use of all attributes and metrics.
10. Create dashboards based on business requirements.
11. Work on distributed systems, scale, cloud, caching, CI/CD (continuous integration and deployment), distributed logging, data pipelines, and REST APIs.
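To give candidates a feel for the normalization and data-sanity work described above, here is a minimal standard-library Python sketch. The field names, date formats, and validation rules are illustrative assumptions for this posting, not Taiyo.AI's actual schema or pipeline.

```python
from datetime import datetime

# Hypothetical required fields for a scraped tender record
# (illustrative only, not Taiyo.AI's real schema).
REQUIRED_FIELDS = {"title", "country", "published"}

def _to_iso(text: str) -> str:
    """Normalize a few common date layouts to ISO 8601; '' if unparseable."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y"):
        try:
            return datetime.strptime(text.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return ""  # left empty so the sanity check below flags it

def normalize(raw: dict) -> dict:
    """Map a raw scraped record onto a standard shape."""
    return {
        "title": raw.get("title", "").strip(),
        "country": raw.get("country", "").strip().upper(),
        "published": _to_iso(raw.get("published", "")),
    }

def sanity_check(record: dict) -> list:
    """Return sorted problem labels; an empty list means the record is clean."""
    return sorted(f"missing:{f}" for f in REQUIRED_FIELDS if not record.get(f))

raw = {"title": " Metro Line 4 ", "country": "in", "published": "05/02/2024"}
clean = normalize(raw)
print(clean)                # {'title': 'Metro Line 4', 'country': 'IN', 'published': '2024-02-05'}
print(sanity_check(clean))  # [] when all required fields are present
```

In practice the sanity check would run continuously over each scraper's output and escalate non-empty problem lists, per responsibilities 5 and 8.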


Requirements:

- Bachelor's degree in Computer Science, Engineering, or a related field, or graduating in 2025.
- Proven experience in data engineering and ETL development.
- Proficiency in programming languages such as Python, Java, or Scala.
- Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka) and cloud platforms (e.g., AWS, Azure, GCP).
- Strong SQL skills and experience with database systems.
- Excellent problem-solving and analytical abilities.
- Effective communication and collaboration skills.


Benefits:

- Competitive salary

- Comprehensive benefits

- Flexible schedule

- Work from home option

- Opportunities for rapid growth and career advancement

- Exciting work culture with a focus on innovation and collaboration

- Sense of ownership and impact in a dynamic organization


Apply here: https://docs.google.com/forms/d/e/1FAIpQLSeHeVmkZ2h2KtQq2uisokZFrGNEVJT0UhWa8fBLZT27TyA08g/viewform?usp=sharing&ouid=109654441412058597015


Required Skill Profession: Computer Occupations


