
AI Data Engineer – Tata Consultancy Services (Kurnool / PAN India)

AI Data Engineer



Job description

Job Title: AI Data Engineer

Experience: 4 to 8 years

Location: PAN India



Key Responsibilities:

• Build and maintain data infrastructure: Design and construct scalable, reliable data pipelines, storage, and processing systems in the cloud.

• Ensure data quality: Clean, transform, and enrich raw data to create a reliable source of truth that AI models can use for accurate insights.

• Enable AI/ML: Make data readily available and optimized for consumption by AI and machine learning models.

• Manage cloud services: Work with cloud-specific services for storage, compute, and networking to build an efficient and scalable AI data environment.

• Implement security and governance: Apply security controls to protect data and ensure compliance within the data platforms.

• Monitor and optimize: Continuously monitor data workloads and optimize for performance and cost-effectiveness.
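The clean → transform → load responsibilities above can be sketched in miniature. This is an illustrative example only, not TCS's actual stack: the schema, field names, and data-quality rule are hypothetical, and SQLite stands in for a cloud warehouse such as Synapse, BigQuery, or Redshift.

```python
import sqlite3

# Hypothetical raw records as they might arrive from a source system.
raw_rows = [
    {"id": "1", "amount": " 120.50 ", "region": "south"},
    {"id": "2", "amount": "", "region": "SOUTH"},   # incomplete record
    {"id": "3", "amount": "87.00", "region": "South"},
]

def clean(rows):
    """Apply simple data-quality rules: drop incomplete records,
    cast types, and normalise the region field."""
    for r in rows:
        amount = r["amount"].strip()
        if not amount:
            continue  # reject records with a missing amount
        yield {"id": int(r["id"]), "amount": float(amount),
               "region": r["region"].strip().lower()}

# Load the cleaned records into a warehouse-like table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL, region TEXT)")
conn.executemany("INSERT INTO sales VALUES (:id, :amount, :region)",
                 list(clean(raw_rows)))

# Downstream AI/ML consumers then read a trustworthy, aggregated view.
total, = conn.execute("SELECT SUM(amount) FROM sales").fetchone()
print(total)  # 207.5
```

The same shape scales up: the `clean` step becomes a Spark or dbt transformation, and the SQLite table becomes a governed warehouse table that models consume.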

________________________________________

Essential skills and tools

• Cloud Platforms: Deep knowledge of the data services of at least one major cloud provider (e.g., AWS, Google Cloud).

• Programming Languages: Strong proficiency in Python, Spark, and SQL.

• Data Warehousing & Storage: Experience with technologies like Azure Synapse, Snowflake, GCP BigQuery, Databricks, AWS Redshift, and data lakes.

• Data Pipelines: Familiarity with tools like Azure Data Factory, AWS Glue, Apache Airflow, Kafka, and dbt for orchestrating data workflows.

• AI-specific tools: Knowledge of vector databases.

• Infrastructure as Code (IaC): Skills in tools like Bicep, Terraform or CloudFormation to automate infrastructure deployment.

• CI/CD: Understanding of continuous integration and continuous deployment pipelines.
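The orchestration tools listed above (Airflow, dbt, Data Factory) all share one core idea: tasks declared as a dependency graph and run in dependency order. A plain-Python sketch of that idea, using the standard library's `graphlib` rather than any real orchestrator, with hypothetical task names and bodies:

```python
from graphlib import TopologicalSorter

# Results dictionary shared by the (hypothetical) pipeline tasks.
results = {}

def extract():   results["raw"] = [3, 1, 2]
def transform(): results["clean"] = sorted(results["raw"])
def load():      results["loaded"] = len(results["clean"])

tasks = {"extract": extract, "transform": transform, "load": load}

# Edges read "task depends on ..." -- the same mental model as an
# Airflow DAG or a dbt ref() graph.
dag = {"transform": {"extract"}, "load": {"transform"}}

# static_order() yields each task only after its dependencies.
for name in TopologicalSorter(dag).static_order():
    tasks[name]()  # runs extract -> transform -> load

print(results["loaded"])  # 3
```

A real orchestrator adds scheduling, retries, and monitoring on top, but the DAG-of-tasks structure is the part interviewers typically probe.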

________________________________________

Experience with any of the following Cloud Native Data Services:

• Azure: Azure Data Factory, Microsoft Fabric, Azure Databricks, Azure Synapse Analytics, Data Lake Storage Gen2, Azure Dedicated SQL Pool (ADW), and Cosmos DB

• AWS: AWS Glue, AWS S3, AWS Athena, AWS Kinesis, AWS Redshift, and DynamoDB

• Google Cloud Platform (GCP): GCP Dataproc, GCP Dataflow, GCP BigQuery, GCP Cloud Storage, Cloud SQL, Pub/Sub, Bigtable, and Spanner.


Qualifications:

• Bachelor’s or master’s degree in engineering or technology.

• Proven experience in building and deploying ETL/ELT solutions in production.

• Strong understanding of data models, data pipelines, and cloud-native big data architectures.

• Excellent problem-solving, communication, and collaboration skills.


Required Skill Profession: Computer Occupations


