Job Overview

Category: Architecture & Construction
        
        
            Job Description
            
Candidates ready to join immediately can share their details via email for quick processing. Share your CCTC | ECTC | Notice Period | Location fast for immediate attention! 5+ years of experience required.
Roles and Responsibilities
Design, develop, and maintain scalable data pipelines using Spark (PySpark or Spark with Scala); see the illustrative sketch after this list.
Build data ingestion and transformation frameworks for structured and unstructured data sources.
Collaborate with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions.
Work with large volumes of data and ensure quality, integrity, and consistency.
Optimize data workflows for performance, scalability, and cost efficiency on cloud platforms (AWS, Azure, or GCP).
Implement data quality checks and automation for ETL/ELT pipelines.
Monitor and troubleshoot data issues in production and perform root cause analysis.
Document technical processes, system designs, and operational procedures.
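
As an illustration of the kind of pipeline work described above, here is a minimal PySpark sketch. The source path, column names, and output location are assumptions made for the example, not details from this posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingestion").getOrCreate()

# Ingest a structured source (path and schema handling are illustrative).
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-bucket/raw/orders/")
)

# Transform: standardise the key column and derive a partition date.
cleaned = (
    orders
    .withColumnRenamed("orderId", "order_id")
    .withColumn("order_date", F.to_date("order_timestamp"))
)

# Simple data quality check before publishing: reject rows with null keys.
null_keys = cleaned.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"Quality check failed: {null_keys} rows with null order_id")

# Write curated output partitioned by date for downstream consumers.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders_clean/"
)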
Must-Have Skills
3+ years of experience as a Data Engineer or in a similar role.
Hands-on experience with PySpark or Spark using Scala.
Strong knowledge of SQL for data querying and transformation.
Experience working with any cloud platform (AWS, Azure, or GCP).
Solid understanding of data warehousing concepts and big data architecture.
Experience with version control systems like Git.
Good-to-Have Skills
Experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar (an orchestration sketch follows this list).
Knowledge of Delta Lake, HDFS, or Kafka.
Familiarity with containerization tools (Docker/Kubernetes).
Exposure to CI/CD practices and DevOps principles.
Understanding of data governance, security, and compliance standards.
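
For the orchestration tools listed above, a minimal Apache Airflow sketch is shown below. The DAG id, schedule, and task callables are hypothetical placeholders, and the schedule argument assumes Airflow 2.4+ (older versions use schedule_interval).

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_ingestion():
    # Placeholder: submit the Spark ingestion job (e.g. via spark-submit or a Databricks job).
    pass

def run_quality_checks():
    # Placeholder: row-count and null-key validations on the curated output.
    pass

with DAG(
    dag_id="orders_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_orders", python_callable=run_ingestion)
    validate = PythonOperator(task_id="validate_orders", python_callable=run_quality_checks)

    ingest >> validate  # quality checks run only after ingestion succeeds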
            
         
            Don't Miss This Opportunity!
            
UST is actively hiring for this 5+ YOE Data Engineer - PySpark/Spark Scala with Cloud & SQL - Immediate Joiner - Bengaluru position.
            
            Apply Now