Job description
 
Description:
We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, Databricks, and GCP.
The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing.
They should also have excellent knowledge of PySpark or Spark and a proven ability to optimize Spark job performance.
 
Key Responsibilities:
- Design, build, and maintain scalable data pipelines across cloud platforms including AWS, Azure, Databricks, and GCP.
- Implement data ingestion and transformation processes to facilitate efficient data warehousing.
- Utilize cloud services to enhance data processing capabilities:
  - AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.
  - Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.
  - GCP: Dataflow, BigQuery, Dataproc, Cloud Functions, Bigtable, Pub/Sub, Data Fusion.
 
- Optimize Spark job performance to ensure high efficiency and reliability.
- Stay proactive in learning and implementing new technologies to improve data processing frameworks.
- Collaborate with cross-functional teams to deliver robust data solutions.
- Work on Spark Streaming for real-time data processing as necessary.
 
Qualifications:
- 4-7 years of experience in data engineering with a strong focus on cloud environments.
- Proficiency in PySpark or Spark is mandatory.
- Proven experience with data ingestion, transformation, and data warehousing.
- In-depth knowledge of and hands-on experience with cloud services (AWS, Azure, or GCP).
- Demonstrated ability in performance optimization of Spark jobs.
- Strong problem-solving skills and the ability to work independently as well as in a team.
- Cloud Certification (AWS, Azure, or GCP) is a plus.
- Familiarity with Spark Streaming is a bonus.
 
Mandatory skill sets:
Python, PySpark, SQL with (AWS or Azure or GCP)
Preferred skill sets:
Python, PySpark, SQL with (AWS or Azure or GCP)
Years of experience required:   
4-7 years  
Education qualification:   
BE/BTECH, ME/MTECH, MBA, MCA  
Education
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Business Administration, Master of Engineering
Degrees/Field of Study preferred:
Certifications   
Required Skills  
Python (Programming Language), Structured Query Language (SQL)
Optional Skills  
Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation (+19 more)
Desired Languages   
Travel Requirements  
Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date

Required Skill Profession
Other General