Job Overview
Category: Computer Occupations
Job Description
- Data Pipeline Development: Design, implement, and manage scalable ETL/ELT pipelines using AWS services and Databricks.
- Data Integration: Ingest and process structured, semi-structured, and unstructured data from multiple sources into AWS Data Lake or Databricks.
- Data Transformation: Develop advanced data processing workflows using PySpark, Databricks SQL, or Scala to enable analytics and reporting.
- Databricks Management: Configure and optimize Databricks clusters, notebooks, and jobs for performance and cost efficiency.
- AWS Architecture: Design and implement solutions leveraging AWS-native services like S3, Glue, Redshift, EMR, Lambda, Kinesis, and Athena.
- Collaboration: Work closely with Data Analysts, Data Scientists, and other Engineers to understand business requirements and deliver data-driven solutions.
- Performance Tuning: Optimize data pipelines, storage, and queries for performance, scalability, and reliability.
- Monitoring and Security: Ensure data pipelines are secure, robust, and monitored using CloudWatch, Datadog, or equivalent tools.
- Documentation: Maintain clear and concise documentation for data pipelines, workflows, and architecture.
Amazon is actively hiring for this AWS Data Engineer (PySpark) position.