We are looking for a skilled Data Engineer with strong hands-on experience in Python, AWS cloud services, and SQL, including stored procedures.
The selected candidate will design, build, and optimize scalable data pipelines and data solutions.
Key Responsibilities:
- Develop and maintain data pipelines and ETL workflows using Python (see the sketch after this list).
- Write and optimize SQL queries and stored procedures for data processing.
- Implement and manage data pipelines on AWS services such as S3, Lambda, Glue, and Redshift.
- Collaborate with analytics and business teams to deliver high-quality data solutions.
- Ensure data accuracy, reliability, and performance.
- Troubleshoot and optimize data workflows and performance bottlenecks.
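For context, the following is a minimal sketch of this kind of pipeline work, not a prescribed implementation: the bucket, keys, and column names are hypothetical, and writing Parquet with pandas assumes pyarrow (or fastparquet) is installed.

    import io

    import boto3
    import pandas as pd

    # Hypothetical bucket, keys, and column names, for illustration only.
    BUCKET = "example-data-lake"
    SOURCE_KEY = "raw/orders.csv"
    TARGET_KEY = "curated/orders_clean.parquet"

    def run_etl() -> None:
        """Extract a raw CSV from S3, clean it, and load it back as Parquet."""
        s3 = boto3.client("s3")

        # Extract: stream the raw CSV into a DataFrame.
        raw = s3.get_object(Bucket=BUCKET, Key=SOURCE_KEY)
        df = pd.read_csv(raw["Body"])

        # Transform: normalize column names, then drop incomplete rows.
        df.columns = [c.strip().lower() for c in df.columns]
        df = df.dropna(subset=["order_id", "amount"])

        # Load: write the cleaned data back to S3 as Parquet.
        buffer = io.BytesIO()
        df.to_parquet(buffer, index=False)
        s3.put_object(Bucket=BUCKET, Key=TARGET_KEY, Body=buffer.getvalue())

    if __name__ == "__main__":
        run_etl()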
Required Skills:
- Strong expertise in Python (Pandas, NumPy, PySpark, etc.).
- Excellent knowledge of SQL and experience with stored procedures (illustrated after this list).
- Hands-on experience with the AWS cloud platform.
- Experience in ETL development, data integration, and data warehousing.
- Good understanding of data architecture and data modeling.
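To illustrate the stored-procedure side of the role, here is a minimal sketch of invoking a Redshift stored procedure from Python via psycopg2; the connection details and the procedure name sp_refresh_daily_sales are hypothetical placeholders.

    import psycopg2

    # Hypothetical connection details, for illustration only.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="...",  # in practice, fetch from a secrets manager, not source code
    )

    # Redshift stored procedures are invoked with CALL; the connection's
    # context manager commits on success and rolls back on error.
    with conn, conn.cursor() as cur:
        cur.execute("CALL sp_refresh_daily_sales(%s)", ("2024-01-31",))

    conn.close()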
Preferred Skills:
- Exposure to Airflow, Snowflake, or Databricks (a minimal Airflow sketch follows this list).
- Knowledge of API integration, JSON/XML, and version control tools (e.g., Git).
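As a reference point for the Airflow exposure mentioned above, here is a minimal DAG sketch, assuming Airflow 2.4+ (which introduced the schedule argument); the DAG id and task callables are hypothetical.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical DAG id and task callables, for illustration only.
    def extract():
        print("pulling raw data")

    def load():
        print("writing curated data")

    with DAG(
        dag_id="daily_orders_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="load", python_callable=load)
        t1 >> t2  # run extract, then load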