Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Responsibilities:
● Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources.
● Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics.
● Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
● Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
● Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.
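For candidates unfamiliar with the role, the first responsibility (a PySpark ETL pipeline reading from and writing to Azure Data Lake Storage) might look like the following minimal sketch. All paths, container names, and column names (`amount`, `customer_id`, `ingest_date`) are hypothetical illustrations, not details taken from this posting.

```python
"""Minimal PySpark ETL sketch: extract raw CSV, clean it, load it as Parquet.

Storage paths and column names below are hypothetical examples.
"""

def clean_amount(raw):
    # Pure transform helper: parse a currency string like '1,234.50'
    # into a float; blank or missing values become None.
    if raw is None or not raw.strip():
        return None
    return float(raw.replace(",", ""))

def build_pipeline(spark, src, dest):
    # PySpark imports are kept local so the pure helper above has no
    # Spark dependency.
    from pyspark.sql import functions as F
    from pyspark.sql.types import DoubleType

    clean_udf = F.udf(clean_amount, DoubleType())
    df = (
        spark.read.option("header", True).csv(src)             # extract
        .withColumn("amount", clean_udf(F.col("amount")))      # transform
        .dropna(subset=["customer_id"])                        # basic quality gate
    )
    # Load: partitioned Parquet, a common layout for downstream analytics.
    df.write.mode("overwrite").partitionBy("ingest_date").parquet(dest)

# Example wiring (requires a live Spark session, e.g. on Azure Databricks):
#   from pyspark.sql import SparkSession
#   spark = SparkSession.builder.appName("etl-sketch").getOrCreate()
#   build_pipeline(spark,
#                  "abfss://raw@example.dfs.core.windows.net/sales/",
#                  "abfss://curated@example.dfs.core.windows.net/sales/")
```

The same structure (pure transform functions wired into a Spark job) keeps the cleaning logic unit-testable without a cluster.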
Requirements:
● Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
● Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
● Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
● Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
● Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.
Mandatory skill sets: Spark, PySpark, Azure
Preferred skill sets: Spark, PySpark, Azure
Years of experience required: 4 - 8
Education qualification: B.Tech / M.Tech / MBA / MCA
Education
Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering, Master of Engineering
Degrees/Field of Study preferred:
Certifications
Required Skills
PySpark, Python (Programming Language)
Optional Skills
Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software (+16 more)
Desired Languages
Travel Requirements
Not Specified
Available for Work Visa Sponsorship?
No
Government Clearance Required?
No
Job Posting End Date
Required Skill Profession
Computer Occupations