Data Engineer - SQL/Python | Pylon | Gurugram

Data Engineer - SQL/Python



Job description

Description:

Responsibilities:

- Set up workflows and orchestration processes to streamline data pipelines and ensure efficient data movement within the Azure ecosystem.
- Create and configure compute resources within Databricks, including All-Purpose, SQL, and Job clusters, to support data processing and analysis.
- Set up and manage Azure Data Lake Storage (ADLS) Gen 2 accounts and establish seamless integration with the Databricks workspace for data ingestion and processing (see the sketch after this listing).
- Create and manage Service Principals and Key Vaults to securely authenticate and authorize access to Azure resources.
- Use ETL (Extract, Transform, Load) techniques to design and implement data warehousing solutions and ensure compliance with data governance policies.
- Develop highly automated ETL scripts for data processing.
- Scale infrastructure resources based on workload requirements, optimizing performance and cost efficiency.
- Profile new data sources in different formats, including CSV, JSON, etc.
- Apply problem-solving skills to address complex business and technical challenges such as data quality issues, performance bottlenecks, and system failures.
- Demonstrate excellent soft skills and the ability to communicate and collaborate effectively with clients, stakeholders, and cross-functional teams.
- Implement Continuous Integration/Continuous Deployment (CI/CD) practices to automate the deployment and testing of data pipelines and infrastructure changes.
- Deliver tangible value rapidly, collaborating with diverse teams of varying backgrounds and disciplines.
- Codify best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases.
- Manage timely, appropriate communication and relationships with clients, partners, and other stakeholders.
- Create and manage periodic reporting of project execution status and other trackers in standard, accepted formats.

Requirements:

- Experience in the Data Engineering domain: 2+ years.
- Skills: SQL, Python, PySpark, Spark, distributed systems.
- Azure Databricks, Azure Data Factory, ADLS Gen 2 Blob Storage.
- Key Vaults, Azure DevOps.
- ETL, building data pipelines, data warehousing, data modelling, and governance.
- Agile practices, SDLC, and multi-year experience with the Azure Databricks ecosystem and PySpark.
- Ability to write clean, concise, and organized PySpark code.
- Ability to break a project down into executable steps, prepare a data flow diagram (DFD), and execute it.
- Ability to propose innovative data engineering solutions to achieve business objectives; quick on their feet, technically strong, and able to handle logically complex communication.
- Good knowledge of ADF and Docker.

Good to Have:

- Event Hubs, Logic Apps.
- Power BI.
- Competitive coding; knows most PySpark syntax by heart.

(ref:hirist.tech)
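The ADLS integration and ETL responsibilities above can be pictured with a minimal, hypothetical PySpark sketch (not part of the original posting): it authenticates to ADLS Gen 2 with a Service Principal whose secret is read from a Key Vault-backed Databricks secret scope, then runs a small CSV-to-Delta step. The storage account, container, tenant, secret scope, and path names are placeholders, not values from this role.

from pyspark.sql import SparkSession, functions as F

# On Databricks, `spark` and `dbutils` are pre-defined in notebooks;
# getOrCreate() simply returns the existing session.
spark = SparkSession.builder.getOrCreate()

storage_account = "examplestorageacct"   # placeholder ADLS Gen 2 account name
container = "raw"                        # placeholder container name
tenant_id = "<tenant-id>"                # placeholder Azure AD tenant
client_id = "<application-id>"           # placeholder Service Principal (app) id

# Secret pulled from a Key Vault-backed secret scope (placeholder scope/key names).
client_secret = dbutils.secrets.get(scope="kv-scope", key="sp-client-secret")

# OAuth (client credentials) settings for the abfss:// (ADLS Gen 2) driver.
suffix = f"{storage_account}.dfs.core.windows.net"
spark.conf.set(f"fs.azure.account.auth.type.{suffix}", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{suffix}",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{suffix}", client_id)
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{suffix}", client_secret)
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{suffix}",
               f"https://login.microsoftonline.com/{tenant_id}/oauth2/token")

# Minimal ETL: profile a new CSV source, clean it, and land it as Delta.
src = f"abfss://{container}@{suffix}/sales/incoming/*.csv"
dst = f"abfss://{container}@{suffix}/curated/sales"

df = spark.read.option("header", True).option("inferSchema", True).csv(src)
df.printSchema()  # quick profiling of the new source

curated = df.dropDuplicates().withColumn("ingested_at", F.current_timestamp())
curated.write.format("delta").mode("overwrite").save(dst)

In a real workspace the same settings are often applied at the cluster level or through Unity Catalog external locations rather than per notebook; the sketch only illustrates the moving parts named in the listing.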


Required Skill Profession

Computer Occupations


