
Data Engineer - Python/Spark Job Opening in Bengaluru at MNR Solutions

Data Engineer - Python/Spark



Job description

KEY RESPONSIBILITIES:

- Work on multiple concurrent development projects.
- Participate in requirements gathering, database design, testing and production deployments.
- Analyse system/application requirements and design innovative solutions.
- Translate business requirements into technical specifications: data streams, data integrations, data transformations, databases, data warehouses and data validation rules.
- Design and develop SQL Server database objects (tables, views, stored procedures, indexes, triggers, constraints, etc.).
- Analyse and design data flows, data lineage mappings and data models.
- Optimise, scale and reduce the cost of analytics data platforms for multiple clients.
- Adhere to data management processes and capabilities.
- Enforce compliance with data governance and data security.
- Perform performance tuning and query optimisation across all database objects.
- Perform unit and integration testing.
- Create technical documentation.
- Develop and assist team members.
- Provide training to team members.

REQUIREMENTS:

- Experience in data modelling and database design.
- Proficiency in the Python scripting language and experience handling data with Python.
- Experience with the Pandas library, including fetching data from APIs and bringing it into tabular format (see the Pandas sketch after this list).
- Working experience with Spark (see the PySpark sketch after the Pandas example).
- Working experience with Azure SQL, SQL Managed Instance (SQLMI), Synapse Analytics and Azure Data Factory (ADF).
- Demonstrated experience building and maintaining reliable, scalable data pipelines on the cloud (Azure) for big data platforms.
- Working experience with AWS or GCP is an added advantage.
- Working experience with Databricks and MS Fabric is good to have.
- Solid understanding of database design principles.
- Experience in data warehousing, including dimensional modelling and data lake concepts, with practical knowledge of data modelling and implementation.
- Experience analysing very large real-world datasets and a hands-on approach to data analytics is a plus.
- Experience with Test-Driven Development, Continuous Integration and Continuous Deployment.
- Strong analytical and technical skills.
- Good verbal and written communication.
- Flexibility to learn and work with new technologies.

(ref:hirist.tech)
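The listing asks for hands-on Pandas experience and the ability to pull data from an API into tabular form. The sketch below shows one minimal way to do that, assuming a hypothetical JSON endpoint (https://api.example.com/v1/orders) that returns a JSON array; the URL and function name are illustrative and not part of the posting.

```python
# Minimal sketch: fetch JSON records from a REST API and flatten them into a
# Pandas DataFrame. The endpoint URL below is a placeholder, not a real API.
import requests
import pandas as pd


def fetch_as_dataframe(url: str) -> pd.DataFrame:
    """Fetch JSON records from an API and return them as a tabular DataFrame."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()           # fail loudly on HTTP errors
    records = response.json()             # assumes the API returns a JSON array
    return pd.json_normalize(records)     # flatten nested JSON into columns


if __name__ == "__main__":
    df = fetch_as_dataframe("https://api.example.com/v1/orders")  # placeholder URL
    print(df.head())
```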
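For the Spark and cloud-pipeline requirements, the following is a small PySpark batch-step sketch: read raw CSV, apply basic deduplication and validation, and write partitioned Parquet. The paths, column names (order_id, order_ts, amount) and app name are placeholders; in an Azure setup the paths would typically be ADLS URIs (abfss://...) orchestrated through ADF or Databricks rather than local storage.

```python
# Minimal PySpark sketch of one batch pipeline step: read raw CSV, clean it,
# and write partitioned Parquet. All paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/data/raw/orders/")             # placeholder input path
)

cleaned = (
    raw.dropDuplicates(["order_id"])                      # basic data-quality step
       .withColumn("order_date", F.to_date("order_ts"))   # derive a partition column
       .filter(F.col("amount") > 0)                       # simple validation rule
)

(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("/data/curated/orders/")     # placeholder output path
)
```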


Required Skill Profession

Computer Occupations


