Job description
 
Organization Description: Jaipur Rugs is a social enterprise that connects rural craftsmanship with global markets through its luxurious handmade carpets.
It is a family-run business that offers an exclusive range of hand-knotted and hand-woven rugs made using a 2,500-year-old traditional art form.
The founder, Mr. Nand Kishore Chaudhary, created a unique business model that provides livelihoods to artisans at their doorsteps.
This changed the standard practice of working with artisanal communities through middlemen.
The company currently has a network of over 40,000 artisans spread across 600 rural villages in five states of India.
It has an end-to-end business model, from sourcing wool to exporting finished handmade rugs.
The modern and eclectic collection of rugs, made using the finest wool and silk, has won numerous global awards and is currently exported to more than 45 countries through the US sales arm, Jaipur Living, Inc., located in Atlanta, Georgia.
Job Description: The specific responsibilities of the position holder will include (but are not restricted to) the following:
The Data Engineer will be responsible for building and maintaining end-to-end data pipelines for analytics solutions, leveraging Microsoft Fabric to integrate business applications with the Jaipur Living BI platform.
Build and maintain end-to-end data pipelines across business applications and the BI platform
Develop and implement solutions on the MS Fabric platform to support a modern enterprise data platform by implementing a Kimball-style data lakehouse (fact and dimension tables; see the sketch after this list)
Design, develop, and implement analytics solutions on MS Fabric and Power BI
Develop ETL/ELT processes for large-scale data ingestion, ensuring data quality and pipeline performance
Transform and model data to meet business requirements, loading it into the Fabric data lakehouse (bronze, silver, and gold layers, as sketched after this list)
Implement monitoring and error-handling processes for reliable data integration
Optimize pipelines for cost and performance through query tuning, caching, and resource management
Automate repeatable data preparation tasks to reduce manual processes
Provide troubleshooting, analysis, and production support for existing solutions, including enhancements
Integrate GitHub for artifact management and versioning
Continuous improvement: identify opportunities, generate ideas, and implement solutions to improve processes and conditions
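For orientation, here is a minimal sketch of what the medallion flow (bronze to silver) and a Kimball-style gold layer described above might look like in a Fabric notebook. It is illustrative only: all table and column names (bronze.orders_raw, silver.orders, gold.dim_customer, gold.fact_orders) are assumptions, not part of this posting.

```python
# Illustrative only: a bronze -> silver -> gold medallion flow with a
# Kimball-style fact/dimension split. All table and column names are
# hypothetical, not from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Fabric notebooks

# Bronze -> silver: deduplicate, enforce types, and standardize dates.
bronze = spark.read.table("bronze.orders_raw")
silver = (
    bronze.dropDuplicates(["order_id"])
          .filter(F.col("order_id").isNotNull())
          .withColumn("order_date", F.to_date("order_ts"))
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)
silver.write.mode("overwrite").format("delta").saveAsTable("silver.orders")

# Silver -> gold: one dimension and one fact table (Kimball star schema).
dim_customer = (
    silver.select("customer_id", "customer_name", "region")
          .dropDuplicates(["customer_id"])
          .withColumn("customer_key", F.monotonically_increasing_id())
)
fact_orders = (
    silver.join(dim_customer.select("customer_id", "customer_key"), "customer_id")
          .select("customer_key", "order_id", "order_date", "amount")
)
dim_customer.write.mode("overwrite").format("delta").saveAsTable("gold.dim_customer")
fact_orders.write.mode("overwrite").format("delta").saveAsTable("gold.fact_orders")
```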
Skills
Minimum of 5 years of hands-on experience building data pipelines, data models, and supporting enterprise data warehouse solutions
Hands-on experience with Microsoft Fabric for data pipelines, Dataflows Gen2, activity optimization, data modeling, and analytics
Proficiency with Microsoft Fabric data services, including Azure Synapse Analytics and Dataverse
Strong SQL, data modeling, and ETL/ELT development skills
Experience working within a scaled agile framework for data engineering product delivery
4+ years of experience with ETL and cloud data technologies, including Azure Data Lake, Azure Data Factory, Azure Synapse, Azure Functions, Azure Data Explorer, Power BI, and Google BigQuery (or equivalent platforms)
4+ years of experience in big data scripting languages such as Python or SQL
4+ years of experience in one scripting language for data retrieval and manipulation (e.g., SQL; see the query sketch after this list)
Strong grasp of SQL, data modeling, data warehousing, and OLAP concepts
Experience with Azure Data Lake Storage, Azure Synapse Analytics, Fabric Spark Pools, Fabric Notebooks, Python, DevOps, and CI/CD
Familiarity with data lake medallion architecture and unified data models for a BI platform
Familiarity with Scaled Agile, DevOps, Scrum, and ITIL concepts
Retail experience is a plus
Experience with WMS, ERP, CRM, PIM, and Google Analytics is a plus
Microsoft Dynamics ERP experience is a plus
Experience with Fabric AI services is a plus
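As a small illustration of the SQL retrieval-and-manipulation skills listed above, the following Fabric-notebook sketch runs an OLAP-style aggregation over the hypothetical star schema from the earlier example; the tables and columns are assumptions, not part of this posting.

```python
# Illustrative only: OLAP-style retrieval over the hypothetical star schema
# sketched earlier, run through Spark SQL in a Fabric notebook.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Monthly sales by region, joining the fact table to its customer dimension.
monthly_sales = spark.sql("""
    SELECT d.region,
           date_trunc('month', f.order_date) AS month,
           SUM(f.amount)                     AS total_sales
    FROM gold.fact_orders AS f
    JOIN gold.dim_customer AS d
      ON f.customer_key = d.customer_key
    GROUP BY d.region, date_trunc('month', f.order_date)
    ORDER BY month, d.region
""")
monthly_sales.show()
```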
 
Required Skill Profession
Computer Occupations