This article proposes a possible target environment on Azure Synapse for the Microsoft Fabric Data Vault and Microsoft Fabric Data Vault and Mart generators.
Installation and configuration of the target environment are not part of biGENIUS support.
Unfortunately, we cannot provide any help beyond the example in this article.
Many other configurations and installations are possible for a Spark target environment.
Below is a possible target environment setup on Azure Synapse for a Microsoft Fabric generator.
The Target Platform property should be set to the Azure Synapse value:
Environment setup
The Azure Synapse target environment requires at least the following Azure resources in your Azure Subscription:
- A Resource Group: some help is available here
- Inside the Resource Group: two Storage Accounts, an Apache Spark Pool, and a Synapse Workspace, as in the example below
In the following target environment example, we use these resources (a provisioning sketch follows the list):
- A Resource Group named bg-synapse
- A Storage Account for our Source Data named bgsynapselandingzone1
- A Storage Account for our Target Data Lake named bgsynapsedatalake1
- An Apache Spark Pool named bgaasspark33v2
- A Synapse Workspace named bgsynapseworkspace1
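For reference, the Resource Group and the two Storage Accounts can also be provisioned programmatically. Below is a minimal sketch using the Azure SDK for Python (azure-identity, azure-mgmt-resource, azure-mgmt-storage); the subscription ID and region are assumptions, and the Synapse Workspace and Spark Pool are typically easier to create in the Azure Portal.

```python
# Minimal provisioning sketch with the Azure SDK for Python.
# Assumptions: subscription ID and region are placeholders; names match the example above.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<your-subscription-id>"  # assumption: replace with your subscription
location = "westeurope"                     # assumption: pick your preferred region
credential = DefaultAzureCredential()

# Create the Resource Group that holds all other resources.
resource_client = ResourceManagementClient(credential, subscription_id)
resource_client.resource_groups.create_or_update("bg-synapse", {"location": location})

# Create the source and target Storage Accounts (hierarchical namespace
# enabled, i.e. Data Lake Storage Gen2).
storage_client = StorageManagementClient(credential, subscription_id)
for account in ("bgsynapselandingzone1", "bgsynapsedatalake1"):
    storage_client.storage_accounts.begin_create(
        "bg-synapse",
        account,
        {
            "location": location,
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
            "is_hns_enabled": True,  # required for a Data Lake
        },
    ).result()
```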
Tools to install
Please install the following tool:
- Azure Storage Explorer: available here
Target Storage Account
We have chosen to create a folder named docu-datalake in our Target Storage Account:
- Open Azure Storage Explorer
- Connect to your Subscription
- Open the Target Storage Account
- Create a folder
For this example, we have one target folder for our Data Lake (a scripted alternative is sketched below):
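If you prefer scripting over Azure Storage Explorer, the same folder can be created with the azure-storage-file-datalake package. Note that Storage Explorer shows top-level containers (file systems) as folders. A minimal sketch, assuming the target account from the example and a credential with sufficient data-plane rights:

```python
# Create the docu-datalake container (file system) in the target Storage Account.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://bgsynapsedatalake1.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
service.create_file_system("docu-datalake")
```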
Source Data
There are two ways to provide source data to the Microsoft Fabric Data Vault or Data Vault and Mart generators:
- From Parquet and Delta files by using the Microsoft Fabric Stage Files generator as a Linked Project
- From any database accessed through JDBC by using the Microsoft Fabric Stage JDBC generator as a Linked Project
Parquet and Delta Files
If your source data are stored in Parquet or Delta files, please:
- Create a first Project with the Microsoft Fabric Stage Files generator
- In this first Project, discover the Parquet and Delta files, create the Stage Model Object, generate, deploy, and load the data into a Lakehouse (a conceptual PySpark sketch follows this list).
- Create a second Project with the Microsoft Fabric Data Vault or Data Vault and Mart generator.
- In this second Project, use the first Project Stage Model Object as a source by using the Linked Project feature.
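The stage logic itself is produced by the Microsoft Fabric Stage Files generator; purely for illustration, here is a minimal PySpark sketch of what such a staged load conceptually does. The container, file paths, and table names are assumptions, not generated code.

```python
# Conceptual staged load: read Parquet/Delta sources and persist them as Delta
# tables in the Lakehouse. Paths and names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
landing = "abfss://landing@bgsynapselandingzone1.dfs.core.windows.net"  # assumed container

customers = spark.read.parquet(f"{landing}/customers.parquet")   # a Parquet source
orders = spark.read.format("delta").load(f"{landing}/orders")    # a Delta source

spark.sql("CREATE DATABASE IF NOT EXISTS stage")                 # assumed stage schema
customers.write.format("delta").mode("overwrite").saveAsTable("stage.customers")
orders.write.format("delta").mode("overwrite").saveAsTable("stage.orders")
```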
Database
If your source data are stored in a database such as Microsoft SQL Server or PostgreSQL (or any database you can access through JDBC), please:
- Create a first Project with the Microsoft Fabric Stage JDBC generator
- In this first Project, discover the database tables, create the Stage Model Object, generate, deploy, and load the data into a Lakehouse (a conceptual PySpark sketch follows this list).
- Create a second Project with the Microsoft Fabric Data Vault or Microsoft Fabric Data Vault and Mart generator
- In this second Project, use the first Project Stage Model Object as a source by using the Linked Project feature.
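Again, the actual stage logic is generated by the Microsoft Fabric Stage JDBC generator; the following PySpark sketch only illustrates a plain JDBC read. The server, database, table, and credentials are assumptions.

```python
# Conceptual JDBC staging: read a table over JDBC and persist it as a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=sources")
    .option("dbtable", "dbo.Customers")
    .option("user", "stage_reader")   # assumption: a read-only login
    .option("password", "<secret>")   # better resolved from Azure Key Vault
    .load()
)

spark.sql("CREATE DATABASE IF NOT EXISTS stage")  # assumed stage schema
df.write.format("delta").mode("overwrite").saveAsTable("stage.customers")
```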
Upload Artifacts in Azure Synapse
Please now upload the generated Artifacts from the biGENIUS-X application to the Azure Synapse Workspace.
Please replace the placeholders before uploading the Artifacts; a minimal replacement sketch is shown below.
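A small script can help with the replacement. The following sketch walks through the generated artifacts and substitutes placeholder tokens; the token names and values here are purely hypothetical, so use the placeholders actually present in your generated files.

```python
# Replace hypothetical placeholder tokens in all generated notebooks.
from pathlib import Path

placeholders = {
    "<<STORAGE_ACCOUNT>>": "bgsynapsedatalake1",  # hypothetical token/value
    "<<CONTAINER>>": "docu-datalake",             # hypothetical token/value
}

for notebook in Path("Artifacts").rglob("*.ipynb"):  # assumed local artifacts folder
    text = notebook.read_text(encoding="utf-8")
    for token, value in placeholders.items():
        text = text.replace(token, value)
    notebook.write_text(text, encoding="utf-8")
```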
- Click on the Synapse Workspace in Azure:
- Then click on the Workspace web URL:
- Azure Synapse Analytics is opened:
- Click on the Develop menu on the left-hand side:
- If Synapse is in Live mode, change it to the Azure DevOps Git mode:
- Create a new branch named demo_synapse:
- We have chosen to create a folder named Artifacts:
- Import all the generated artifacts from the Jupyter and Helpers folders:
- In addition, import the following helper:
In the file 500_Deploy_and_Load_DataVault_Fabric.ipynb, replace the names of XXX_Deployment.ipynb, XXX_SimpleLoadexecution.ipynb, XXX_MultithreadingLoadExecution.ipynb, and XXX_SelectResults.ipynb with the names of your Helper files.
- Commit all in the demo_synapse branch:
- Make a pull request from your branch demo_synapse to the collaboration branch:
- Open the collaboration branch and click on the Publish button:
You are now ready to deploy these artifacts and then load the data, depending on the generator you are using, with the following possible load controls:
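For instance, once the artifacts are published, the deployment and a load can be triggered from another Synapse notebook. A minimal sketch using mssparkutils, assuming the XXX_ prefix stands for your actual Helper file names:

```python
# Run the deployment notebook, then one of the load controls.
from notebookutils import mssparkutils

mssparkutils.notebook.run("XXX_Deployment", 3600)           # deploy the generated objects
mssparkutils.notebook.run("XXX_SimpleLoadexecution", 3600)  # e.g. the simple load
```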