
Microsoft Fabric - Target environment

This article proposes a possible target environment for Microsoft Fabric generators.

Installation and configuration of the target environment are not part of biGENIUS support; unfortunately, we cannot provide any help beyond the example in this article.

Many other configurations and installations are possible for a Microsoft Fabric target environment.

Below is a possible target environment setup for a Microsoft Fabric generator.

The Target Platform property should be set to the Fabric value:

Set up the environment

You should have access to a Microsoft Fabric target environment:

To activate the Git integration later, you should also have access to an Azure DevOps Git repository.

Create a Workspace

Click on the Workspace option in the left menu, then on + New workspace:

In this example, we will create a Workspace named bgfabricdvdm.

Fill in the name and click on the Apply button:

Now click on the Workspace settings menu to configure the Git integration:

Choose the Git integration option in the left menu:

Connect to your Azure DevOps Git repository, then click on the Connect and sync button:
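
As an alternative to the portal UI, the Workspace and its Git connection can also be created through the Fabric REST API. The following is a minimal sketch, assuming Python with the requests and azure-identity packages; all <...> values are placeholders for your own Azure DevOps details:

    # Minimal sketch: create the Workspace and connect it to Azure DevOps
    # through the Fabric REST API. All <...> values are placeholders.
    import requests
    from azure.identity import InteractiveBrowserCredential

    # Acquire a Microsoft Entra ID token with the Fabric API scope.
    token = InteractiveBrowserCredential().get_token(
        "https://api.fabric.microsoft.com/.default"
    ).token
    headers = {"Authorization": f"Bearer {token}"}
    base = "https://api.fabric.microsoft.com/v1"

    # Create the Workspace used in this example.
    resp = requests.post(f"{base}/workspaces", headers=headers,
                         json={"displayName": "bgfabricdvdm"})
    resp.raise_for_status()
    workspace_id = resp.json()["id"]

    # Connect the Workspace to the Azure DevOps Git repository.
    resp = requests.post(f"{base}/workspaces/{workspace_id}/git/connect",
                         headers=headers,
                         json={"gitProviderDetails": {
                             "gitProviderType": "AzureDevOps",
                             "organizationName": "<your-organization>",
                             "projectName": "<your-project>",
                             "repositoryName": "<your-repository>",
                             "branchName": "main",
                             "directoryName": "/",
                         }})
    resp.raise_for_status()
    # Depending on the repository state, the initial sync (Connect and sync)
    # may still have to be completed from the Workspace settings.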

Source data

There are three ways to provide source data to a Microsoft Fabric generator:

  • From Parquet and Delta files that exist in a OneLake catalog by using a direct Discovery
  • From Parquet and Delta files by using the Microsoft Fabric Stage Files generator as a Linked Project
  • From any database accessed through JDBC by using the Microsoft Fabric Stage JDBC generator as a Linked Project

Parquet and Delta Files

If your source data are stored in Parquet or Delta files, please:

  • Create a first Project with the Microsoft Fabric Stage Files generator.
  • In this first Project, discover the Parquet and Delta files, create the Stage Model Objects, generate, deploy, and load the data into a Lakehouse.
  • Create a second Project with the Microsoft Fabric Data Vault or Microsoft Fabric Data Vault and Mart generator.
  • In this second Project, use the first Project's Stage Model Objects as a source through the Linked Project feature.

 

To be accessible, the Parquet and/or Delta source files must be uploaded into a Lakehouse.

Click on the New button and choose Lakehouse:


In this example, we will name the Lakehouse docu_bglakehouse.
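
If you prefer scripting, the Lakehouse can also be created through the Fabric REST API. A minimal sketch, reusing the token acquisition shown in the Workspace example above:

    # Minimal sketch: create the Lakehouse through the Fabric REST API.
    import requests
    from azure.identity import InteractiveBrowserCredential

    token = InteractiveBrowserCredential().get_token(
        "https://api.fabric.microsoft.com/.default"
    ).token
    workspace_id = "<your-workspace-id>"  # id returned when creating the Workspace

    resp = requests.post(
        f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/lakehouses",
        headers={"Authorization": f"Bearer {token}"},
        json={"displayName": "docu_bglakehouse"},
    )
    resp.raise_for_status()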

In the Files folder, click on the context menu and choose Upload > Upload folder:
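
The upload can also be scripted against the OneLake ADLS Gen2 endpoint instead of using the Upload folder dialog. A minimal sketch, assuming the azure-storage-file-datalake package and a local ./source_files folder as a placeholder:

    # Minimal sketch: upload a local folder into the Lakehouse Files area
    # over the OneLake ADLS Gen2 endpoint. The ./source_files path is a
    # placeholder for your local source data.
    import os
    from azure.identity import InteractiveBrowserCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(
        account_url="https://onelake.dfs.fabric.microsoft.com",
        credential=InteractiveBrowserCredential(),
    )
    # In OneLake, the Workspace plays the role of the filesystem (container).
    fs = service.get_file_system_client("bgfabricdvdm")

    local_root = "./source_files"
    for root, _, files in os.walk(local_root):
        for name in files:
            local_path = os.path.join(root, name)
            relative = os.path.relpath(local_path, local_root).replace(os.sep, "/")
            remote_path = f"docu_bglakehouse.Lakehouse/Files/{relative}"
            with open(local_path, "rb") as data:
                fs.get_file_client(remote_path).upload_data(data, overwrite=True)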

For this example, we have:

  • six Parquet source files, organized into six folders:

It is mandatory to have one folder per Parquet source file, and the folder name should be identical to the Parquet file name.

  • one Delta source file:

One folder per Delta source file is mandatory, and the folder name should be identical to the Source Model object name.
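
To illustrate how these folder conventions are resolved at load time, here is a hedged sketch of the corresponding Spark reads in a Fabric notebook with docu_bglakehouse attached as the default Lakehouse; the Customer and SalesOrder names are hypothetical:

    # Hedged sketch: how the folder conventions map to Spark reads in a
    # Fabric notebook (the spark session is predefined there). Customer and
    # SalesOrder are hypothetical source names.

    # Parquet: one folder per file, folder name identical to the file name.
    df_customer = spark.read.parquet("Files/Customer/Customer.parquet")

    # Delta: one folder per source, named after the Source Model object;
    # the folder itself is the Delta table.
    df_salesorder = spark.read.format("delta").load("Files/SalesOrder")

    df_customer.printSchema()
    df_salesorder.show(5)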

Database

If your source data are stored in a database such as Microsoft SQL Server or PostgreSQL (or any database accessible through JDBC), please:

  • Create a first Project with the Microsoft Fabric Stage JDBC generator.
  • In this first Project, discover the database tables, create the Stage Model Objects, generate, deploy, and load the data into a Lakehouse.
  • Create a second Project with the Microsoft Fabric Data Vault or Microsoft Fabric Data Vault and Mart generator.
  • In this second Project, use the first Project's Stage Model Objects as a source through the Linked Project feature.

 

The source data come from a JDBC source.

In this example, we will use a Microsoft SQL Server database stored in Azure in a dedicated resource group:

The Azure database, AdventureWorks2019, contains the data from the SQL Server sample database of the same name.

To allow Microsoft Fabric to access the Microsoft SQL Server, check the box Allow Azure services and resources to access this server in the server's Networking configuration:
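
Once networking is configured, a Fabric Spark notebook can reach the database over JDBC. A minimal sketch with placeholder server name and credentials; Sales.Customer is just an example AdventureWorks2019 table:

    # Minimal sketch: read a SQL Server table over JDBC from a Fabric
    # notebook. Server name and credentials are placeholders; the SQL Server
    # JDBC driver is assumed to be available in the Fabric Spark runtime.
    jdbc_url = (
        "jdbc:sqlserver://<your-server>.database.windows.net:1433;"
        "database=AdventureWorks2019;encrypt=true"
    )

    df = (
        spark.read.format("jdbc")
        .option("url", jdbc_url)
        .option("dbtable", "Sales.Customer")  # example AdventureWorks2019 table
        .option("user", "<sql-user>")
        .option("password", "<sql-password>")
        .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
        .load()
    )
    df.show(5)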

Upload Artifacts in Microsoft Fabric

Please now upload the generated Artifacts from the biGENIUS-X application to the Microsoft Fabric Workspace.

Please replace the placeholders before uploading the artifacts.

  • Click on the Workspace we just created in the left menu:
  • Click on the Import button and choose Notebook, then From this computer:
  • Click on the Upload button:
  • Select all the generated artifacts from the Jupyter, Helpers, and LoadControl folders:
  • In addition, import the following helper:

In the file 500_Deploy_and_Load_DataVault_Fabric.ipynb, replace the XXX_Deployment, XXX_SimpleLoadExecution, XXX_MultithreadingLoadExecution, and XXX_SelectResults names with the names of your Helper files.
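
The exact cell contents depend on what was generated; assuming the orchestration notebook invokes the Helpers with mssparkutils.notebook.run, the adapted calls might look like this, with MyProject_* standing in for your actual Helper names:

    # Hedged sketch of the adapted cells, assuming the Helpers are invoked
    # with mssparkutils.notebook.run; the MyProject_* names stand in for the
    # actual names of your Helper files.
    from notebookutils import mssparkutils

    mssparkutils.notebook.run("MyProject_Deployment")
    mssparkutils.notebook.run("MyProject_SimpleLoadExecution")
    # or, for parallelized loading:
    # mssparkutils.notebook.run("MyProject_MultithreadingLoadExecution")
    mssparkutils.notebook.run("MyProject_SelectResults")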

  • Commit all the changes into your git repository by:
    • Clicking on the Source control menu:
    • Selecting all the changes:
    • Clicking on the Commit button:

 

If you have already discovered your source data, modeled your project, and generated the artifacts, you are now ready to replace the placeholders in the generated artifacts, deploy them, and then load the data. Depending on the Generator you are using and your load control environment, the following load controls are possible:
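
For instance, in a notebook-based load control, the difference between the simple and the multithreading load might look like the following hedged sketch; all notebook names are hypothetical placeholders for your generated Helpers:

    # Hedged illustration of two load-control styles in a Fabric notebook;
    # all notebook names are hypothetical placeholders.
    from notebookutils import mssparkutils

    # Simple load: run one orchestration notebook sequentially
    # (the second argument is a timeout in seconds).
    mssparkutils.notebook.run("MyProject_SimpleLoadExecution", 3600)

    # Multithreading load: let Fabric execute several notebooks in parallel.
    mssparkutils.notebook.runMultiple(
        ["MyProject_Load_Hubs", "MyProject_Load_Links", "MyProject_Load_Satellites"]
    )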