Fabric Lakehouse - Load data - Native Load Control - 1.10
Before loading the data from your Source System(s) into your Fabric Lakehouse Target System, make sure the previous steps have been completed. You are then ready to load the data.
There is one way to load the data natively with Fabric Lakehouse:
- In parallel with a multi-thread approach
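Conceptually, the multi-threaded approach runs one load task per Target Model Object in parallel. The following is a minimal, hypothetical sketch of that pattern; `load_object` and the object list are placeholder stand-ins, not the actual notebook code.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def load_object(name: str) -> tuple[str, int]:
    # Hypothetical stand-in for loading one Target Model Object;
    # in practice this work is done by the notebook's load cells.
    return name, 0  # (object name, rows loaded)

# Hypothetical list of Target Model Objects to load
target_objects = ["rdv_hub_customer", "stage_creditcard"]

def load_all(objects, max_workers=4):
    # Submit one load task per object and collect results as they finish
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(load_object, obj): obj for obj in objects}
        for fut in as_completed(futures):
            name, rows = fut.result()
            results[name] = rows
    return results
```

The actual degree of parallelism and the list of objects are driven by the MultiThreadingLoadExecution notebook described below.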
Load the data
This section explains how to load the data into the Fabric Lakehouse target environment.
To load the data:
- Open Microsoft Fabric at https://app.fabric.microsoft.com/

- Open the Workspace where you deployed the artifacts by clicking on the Workspace option on the left menu:

- Then click on the desired Workspace (bgfabricdvdm in our example):

- Open the ./LoadControl/XXXX_MultiThreadingLoadExecution.ipynb notebook
- Choose the Lakehouse by:
- Clicking on the Lakehouses menu on the left:

- Clicking on the Add button:

- Selecting Existing lakehouse:

- Selecting the Lakehouse previously created (docu_bglakehouse in our example):

- Execute all the steps
- The data are now loaded:
- You should have the target Parquet files created for each Target Model Object, for example, for the Stage CreditCard:

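If you prefer to verify from code that the Parquet files exist, a simple sketch like the following lists the part files under a target table directory; the path handling here is generic, not specific to Fabric, and the directory argument is whatever location your table's files land in.

```python
from pathlib import Path

def list_parquet_files(table_dir: str) -> list[str]:
    # Return the names of all Parquet part files under a table directory
    return sorted(p.name for p in Path(table_dir).glob("*.parquet"))
```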
- You can check the load:
- Open the ./Helpers/XXX_SelectResults.ipynb file
- Run it
- A summary of the number of rows loaded for each Target Model Object is displayed, for example:

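As an illustration only, a row-count summary of this shape could be rendered from a mapping of object names to counts; the object names and counts below are hypothetical.

```python
def summarize_row_counts(counts: dict[str, int]) -> str:
    # Render one line per Target Model Object: name, then rows loaded
    width = max(len(name) for name in counts)
    return "\n".join(
        f"{name.ljust(width)}  {rows:>10,}"
        for name, rows in sorted(counts.items())
    )
```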
You can now check that your data was correctly loaded with the following script:
Create a new step with the following code:
mydf = spark.sql("select * from `rdv_hub_customer_hub_result`")
mydf.show(truncate=False)
And see the content of your Target Parquet file:
