
Databricks Unity - Load data - Native Load Control - 1.9

Before loading the data from your source system(s) to your Databricks Unity target system, complete the prerequisites.

You are now ready to load the data.

To load the data natively with Databricks Unity Catalog, you will use a multi-threaded approach that loads the Target Model Objects in parallel.
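Conceptually, the multi-threaded load step runs one load task per Target Model Object and collects the row counts, which the results step then displays. A minimal sketch in plain Python of that pattern (the object names and the `load_object` function are placeholders for illustration, not part of the generated load control code):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Placeholder object names; the real load control reads the list of
# objects to load from its metadata structure.
TARGET_OBJECTS = ["rdv_hub_creditcard_hub", "stage_creditcard"]

def load_object(name: str) -> tuple[str, int]:
    """Placeholder for one load task; returns (object name, rows loaded)."""
    rows = 42  # a real task would execute the generated load logic
    return name, rows

# One thread pool, one task per Target Model Object, run in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(load_object, o) for o in TARGET_OBJECTS]
    results = dict(f.result() for f in as_completed(futures))

print(results)
```

The pool size caps how many objects load concurrently; the real multi-threading notebook manages this for you.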

You can also load the data with an external tool such as:

Load the data

This section explains how to load the data into the Databricks Unity target environment.

To load the data:

  • Open Databricks with the URL provided in the Azure Databricks Service resource:
  • Databricks is open:
  • Click on the Workspace menu on the left-hand side:
  • Expand the folders Workspace > Users > Your_user > artifacts:
  • Open the 500_Deploy_and_Load_Databricks.ipynb file, which contains four steps:
    1. %run ./LoadControl/XXX_Deployment_LoadControl: Deploy the load control structure
    2. %run ./Jupyter/XXX_Deployment: Deploy the code
    3. %run ./LoadControl/XXXX_MultiThreadingLoadExecution.ipynb: Load the data with multiple threads
    4. %run ./XXX_SelectResults.ipynb: Display the results
  • Execute step 3, then step 4
  • Select the Personal Compute created during the target environment setup:

  • Click on the Attach and run button:
  • The data is now loaded:
    • You should have the target Parquet files created for each Target Model Object, for example, for the Stage CreditCard:
    • Step 4 displays a summary of the number of rows loaded for each Target Model Object, for example:

    You can now check that your data was correctly loaded with the following script:

    %sql
    select * from `rawvault`.`rdv_hub_creditcard_hub`;

    You can also view the content of your target Parquet file:
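    As an additional check, you can compare row counts against the summary from step 4. A simple example against the same hub table used in the query above (adjust the schema and table name to your own Target Model Objects):

    ```sql
    %sql
    select count(*) as row_count
    from `rawvault`.`rdv_hub_creditcard_hub`;
    ```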