Connecting to Azure Synapse Analytics is easy and very similar to connecting to SQL Server. Omni Loader loads the data into Synapse using cloud storage as an intermediate storage layer, so you will need to specify the storage details as well.

Connecting to Synapse

The server hosting the Synapse database is an Azure server with a URL in the form <account>. There is no need to specify the port.

Omni Loader supports the following authentication schemes:

  • SQL Server
    This is the standard username/password combination stored in and verified by the SQL Server.
  • AAD password
    This is Azure Active Directory password authentication.
  • AAD multi-factor
    Azure Active Directory multi-factor authentication shows a pop-up that authenticates you as a separate step.
  • AAD integrated
    Azure Active Directory integrated authentication is the equivalent of what used to be called Windows authentication in SQL Server.

Once the server and authentication details are provided, you can type your database name or select it from the drop-down.
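As a rough sketch, the four schemes map onto standard Microsoft ODBC driver keywords. The server, database, and credentials below are placeholders, and the keyword names should be verified against your installed driver version; this is not Omni Loader's internal implementation.

```python
# Sketch: ODBC connection strings for the four authentication schemes.
# "ODBC Driver 18 for SQL Server" and the Authentication=... values are
# standard Microsoft ODBC options; all other names are placeholders.

def synapse_conn_str(server: str, database: str, auth: str,
                     user: str = "", password: str = "") -> str:
    """Build an ODBC connection string for the given auth scheme."""
    base = (f"Driver={{ODBC Driver 18 for SQL Server}};"
            f"Server={server};Database={database};Encrypt=yes;")
    if auth == "sql":                 # SQL Server username/password
        return base + f"UID={user};PWD={password};"
    if auth == "aad-password":        # AAD password
        return base + (f"Authentication=ActiveDirectoryPassword;"
                       f"UID={user};PWD={password};")
    if auth == "aad-mfa":             # AAD multi-factor (interactive pop-up)
        return base + "Authentication=ActiveDirectoryInteractive;"
    if auth == "aad-integrated":      # AAD integrated
        return base + "Authentication=ActiveDirectoryIntegrated;"
    raise ValueError(f"unknown auth scheme: {auth}")
```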

Connecting to the storage

Synapse can load data from an Azure Blob Storage container or from an Azure Data Lake Storage Gen2 (ADLSv2) container. We recommend ADLSv2.

The data format can be CSV or Parquet. CSV is an inefficient row-based textual format, while Parquet is a well-designed, fast columnar binary format. We recommend Parquet, as it is significantly faster to ingest and produces smaller data files.
Both formats can be used uncompressed or compressed. You should always use the compressed form, even if it means longer data preparation. Parquet can compress data up to 5 times, which directly impacts the efficiency of Synapse ingestion. While compressing terabytes of data takes time, Omni Loader is designed for efficiency and will use all CPU cores of the machine it is running on to compress the data. For further acceleration, you can run Omni Loader in cluster mode to utilize the CPUs of several machines.
CSV can be compressed using the relatively slow Gzip, while Parquet can also use the very fast Snappy codec. We recommend Parquet with Snappy compression.
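Omni Loader handles compression itself, but a quick standard-library sketch shows what compression buys on textual data. The CSV payload below is synthetic and the ratio is illustrative only; real-world ratios depend entirely on the data.

```python
import gzip

# Illustrative only: measure how much gzip shrinks a repetitive CSV payload.
# Real ratios vary with the data; Parquet+Snappy trades some ratio for much
# faster compression and decompression.
rows = "".join(f"{i},customer_{i % 100},2024-01-01,19.99\n" for i in range(10_000))
raw = rows.encode("utf-8")
compressed = gzip.compress(raw)

print(f"raw: {len(raw)} bytes, gzipped: {len(compressed)} bytes "
      f"({len(raw) / len(compressed):.1f}x smaller)")
```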

Data handling defines how Omni Loader generates the files and cleans up:

  • Clear before run
    Everything in the container folder will be deleted before the data copying starts.
  • Clear after run
    Everything in the container folder will be deleted after the data copying completes.
  • Timestamped
    Nothing is deleted. On each run, Omni Loader creates a new folder named after the current time, then places the files inside it. This keeps a complete history, but may accumulate a large amount of data after many runs.
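The timestamped mode can be sketched as deriving a fresh folder name from the current time so earlier runs are never overwritten. The name format below is an assumption for illustration, not Omni Loader's exact naming scheme.

```python
from datetime import datetime, timezone

def run_folder(prefix: str = "export") -> str:
    # Each run gets a folder named after the current UTC time, e.g.
    # "export/20240101-123456". The format is an assumed example.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    return f"{prefix}/{stamp}"
```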

We support three modes of storage authentication:

  • Connection string
    This is the least secure mode and should not be used in production. It requires the account key and grants full access to the whole storage account.
  • Managed identity
    A secure authentication mode, leveraging AAD to grant access to the resource.
  • Shared access signature
    A good middle ground where one can easily grant access to either the whole account or only a specific container.
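To make the trade-offs concrete, here is a hedged sketch of how the three modes typically surface in client configuration. The account name, container, and SAS token below are placeholders; real SAS tokens are issued by Azure, and `<key>`/`<signature>` stand in for secrets that are never written out like this.

```python
# Sketch: how the three storage auth modes appear in configuration.
# All names and token values are placeholders.

account = "mystorageaccount"
container = "staging"

# 1. Connection string: embeds the account key -> full account access.
conn_str = (f"DefaultEndpointsProtocol=https;AccountName={account};"
            f"AccountKey=<key>;EndpointSuffix=core.windows.net")

# 2. Managed identity: no secret in the config at all; the client library
#    obtains an AAD token for the resource at runtime.

# 3. Shared access signature: a scoped token appended to the container URL
#    (the .dfs endpoint is the ADLSv2 one; Blob Storage uses .blob).
sas_token = "sv=2022-11-02&sp=rwl&se=2025-01-01T00:00:00Z&sig=<signature>"
sas_url = f"https://{account}.dfs.core.windows.net/{container}?{sas_token}"

print(sas_url)
```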

The only thing left to specify is the name of the container in the storage account. Of course, if you use a Shared Access Signature scoped to a specific container, that is the container name you need to specify.

Optionally, if you would like to put the data into a subfolder of the container, you can specify a subfolder name.
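The resulting location inside the container can be sketched as a simple path join, with the subfolder applied only when given. Blob paths always use forward slashes, so `posixpath` is the right joiner regardless of the host OS; the names below are examples.

```python
import posixpath

def blob_path(subfolder: str, filename: str) -> str:
    # Prefix the file with the optional subfolder; blob paths use "/".
    return posixpath.join(subfolder, filename) if subfolder else filename

print(blob_path("exports/run1", "orders.parquet"))   # exports/run1/orders.parquet
print(blob_path("", "orders.parquet"))               # orders.parquet
```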