Azure Data Explorer Connector for Apache Spark

This library contains the source code for the Azure Data Explorer Data Source and Data Sink connector for Apache Spark.

Azure Data Explorer (A.K.A. Kusto) is a lightning-fast indexing and querying service.

Spark is a unified analytics engine for large-scale data processing.

Making Azure Data Explorer and Spark work together enables building fast and scalable applications, targeting a variety of Machine Learning, Extract-Transform-Load (ETL), Log Analytics, and other data-driven scenarios.

Changelog

For the main changes from previous releases, please refer to Releases. For known or new issues, please refer to the issues section.

Usage

Linking

For Scala/Java applications using Maven project definitions, link your application with the artifact below in order to use the Azure Data Explorer connector for Spark.

groupId = com.microsoft.azure.kusto
artifactId = spark-kusto-connector
version = 2.0.0

In Maven:

Look for the following coordinates:

com.microsoft.azure.kusto:spark-kusto-connector:2.0.0
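
For example, the connector can be pulled straight from Maven when launching a Spark shell; a sketch (match the version to the release you use):

    spark-shell --packages com.microsoft.azure.kusto:spark-kusto-connector:2.0.0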

Alternatively, clone this repository and build it locally to add the connector to your local Maven repository; the jar can also be found in the released package.

   <dependency>
     <groupId>com.microsoft.azure.kusto</groupId>
     <artifactId>spark-kusto-connector</artifactId>
     <version>2.0.0</version>
   </dependency>

In Databricks:

Create Library -> Maven with the following coordinates:

com.microsoft.azure.kusto:spark-kusto-connector:2.0.0

Building Samples Module

Samples are packaged as a separate module with the following artifact:

<artifactId>connector-samples</artifactId>

In order to build the whole project, comprising the connector module and the samples module, use the following artifact:

<artifactId>azure-kusto-spark</artifactId>

Build Prerequisites

In order to build the connector, you need a Java SDK and Maven installed; the build commands below assume both are on your path.

Note: when working with Spark version 2.3 or lower, please refer to the "Building for legacy Spark versions" section of the CHANGELOG document.

Build Commands

// Builds jar and runs all tests
mvn clean package

// Builds jar, runs all tests, and installs jar to your local maven repository
mvn clean install

Pre-Compiled Libraries

To facilitate ramp-up on platforms such as Azure Databricks, pre-compiled libraries are published under GitHub Releases. These libraries include:

Dependencies

The Spark Azure Data Explorer connector depends on the Azure Data Explorer Data Client Library and the Azure Data Explorer Ingest Client Library, both available in the Maven repository. When Key Vault based authentication is used, there is an additional dependency on the Microsoft Azure SDK for Key Vault.
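
As a sketch of what those dependencies look like in a POM, the client libraries are published under the com.microsoft.azure.kusto group as kusto-data and kusto-ingest (version elements are omitted here as placeholders; match them to the connector's own POM):

    <!-- versions omitted; match them to the connector's POM -->
    <dependency>
      <groupId>com.microsoft.azure.kusto</groupId>
      <artifactId>kusto-data</artifactId>
    </dependency>
    <dependency>
      <groupId>com.microsoft.azure.kusto</groupId>
      <artifactId>kusto-ingest</artifactId>
    </dependency>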

Note: When working with Databricks, the Azure Data Explorer connector requires the Azure Data Explorer Java client libraries (and the Azure Key Vault library, if used) to be installed. This can be done by accessing Databricks Create Library -> Maven and specifying the following coordinates:

  • com.microsoft.azure.kusto:spark-kusto-connector:2.0.0

Documentation

Detailed documentation can be found here.

Samples

Usage examples can be found here.

Available Azure Data Explorer client libraries:

For convenience, here is a PySpark sample for the connector.
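
The sketch below writes a DataFrame to a Kusto table and reads it back, assuming AAD application (client credential) authentication; all <...> values are placeholders to replace with your own, and the option names should be verified against the connector documentation:

    # A minimal sketch, not a complete application. All <...> values are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("KustoConnectorSample").getOrCreate()
    df = spark.createDataFrame([("hello", 1), ("world", 2)], ["colA", "colB"])

    # Write the DataFrame to an Azure Data Explorer (Kusto) table
    df.write \
        .format("com.microsoft.kusto.spark.datasource") \
        .option("kustoCluster", "<cluster-name>") \
        .option("kustoDatabase", "<database-name>") \
        .option("kustoTable", "<table-name>") \
        .option("kustoAadAppId", "<application-client-id>") \
        .option("kustoAadAppSecret", "<application-secret>") \
        .option("kustoAadAuthorityID", "<aad-tenant-id>") \
        .mode("Append") \
        .save()

    # Read the data back by running a Kusto query against the same table
    result = spark.read \
        .format("com.microsoft.kusto.spark.datasource") \
        .option("kustoCluster", "<cluster-name>") \
        .option("kustoDatabase", "<database-name>") \
        .option("kustoQuery", "<table-name> | take 10") \
        .option("kustoAadAppId", "<application-client-id>") \
        .option("kustoAadAppSecret", "<application-secret>") \
        .option("kustoAadAuthorityID", "<aad-tenant-id>") \
        .load()
    result.show()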

Need Support?

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.