
Apicurio Registry

An API/Schema registry - stores and retrieves APIs and Schemas.

Build Configuration

This project supports several build configuration options that affect the produced executables.

By default, mvn clean install produces an executable JAR with the dev Quarkus configuration profile enabled, and in-memory persistence implementation.
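
For example (a minimal sketch; the exact runner JAR path depends on the build and is an assumption here):

    # default build: dev profile, in-memory persistence
    mvn clean install

    # run the resulting executable JAR (assumed path -- adjust to your build output)
    java -jar app/target/apicurio-registry-app-*-runner.jar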

Apicurio Registry supports 4 persistence implementations:

- In-Memory (default)
- JPA
- Kafka
- Streams

If you enable one, a separate set of artifacts is produced with that persistence implementation available.

Additionally, there are 2 main configuration profiles:

- dev (default)
- prod

Build Options
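
As an illustration of how a build could select a different profile and persistence implementation (a sketch; the Maven profile names below are assumptions derived from the storage and profile names above, so check the project POMs for the real options):

    # hypothetical profile names -- verify against the project POMs
    mvn clean install -Pprod           # prod configuration profile, in-memory persistence
    mvn clean install -Pprod -Pjpa     # prod profile plus the JPA persistence artifacts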

Runtime Configuration

The following parameters are available for executable files:

JPA

Option           Command argument                Env. variable
Data Source URL  -Dquarkus.datasource.url        QUARKUS_DATASOURCE_URL
DS Username      -Dquarkus.datasource.username   QUARKUS_DATASOURCE_USERNAME
DS Password      -Dquarkus.datasource.password   QUARKUS_DATASOURCE_PASSWORD
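
For example, pointing the JPA executable at a local PostgreSQL instance might look like this (a sketch; the JDBC URL, credentials, and JAR path are illustrative assumptions):

    # hypothetical values -- adjust URL, credentials, and JAR path to your environment
    java \
      -Dquarkus.datasource.url=jdbc:postgresql://localhost:5432/apicurio-registry \
      -Dquarkus.datasource.username=apicurio-registry \
      -Dquarkus.datasource.password=password \
      -jar app/target/apicurio-registry-app-*-runner.jar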

To see additional options, visit:

Kafka

The Kafka storage implementation uses the following Kafka API / architecture:

We already have sensible defaults for all of these, but they can still be overridden or extended by adding the appropriate properties to the application's configuration. The following property name prefix must be used:

We then strip away the prefix and use the rest of the property name in the corresponding client instance's Properties.

e.g. registry.kafka.storage-producer.enable.idempotence=true --> enable.idempotence=true
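
For instance, a couple of producer overrides could be supplied as system properties (or equivalently in the application configuration); a sketch, with illustrative values and an assumed runner JAR path:

    # each registry.kafka.storage-producer.* property has the prefix stripped
    # and is passed to the underlying Kafka producer unchanged
    java \
      -Dregistry.kafka.storage-producer.enable.idempotence=true \
      -Dregistry.kafka.storage-producer.acks=all \
      -jar app/target/apicurio-registry-app-*-runner.jar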

For the actual configuration options, check the following (although the best configuration docs are in the code itself):

To help set up a development / testing environment for the module, see the kafka_setup.sh script. You just need to have the KAFKA_HOME env variable set, and the script does the rest.

Streams

The Streams storage implementation goes beyond plain Kafka usage and uses Kafka Streams to handle storage in a distributed and fault-tolerant way.

Both modes require 2 topics: the storage topic and the globalId topic. This is configurable; by default the storage-topic and global-id-topic names are used.
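
For a manual setup, the two topics can be created with the standard Kafka tooling, for example (partition and replication values here are illustrative):

    # create the two topics used by the Streams storage (default names)
    $KAFKA_HOME/bin/kafka-topics.sh --create --topic storage-topic \
      --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092
    $KAFKA_HOME/bin/kafka-topics.sh --create --topic global-id-topic \
      --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092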

The Streams storage implementation uses the following Kafka (Streams) API / architecture:

The two KeyValueStores keep the following structure:

We use the global id store to map a unique id to an <artifactId, version> pair, which also uniquely identifies an artifact. The data is distributed among the nodes' stores, and remote stores are accessed based on key distribution via gRPC.

We already have sensible defaults for all of these, but they can still be overridden or extended by adding the appropriate properties to the application's configuration. The following property name prefix must be used:

We then strip away the prefix and use the rest of the property name in the corresponding instance's Properties.

e.g. registry.streams.topology.replication.factor=1 --> replication.factor=1

For the actual configuration options, check the following (although the best configuration docs are in the code itself):

To help set up a development / testing environment for the module, see the streams_setup.sh script. You just need to have the KAFKA_HOME env variable set, and the script does the rest.

Docker containers

Every time a commit is pushed to master, an updated set of docker images is built and pushed to Docker Hub. There are several docker images to choose from, one for each storage option. The images include:

Run one of the above docker images like this:

docker run -it -p 8080:8080 apicurio/apicurio-registry-mem

The same configuration options are available for the docker containers, but only in the form of environment variables (the command-line parameters are for the Java executable, and at the moment it is not possible to pass them into the container). Each docker image supports the environment variable configuration options documented above for its respective storage type.
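
For example, the JPA image can be pointed at a database through environment variables (a sketch; the JDBC URL and credentials are illustrative):

    # hypothetical values -- point the container at your own database
    docker run -it -p 8080:8080 \
      -e QUARKUS_DATASOURCE_URL=jdbc:postgresql://<your-db-host>:5432/apicurio-registry \
      -e QUARKUS_DATASOURCE_USERNAME=apicurio-registry \
      -e QUARKUS_DATASOURCE_PASSWORD=password \
      apicurio/apicurio-registry-jpa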

There are a variety of docker image tags to choose from when running the registry docker images. Each release of the project has a specific tag associated with it. So release 1.2.0.Final has an equivalent docker tag specific to that release. We also support the following moving tags:

Examples

Run Apicurio Registry with Postgres:

    services:
      postgres:
        image: postgres
        environment:
          POSTGRES_USER: apicurio-registry
          POSTGRES_PASSWORD: password
      app:
        image: apicurio/apicurio-registry-jpa:1.0.0-SNAPSHOT
        ports:
          - 8080:8080
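
Assuming the snippet above is saved as docker-compose.yml (and the app service is also given the datasource environment variables described earlier), the stack can be started with docker-compose up.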

Security

To run Apicurio Registry against secured Kafka broker(s) in Docker/Kubernetes/OpenShift, you can put the following system properties into the JAVA_OPTIONS env var:

Of course, the %dev prefix depends on the Quarkus profile you're going to use; it should be %prod when used in production.
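
A rough sketch of what such a JAVA_OPTIONS value could look like (the property names, prefix, and paths below are assumptions for illustration and must be adapted to how your brokers are secured and to the storage variant you use):

    # hypothetical example -- property names, prefix, and paths are assumptions
    JAVA_OPTIONS="-D%dev.registry.streams.common.security.protocol=SSL \
      -D%dev.registry.streams.common.ssl.truststore.location=/etc/kafka/truststore.jks \
      -D%dev.registry.streams.common.ssl.truststore.password=changeit"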

Eclipse IDE

Some notes about using the Eclipse IDE with the Apicurio Registry codebase. Before importing the registry into your workspace, we recommend some configuration of the Eclipse IDE.

Lombok Integration

We use the Lombok code generation utility in a few places. This will cause problems when Eclipse builds the sources unless you install the Lombok+Eclipse integration. To do this, either download the Lombok JAR or find it in your .m2/repository directory (it will be available in .m2 if you've done a maven build of the registry). Once you find that JAR, simply "run" it (e.g. double-click it) and use the resulting installer UI to install Lombok support in Eclipse.

Maven Dependency Plugin (unpack, unpack-dependencies)

We use the maven-dependency-plugin in a few places to unpack a maven module in the reactor into another module. For example, the app module unpacks the contents of the ui module to include/embed the user interface into the running application. Eclipse does not like this. To fix this, configure the Eclipse Maven "Lifecycle Mappings" to ignore the usage of maven-dependency-plugin.

    <pluginExecution>
      <pluginExecutionFilter>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <versionRange>3.1.2</versionRange>
        <goals>
          <goal>unpack</goal>
          <goal>unpack-dependencies</goal>
        </goals>
      </pluginExecutionFilter>
      <action>
        <ignore />
      </action>
    </pluginExecution>

Prevent Eclipse from aggressively cleaning generated classes

We use some Google Protobuf files and a maven plugin to generate some Java classes that get stored in various modules' target directories. These are then recognized by m2e but are sometimes deleted during the Eclipse "clean" phase. To prevent Eclipse from over-cleaning these files, find the os-maven-plugin-1.6.2.jar JAR in your .m2/repository directory and copy it into $ECLIPSE_HOME/dropins.
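
For example, the copy could be done like this (a sketch; the local-repository path below assumes the plugin's usual groupId and should be verified against your .m2):

    # assumed local-repository path for os-maven-plugin 1.6.2
    cp ~/.m2/repository/kr/motd/maven/os-maven-plugin/1.6.2/os-maven-plugin-1.6.2.jar \
       "$ECLIPSE_HOME/dropins/"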