Configuring Apicurio Registry storage
This chapter explains how to configure the available Apicurio Registry storage options.

This chapter focuses mostly on storage configuration procedures for OpenShift using the OperatorHub UI. If you are deploying to Kubernetes, you can use command-line tools to perform the equivalent steps. The Apicurio Registry Operator supports the same configuration options on OpenShift and Kubernetes.
Apicurio Registry storage options
The main decision to make when deploying Apicurio Registry is which storage backend to use.
The following storage options are available:
| Storage option | Description |
| --- | --- |
| In-memory | Data is stored in RAM on each Apicurio Registry node. This is the easiest deployment to use, but is not recommended for a production environment. All data is lost when restarting Apicurio Registry with this storage option, so it is suitable for a development environment only. |
| SQL database | Data is stored in a relational database, in this case PostgreSQL 12+. This is the recommended storage option in a production environment for performance, stability, and data management (backup/restore, and so on). |
| Apache Kafka | Data is stored using Apache Kafka, with the help of a local SQL database on each node. This storage option is provided for production environments where database management expertise is not available, or where storage in Kafka is a specific requirement. |
The following options require that the storage is already installed as a prerequisite:
- SQL (PostgreSQL)
- Apache Kafka
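Each storage option corresponds to a value of the `persistence` field in the `ApicurioRegistry` CR. As a quick orientation, these are the values used by the procedures in this chapter:

```yaml
spec:
  configuration:
    persistence: "mem"        # In-memory (default)
    # persistence: "sql"      # SQL database (PostgreSQL)
    # persistence: "kafkasql" # Apache Kafka
```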
Configuring Apicurio Registry in-memory storage using the CLI
The in-memory storage option uses RAM to store the data on nodes where Apicurio Registry is deployed, and it is the simplest persistence option to use.
The Apicurio Registry Operator deploys Apicurio Registry this way if you do not provide any configuration in the `ApicurioRegistry` CR:

```yaml
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry-mem
spec:
  configuration:
    persistence: "mem" # Optional (default value)
# NOTE: No additional configuration is required for a *dev* deployment.
```
- You must have a Kubernetes cluster with cluster administrator access.
- You must have already installed the Apicurio Registry Operator.
- Deploy the example CR using the following commands:

  ```shell
  export NAMESPACE="default"
  curl -sSL "https://raw.githubusercontent.com/Apicurio/apicurio-registry-operator/main/docs/modules/ROOT/examples/apicurioregistry_mem_cr.yaml" | kubectl apply -f - -n $NAMESPACE
  ```
This persistence option does not support data distribution across Apicurio Registry nodes. Therefore, it is recommended only for development environments that use a single replica (Pod). Use a persistence option that supports data distribution, such as SQL or Kafka storage, when deploying multiple replicas.
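Because in-memory data is local to each Pod, you may want to pin the deployment to a single replica explicitly. A minimal sketch, assuming your Operator version exposes the `spec.deployment.replicas` field (verify against your Operator's CRD):

```yaml
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry-mem
spec:
  configuration:
    persistence: "mem"
  deployment:
    replicas: 1 # in-memory data is per-Pod, so keep a single replica
```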
Configuring SQL (PostgreSQL) storage
- You must have an OpenShift cluster with cluster administrator access.
- You must have already installed the Apicurio Registry Operator.
- You have a PostgreSQL database reachable from your OpenShift cluster. See Installing Apicurio Registry Operator using the OperatorHub.
- In the OpenShift Container Platform web console, log in with cluster administrator privileges.
- Change to the OpenShift project in which Apicurio Registry and your PostgreSQL Operator are installed. For example, from the Project drop-down, select `my-project`.
- Click Installed Operators > Apicurio Registry > ApicurioRegistry > Create ApicurioRegistry.
- Paste in the following `ApicurioRegistry` CR, and edit the values for the database URL and credentials to match your environment:

  ```yaml
  apiVersion: registry.apicur.io/v1
  kind: ApicurioRegistry
  metadata:
    name: example-apicurioregistry-sql
  spec:
    configuration:
      persistence: "sql"
      sql:
        dataSource:
          url: "jdbc:postgresql://<service name>.<namespace>.svc:5432/<database name>"
          userName: "postgres"
          password: "<password>" # Optional
  ```
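For example, with a hypothetical PostgreSQL service named `postgres-db` in the `my-project` namespace and a database named `registry`, the data source settings would look like this (all names and the password are illustrative):

```yaml
sql:
  dataSource:
    url: "jdbc:postgresql://postgres-db.my-project.svc:5432/registry"
    userName: "postgres"
    password: "registry-password" # illustrative only; use your own credentials
```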
- Click Create and wait for the Apicurio Registry route to be created on OpenShift.
- Click Networking > Route to access the new route for the Apicurio Registry web console.
Configuring plain Kafka storage with no security
You can configure the Strimzi Operator and Apicurio Registry Operator to use a default connection with no security.
- You have installed the Apicurio Registry Operator using the OperatorHub or command line.
- You have installed the Strimzi Operator or have Kafka accessible from your OpenShift cluster.
- In the OpenShift web console, click Installed Operators, select the Strimzi Operator details, and then the Kafka tab.
- Click Create Kafka to provision a new Kafka cluster for Apicurio Registry storage. You can use the default values, for example:

  ```yaml
  apiVersion: kafka.strimzi.io/v1beta2
  kind: Kafka
  metadata:
    name: my-cluster
    namespace: registry-example-kafkasql-plain # Change or remove the explicit namespace
  spec:
    kafka:
      config:
        offsets.topic.replication.factor: 3
        transaction.state.log.replication.factor: 3
        transaction.state.log.min.isr: 2
        log.message.format.version: '2.7'
        inter.broker.protocol.version: '2.7'
      version: 2.7.0
      storage:
        type: ephemeral
      replicas: 3
      listeners:
        - name: plain
          port: 9092
          type: internal
          tls: false
        - name: tls
          port: 9093
          type: internal
          tls: true
    entityOperator:
      topicOperator: {}
      userOperator: {}
    zookeeper:
      storage:
        type: ephemeral
      replicas: 3
  ```

  Your OpenShift project namespace might be different.
- When the cluster is ready, open the Kafka resource, examine the `status` block, and copy the `bootstrapServers` value for later use when deploying Apicurio Registry. For example:

  ```yaml
  status:
    conditions:
      ...
    listeners:
      - addresses:
          - host: my-cluster-kafka-bootstrap.registry-example-kafkasql-plain.svc
            port: 9092
        bootstrapServers: 'my-cluster-kafka-bootstrap.registry-example-kafkasql-plain.svc:9092'
        type: plain
    ...
  ```
The default Kafka topic name that Apicurio Registry automatically creates to store data is `kafkasql-journal`. You can override this behavior or the default topic name by setting environment variables. The default values are as follows:

- `REGISTRY_KAFKASQL_TOPIC_AUTO_CREATE=true`
- `REGISTRY_KAFKASQL_TOPIC=kafkasql-journal`

If you decide not to create the Kafka topic manually, skip the next step.
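For example, to use a custom topic that you create yourself, you could disable auto-creation and override the topic name. A sketch, assuming your Operator version supports setting environment variables through `spec.configuration.env` (verify against your Operator's CRD; the topic name is illustrative):

```yaml
spec:
  configuration:
    persistence: "kafkasql"
    env: # assumes the Operator exposes configuration.env
      - name: REGISTRY_KAFKASQL_TOPIC_AUTO_CREATE
        value: "false"
      - name: REGISTRY_KAFKASQL_TOPIC
        value: "my-journal-topic"
```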
- Click the Kafka Topic tab, and then Create Kafka Topic to create the `kafkasql-journal` topic:

  ```yaml
  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaTopic
  metadata:
    name: kafkasql-journal
    labels:
      strimzi.io/cluster: my-cluster
    namespace: registry-example-kafkasql-plain
  spec:
    partitions: 2
    replicas: 1
    config:
      cleanup.policy: compact
  ```
- Select the Apicurio Registry Operator, and in the ApicurioRegistry tab, click Create ApicurioRegistry, using the following example, but replace your value in the `bootstrapServers` field:

  ```yaml
  apiVersion: registry.apicur.io/v1
  kind: ApicurioRegistry
  metadata:
    name: example-apicurioregistry-kafkasql
  spec:
    configuration:
      persistence: "kafkasql"
      kafkasql:
        bootstrapServers: "my-cluster-kafka-bootstrap.registry-example-kafkasql-plain.svc:9092"
  ```
- Wait a few minutes to see the Route being created, where you can access the application.
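The example Kafka cluster above uses ephemeral storage, which is lost when Pods restart. For a longer-lived setup you would typically use Strimzi's persistent-claim storage type instead; a sketch, with an illustrative size:

```yaml
# Replaces the `storage` blocks under spec.kafka and spec.zookeeper
storage:
  type: persistent-claim
  size: 100Gi
  deleteClaim: false
```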
Configuring Kafka storage with TLS security
You can configure the Strimzi Operator and Apicurio Registry Operator to use an encrypted Transport Layer Security (TLS) connection.
- You have installed the Apicurio Registry Operator using the OperatorHub or command line.
- You have installed the Strimzi Operator or have Kafka accessible from your OpenShift cluster.

This section assumes that the Strimzi Operator is available; however, you can use any Kafka deployment. In that case, you must manually create the OpenShift secrets that the Apicurio Registry Operator expects.
- In the OpenShift web console, click Installed Operators, select the Strimzi Operator details, and then the Kafka tab.
- Click Create Kafka to provision a new Kafka cluster for Apicurio Registry storage.
- Configure the `authorization` and `tls` fields to use TLS authentication for the Kafka cluster, for example:

  ```yaml
  apiVersion: kafka.strimzi.io/v1beta2
  kind: Kafka
  metadata:
    name: my-cluster
    namespace: registry-example-kafkasql-tls # Change or remove the explicit namespace
  spec:
    kafka:
      config:
        offsets.topic.replication.factor: 3
        transaction.state.log.replication.factor: 3
        transaction.state.log.min.isr: 2
        log.message.format.version: '2.7'
        inter.broker.protocol.version: '2.7'
      version: 2.7.0
      storage:
        type: ephemeral
      replicas: 3
      listeners:
        - name: tls
          port: 9093
          type: internal
          tls: true
          authentication:
            type: tls
      authorization:
        type: simple
    entityOperator:
      topicOperator: {}
      userOperator: {}
    zookeeper:
      storage:
        type: ephemeral
      replicas: 3
  ```
The default Kafka topic name that Apicurio Registry automatically creates to store data is `kafkasql-journal`. You can override this behavior or the default topic name by setting environment variables. The default values are as follows:

- `REGISTRY_KAFKASQL_TOPIC_AUTO_CREATE=true`
- `REGISTRY_KAFKASQL_TOPIC=kafkasql-journal`

If you decide not to create the Kafka topic manually, skip the next step.
- Click the Kafka Topic tab, and then Create Kafka Topic to create the `kafkasql-journal` topic:

  ```yaml
  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaTopic
  metadata:
    name: kafkasql-journal
    labels:
      strimzi.io/cluster: my-cluster
    namespace: registry-example-kafkasql-tls
  spec:
    partitions: 2
    replicas: 1
    config:
      cleanup.policy: compact
  ```
- Create a KafkaUser resource to configure authentication and authorization for the Apicurio Registry user. You can specify a user name in the `metadata` section or use the default `my-user`:

  ```yaml
  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaUser
  metadata:
    name: my-user
    labels:
      strimzi.io/cluster: my-cluster
    namespace: registry-example-kafkasql-tls
  spec:
    authentication:
      type: tls
    authorization:
      acls:
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: topic
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: cluster
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: transactionalId
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: group
      type: simple
  ```
This simple example assumes admin permissions and creates the Kafka topic automatically. You must configure the `authorization` section specifically for the topics and resources that Apicurio Registry requires.

The following example shows the minimum configuration required when the Kafka topic is created manually:

```yaml
...
authorization:
  acls:
    - operations:
        - Read
        - Write
      resource:
        name: kafkasql-journal
        patternType: literal
        type: topic
    - operations:
        - Read
        - Write
      resource:
        name: apicurio-registry-
        patternType: prefix
        type: group
  type: simple
```
- Click Workloads and then Secrets to find two secrets that Strimzi creates for Apicurio Registry to connect to the Kafka cluster:

  - `my-cluster-cluster-ca-cert` - contains the PKCS12 truststore for the Kafka cluster
  - `my-user` - contains the user’s keystore

  The name of the secret can vary based on your cluster or user name.
- If you create the secrets manually, they must contain the following key-value pairs:

  - `my-cluster-ca-cert`
    - `ca.p12` - truststore in PKCS12 format
    - `ca.password` - truststore password
  - `my-user`
    - `user.p12` - keystore in PKCS12 format
    - `user.password` - keystore password
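If you are not using Strimzi, you can create these secrets by hand. A sketch of the two Secret manifests, assuming you already have the PKCS12 files locally; the names and namespace are illustrative and must match what the `ApicurioRegistry` CR references:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-cluster-ca-cert
  namespace: registry-example-kafkasql-tls
type: Opaque
stringData:
  ca.password: <truststore password>
data:
  ca.p12: <base64-encoded PKCS12 truststore>
---
apiVersion: v1
kind: Secret
metadata:
  name: my-user
  namespace: registry-example-kafkasql-tls
type: Opaque
stringData:
  user.password: <keystore password>
data:
  user.p12: <base64-encoded PKCS12 keystore>
```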
- Use the following example configuration to deploy Apicurio Registry:

  ```yaml
  apiVersion: registry.apicur.io/v1
  kind: ApicurioRegistry
  metadata:
    name: example-apicurioregistry-kafkasql-tls
  spec:
    configuration:
      persistence: "kafkasql"
      kafkasql:
        bootstrapServers: "my-cluster-kafka-bootstrap.registry-example-kafkasql-tls.svc:9093"
        security:
          tls:
            keystoreSecretName: my-user
            truststoreSecretName: my-cluster-cluster-ca-cert
  ```

  You must use a different `bootstrapServers` address than in the plain insecure use case. The address must support TLS connections, and is found in the specified Kafka resource under the `type: tls` field.
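By analogy with the `status` example in the plain Kafka procedure, the TLS bootstrap address appears in the Kafka resource status under the listener with `type: tls`; the address below is illustrative:

```yaml
status:
  listeners:
    - type: tls
      bootstrapServers: 'my-cluster-kafka-bootstrap.registry-example-kafkasql-tls.svc:9093'
```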
Configuring Kafka storage with SCRAM security
You can configure the Strimzi Operator and Apicurio Registry Operator to use Salted Challenge Response Authentication Mechanism (SCRAM-SHA-512) for the Kafka cluster.
- You have installed the Apicurio Registry Operator using the OperatorHub or command line.
- You have installed the Strimzi Operator or have Kafka accessible from your OpenShift cluster.

This section assumes that the Strimzi Operator is available; however, you can use any Kafka deployment. In that case, you must manually create the OpenShift secrets that the Apicurio Registry Operator expects.
- In the OpenShift web console, click Installed Operators, select the Strimzi Operator details, and then the Kafka tab.
- Click Create Kafka to provision a new Kafka cluster for Apicurio Registry storage.
- Configure the `authorization` and `tls` fields to use SCRAM-SHA-512 authentication for the Kafka cluster, for example:

  ```yaml
  apiVersion: kafka.strimzi.io/v1beta2
  kind: Kafka
  metadata:
    name: my-cluster
    namespace: registry-example-kafkasql-scram # Change or remove the explicit namespace
  spec:
    kafka:
      config:
        offsets.topic.replication.factor: 3
        transaction.state.log.replication.factor: 3
        transaction.state.log.min.isr: 2
        log.message.format.version: '2.7'
        inter.broker.protocol.version: '2.7'
      version: 2.7.0
      storage:
        type: ephemeral
      replicas: 3
      listeners:
        - name: tls
          port: 9093
          type: internal
          tls: true
          authentication:
            type: scram-sha-512
      authorization:
        type: simple
    entityOperator:
      topicOperator: {}
      userOperator: {}
    zookeeper:
      storage:
        type: ephemeral
      replicas: 3
  ```
The default Kafka topic name that Apicurio Registry automatically creates to store data is `kafkasql-journal`. You can override this behavior or the default topic name by setting environment variables. The default values are as follows:

- `REGISTRY_KAFKASQL_TOPIC_AUTO_CREATE=true`
- `REGISTRY_KAFKASQL_TOPIC=kafkasql-journal`

If you decide not to create the Kafka topic manually, skip the next step.
- Click the Kafka Topic tab, and then Create Kafka Topic to create the `kafkasql-journal` topic:

  ```yaml
  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaTopic
  metadata:
    name: kafkasql-journal
    labels:
      strimzi.io/cluster: my-cluster
    namespace: registry-example-kafkasql-scram
  spec:
    partitions: 2
    replicas: 1
    config:
      cleanup.policy: compact
  ```
- Create a KafkaUser resource to configure SCRAM authentication and authorization for the Apicurio Registry user. You can specify a user name in the `metadata` section or use the default `my-user`:

  ```yaml
  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaUser
  metadata:
    name: my-user
    labels:
      strimzi.io/cluster: my-cluster
    namespace: registry-example-kafkasql-scram
  spec:
    authentication:
      type: scram-sha-512
    authorization:
      acls:
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: topic
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: cluster
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: transactionalId
        - operation: All
          resource:
            name: '*'
            patternType: literal
            type: group
      type: simple
  ```
This simple example assumes admin permissions and creates the Kafka topic automatically. You must configure the `authorization` section specifically for the topics and resources that Apicurio Registry requires.

The following example shows the minimum configuration required when the Kafka topic is created manually:

```yaml
...
authorization:
  acls:
    - operations:
        - Read
        - Write
      resource:
        name: kafkasql-journal
        patternType: literal
        type: topic
    - operations:
        - Read
        - Write
      resource:
        name: apicurio-registry-
        patternType: prefix
        type: group
  type: simple
```
- Click Workloads and then Secrets to find two secrets that Strimzi creates for Apicurio Registry to connect to the Kafka cluster:

  - `my-cluster-cluster-ca-cert` - contains the PKCS12 truststore for the Kafka cluster
  - `my-user` - contains the user’s keystore

  The name of the secret can vary based on your cluster or user name.
- If you create the secrets manually, they must contain the following key-value pairs:

  - `my-cluster-ca-cert`
    - `ca.p12` - the truststore in PKCS12 format
    - `ca.password` - truststore password
  - `my-user`
    - `password` - user password
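If you are not using Strimzi and create the user secret manually, a minimal sketch of the Secret manifest follows; the name and namespace are illustrative and must match what the `ApicurioRegistry` CR references:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-user
  namespace: registry-example-kafkasql-scram
type: Opaque
stringData:
  password: <SCRAM user password> # the key-value pair listed above
```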
- Use the following example settings to deploy Apicurio Registry:

  ```yaml
  apiVersion: registry.apicur.io/v1
  kind: ApicurioRegistry
  metadata:
    name: example-apicurioregistry-kafkasql-scram
  spec:
    configuration:
      persistence: "kafkasql"
      kafkasql:
        bootstrapServers: "my-cluster-kafka-bootstrap.registry-example-kafkasql-scram.svc:9093"
        security:
          scram:
            truststoreSecretName: my-cluster-cluster-ca-cert
            user: my-user
            passwordSecretName: my-user
  ```

  You must use a different `bootstrapServers` address than in the plain insecure use case. The address must support TLS connections, and is found in the specified Kafka resource under the `type: tls` field.