Implementing multitenancy with Apicurio Registry
This chapter explains how to implement multitenancy with Apicurio Registry on Kubernetes using the Apicurio Registry Operator.
Apicurio Registry achieves multitenancy through a multi-instance operator pattern: each tenant receives a dedicated
ApicurioRegistry3 custom resource (CR), which the operator reconciles into an independent set of Kubernetes
resources. This approach provides strong isolation between tenants at the infrastructure level, leveraging
Kubernetes-native mechanisms for security, resource management, and network segmentation.
- You must have already installed the Apicurio Registry Operator. See [deploying-registry-operator_registry].
Overview
The Apicurio Registry Operator watches for ApicurioRegistry3 custom resources and creates the following Kubernetes resources
for each CR:
| Component | Resources created |
|---|---|
| Apicurio Registry backend (REST API) component | Deployment, Service, Ingress, NetworkPolicy, PodDisruptionBudget |
| Apicurio Registry web console component | Deployment, Service, Ingress, NetworkPolicy, PodDisruptionBudget |
Each CR instance is completely independent. Resources are named using the pattern
{cr-name}-{component}-{resource-type} and labeled with instance-specific selectors to prevent any
cross-tenant interference.
Isolation boundaries
The multi-instance approach provides the following isolation boundaries:
- Compute isolation: Separate Deployments and Pods per tenant.
- Network isolation: Separate Services, Ingresses, and optional NetworkPolicies per tenant.
- Storage isolation: Each tenant can use a separate database or Kafka cluster.
- Authentication isolation: Each tenant can have its own OIDC provider or realm.
- Configuration isolation: Environment variables and settings are per-CR.
Deployment patterns
You can deploy multiple Apicurio Registry instances using three primary patterns.
Pattern A: Single namespace, multiple tenants
All tenant registry instances reside in the same namespace. The operator creates uniquely named resources for each CR.
Namespace: apicurio-registries
├── Operator Pod
├── ApicurioRegistry3: tenant-alpha
│ ├── tenant-alpha-app-deployment
│ ├── tenant-alpha-ui-deployment
│ └── ...
└── ApicurioRegistry3: tenant-beta
├── tenant-beta-app-deployment
├── tenant-beta-ui-deployment
└── ...
This pattern is best for a small number of tenants managed by a single platform team.
Pattern B: Namespace per tenant (recommended)
Each tenant gets its own namespace. A single operator instance watches all namespaces.
Cluster
├── apicurio-system namespace
│ └── Operator Pod (watches all namespaces)
├── tenant-alpha namespace
│ └── ApicurioRegistry3: registry
└── tenant-beta namespace
└── ApicurioRegistry3: registry
This pattern provides stronger isolation using Kubernetes RBAC, ResourceQuotas, and NetworkPolicies at the namespace level. It is the recommended approach for most production environments.
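The namespace-level RBAC mentioned above can be sketched as a Role and RoleBinding per tenant namespace. The group name tenant-alpha-team below is a placeholder for however your cluster identifies the tenant's team:

```yaml
# Allows the tenant's team to manage registry CRs in their own namespace only.
# "tenant-alpha-team" is a hypothetical group name; adjust to your identity setup.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: registry-tenant-admin
  namespace: tenant-alpha
rules:
  - apiGroups: ["registry.apicur.io"]
    resources: ["apicurioregistries3"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: registry-tenant-admin
  namespace: tenant-alpha
subjects:
  - kind: Group
    name: tenant-alpha-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: registry-tenant-admin
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced, members of tenant-alpha-team cannot see or modify the ApicurioRegistry3 CRs of any other tenant.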
Pattern C: Operator per namespace
Each namespace has its own operator instance, restricted to watching only that namespace
via the APICURIO_OPERATOR_WATCHED_NAMESPACES environment variable.
Cluster
├── tenant-alpha namespace
│ ├── Operator Pod (APICURIO_OPERATOR_WATCHED_NAMESPACES="tenant-alpha")
│ └── ApicurioRegistry3: registry
└── tenant-beta namespace
├── Operator Pod (APICURIO_OPERATOR_WATCHED_NAMESPACES="tenant-beta")
└── ApicurioRegistry3: registry
This pattern provides maximum isolation and is suitable when tenants manage their own operator lifecycle.
Deploying with namespace-per-tenant isolation
This section walks through deploying Apicurio Registry using the namespace-per-tenant pattern (Pattern B).
- You must have cluster administrator access to a Kubernetes cluster.
- You must have already installed the Apicurio Registry Operator. See Installing Apicurio Registry on OpenShift.
- You must have storage infrastructure (PostgreSQL or Kafka) available for each tenant.
- Install the operator in a dedicated namespace. By default, it watches all namespaces:

kubectl create namespace apicurio-system
kubectl apply -f operator/install/install.yaml -n apicurio-system

- Verify the operator is running:

kubectl get pods -n apicurio-system

- Create tenant namespaces:

kubectl create namespace tenant-alpha
kubectl create namespace tenant-beta

- Create a database credentials Secret for each tenant:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: tenant-alpha
type: Opaque
stringData:
  password: tenant-alpha-db-password

- Deploy an ApicurioRegistry3 CR for each tenant. For example, Tenant Alpha with PostgreSQL storage:

apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry3
metadata:
  name: registry
  namespace: tenant-alpha
spec:
  app:
    storage:
      type: postgresql
      sql:
        dataSource:
          url: jdbc:postgresql://postgres-alpha.tenant-alpha.svc:5432/apicurio
          username: apicurio
          password:
            name: db-credentials
            key: password
    ingress:
      host: tenant-alpha-registry.apps.cluster.example
  ui:
    ingress:
      host: tenant-alpha-ui.apps.cluster.example

And Tenant Beta with KafkaSQL storage:

apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry3
metadata:
  name: registry
  namespace: tenant-beta
spec:
  app:
    storage:
      type: kafkasql
      kafkasql:
        bootstrapServers: "kafka-beta.tenant-beta.svc:9092"
    ingress:
      host: tenant-beta-registry.apps.cluster.example
  ui:
    ingress:
      host: tenant-beta-ui.apps.cluster.example

- Apply the custom resources:

kubectl apply -f tenant-alpha-registry.yaml
kubectl apply -f tenant-beta-registry.yaml

- Check the status of all registry instances across the cluster:

kubectl get apicurioregistries3 --all-namespaces

- Verify that pods are running in each tenant namespace:

kubectl get pods -n tenant-alpha
kubectl get pods -n tenant-beta

- Access each tenant's Apicurio Registry web console using the configured hostnames and verify that the UI loads successfully.
Configuring storage isolation
Each tenant’s CR must point to a separate storage backend. The operator does not automatically provision databases or Kafka topics; these must be pre-provisioned.
Separate databases on a shared PostgreSQL instance
The recommended approach is to use one database per tenant on a shared PostgreSQL instance. Apicurio Registry manages its own schema within each database. Use separate PostgreSQL users with permissions limited to their respective database for additional security.
# Tenant Alpha
spec:
app:
storage:
type: postgresql
sql:
dataSource:
url: jdbc:postgresql://shared-postgres.infra.svc:5432/tenant_alpha
username: tenant_alpha
password:
name: tenant-alpha-db-credentials
key: password
# Tenant Beta
spec:
app:
storage:
type: postgresql
sql:
dataSource:
url: jdbc:postgresql://shared-postgres.infra.svc:5432/tenant_beta
username: tenant_beta
password:
name: tenant-beta-db-credentials
key: password
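Because the operator does not provision storage, each tenant database and user must be created before the CR is applied. One way to sketch this is a one-shot Kubernetes Job; the namespace, image tag, admin Secret name, and credentials below are illustrative assumptions:

```yaml
# Hypothetical one-shot Job that pre-provisions Tenant Alpha's database and user
# on the shared PostgreSQL instance. Adjust image, host, and Secret names to your setup.
apiVersion: batch/v1
kind: Job
metadata:
  name: provision-tenant-alpha-db
  namespace: infra
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: psql
          image: postgres:16
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-admin   # assumed admin credentials Secret
                  key: password
          command:
            - psql
            - -h
            - shared-postgres.infra.svc
            - -U
            - postgres
            - -c
            - CREATE USER tenant_alpha WITH PASSWORD 'example-password';
            - -c
            - CREATE DATABASE tenant_alpha OWNER tenant_alpha;
```

Making the user the database owner, without grants on other databases, keeps each tenant's credentials scoped to its own data.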
Separate PostgreSQL instances
For maximum storage isolation, each tenant can have its own PostgreSQL instance:
spec:
app:
storage:
type: postgresql
sql:
dataSource:
url: jdbc:postgresql://postgres-alpha.tenant-alpha.svc:5432/apicurio
username: apicurio
password:
name: db-credentials
key: password
KafkaSQL storage
When using KafkaSQL storage, each tenant should use separate Kafka bootstrap servers or, at minimum, separate topics:
spec:
app:
storage:
type: kafkasql
kafkasql:
bootstrapServers: "kafka-alpha.tenant-alpha.svc:9092"
When using KafkaSQL, ensure each tenant uses separate Kafka topics. The default topic name
kafkasql-journal is the same for all instances. Configure separate topic names using the
APICURIO_KAFKASQL_TOPIC environment variable if tenants share a Kafka cluster.
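For tenants sharing a Kafka cluster, the topic override can be sketched as follows, assuming the CR accepts extra environment variables under spec.app.env:

```yaml
# Sketch: per-tenant topic name on a shared Kafka cluster.
# Assumes spec.app.env is supported for passing environment variables to the backend.
spec:
  app:
    storage:
      type: kafkasql
      kafkasql:
        bootstrapServers: "shared-kafka.infra.svc:9092"
    env:
      - name: APICURIO_KAFKASQL_TOPIC
        value: tenant-alpha-journal
```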
Configuring authentication per tenant
Each tenant instance can be configured with its own authentication settings. This allows different tenants to use different identity providers or OIDC realms.
Using shared Keycloak with separate realms
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry3
metadata:
name: registry
namespace: tenant-alpha
spec:
app:
auth:
enabled: true
appClientId: registry-api
uiClientId: apicurio-registry
authServerUrl: https://keycloak.example.com/realms/alpha
redirectUri: https://tenant-alpha-ui.apps.cluster.example
logoutUrl: https://tenant-alpha-ui.apps.cluster.example
authz:
enabled: true
ownerOnlyEnabled: true
groupAccessEnabled: true
readAccessEnabled: true
roles:
source: token
admin: sr-admin
developer: sr-developer
readOnly: sr-readonly
ingress:
host: tenant-alpha-registry.apps.cluster.example
ui:
ingress:
host: tenant-alpha-ui.apps.cluster.example
Using separate identity providers
Each tenant CR can reference a completely different OIDC provider by specifying different authServerUrl values:
# Tenant Alpha uses Keycloak
spec:
app:
auth:
enabled: true
authServerUrl: https://keycloak.example.com/realms/alpha
# Tenant Beta uses Microsoft Entra ID
spec:
app:
auth:
enabled: true
authServerUrl: https://login.microsoftonline.com/{tenant-id}/v2.0
Managing resources per tenant
Default resource allocation
Each Apicurio Registry instance receives the following default resource requests and limits:
| Component | CPU request | CPU limit | Memory request | Memory limit |
|---|---|---|---|---|
| App | 500m | 1 | 512Mi | 1Gi |
| UI | 100m | 200m | 256Mi | 512Mi |
The minimum total request per tenant is therefore 600m of CPU and 768Mi of memory.
Customizing resources per tenant
Use podTemplateSpec to adjust resources for high-traffic or resource-constrained tenants:
spec:
app:
podTemplateSpec:
spec:
containers:
- name: apicurio-registry-app
resources:
requests:
cpu: "1"
memory: 2Gi
limits:
cpu: "2"
memory: 4Gi
Scaling per tenant
Each tenant instance can be independently scaled:
spec:
app:
replicas: 3
ui:
replicas: 2
Using Kubernetes ResourceQuotas
Use Kubernetes ResourceQuota objects to enforce per-namespace resource limits when using the
namespace-per-tenant pattern:
apiVersion: v1
kind: ResourceQuota
metadata:
name: tenant-quota
namespace: tenant-alpha
spec:
hard:
requests.cpu: "2"
requests.memory: 4Gi
limits.cpu: "4"
limits.memory: 8Gi
pods: "10"
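A ResourceQuota that constrains requests and limits rejects any pod that does not declare them, so it is commonly paired with a LimitRange that supplies per-container defaults. A minimal sketch:

```yaml
# Supplies default requests/limits for containers in the tenant namespace
# so that pods without explicit values still satisfy the ResourceQuota.
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-defaults
  namespace: tenant-alpha
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      default:
        cpu: 500m
        memory: 512Mi
```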
Configuring network isolation
Namespace-level network isolation
For additional security when using the namespace-per-tenant pattern, apply a default-deny policy in each tenant namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
namespace: tenant-alpha
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
Then selectively allow traffic for the registry pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-registry-ingress
namespace: tenant-alpha
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: apicurio-registry
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: ingress-nginx
ports:
- protocol: TCP
port: 8080
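With a default-deny Egress policy in place, the registry pods also need explicit egress rules, at minimum for cluster DNS and their storage backend. A sketch, assuming the tenant's PostgreSQL pods carry the label app: postgres-alpha:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-registry-egress
  namespace: tenant-alpha
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: apicurio-registry
  policyTypes:
    - Egress
  egress:
    # DNS resolution via cluster DNS in kube-system
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
    # The tenant's own PostgreSQL instance (pod label is an assumption)
    - to:
        - podSelector:
            matchLabels:
              app: postgres-alpha
      ports:
        - protocol: TCP
          port: 5432
```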
Restricting the operator to specific namespaces
To limit the operator to watching only certain namespaces, set the APICURIO_OPERATOR_WATCHED_NAMESPACES
environment variable on the operator Deployment:
env:
- name: APICURIO_OPERATOR_WATCHED_NAMESPACES
value: "tenant-alpha,tenant-beta"
When this variable is empty or unset, the operator watches all namespaces. When installed via OLM, this is
automatically derived from the olm.targetNamespaces annotation.
Alternative: Lightweight isolation with groups
For scenarios where full instance-per-tenant isolation is excessive, Apicurio Registry supports logical isolation within a single instance using groups and owner-based access control.
How it works
- Each logical tenant uses one or more groups as their namespace within the registry.
- Users can only modify artifacts and groups they own.
Configuration
Enable owner-based access control in the ApicurioRegistry3 CR:
spec:
app:
auth:
enabled: true
appClientId: registry-api
uiClientId: apicurio-registry
authServerUrl: https://keycloak.example.com/realms/registry
redirectUri: https://registry-ui.apps.cluster.example
logoutUrl: https://registry-ui.apps.cluster.example
authz:
enabled: true
ownerOnlyEnabled: true
groupAccessEnabled: true
With this configuration:
- ownerOnlyEnabled restricts artifact modifications to the artifact's creator.
- groupAccessEnabled restricts group modifications to the group's creator.
Trade-offs
| Advantage | Limitation |
|---|---|
| Shared compute and storage reduces overhead | Weaker isolation than separate instances |
| Faster tenant provisioning (create a group, not a deployment) | No separate storage per tenant |
| Single deployment to monitor and upgrade | A bug or outage affects all tenants |
| Lower per-tenant cost | No independent scaling per tenant |
This approach is suitable for development environments, internal teams, or scenarios where tenants do not require strict data or infrastructure isolation.
- For more information on deploying Apicurio Registry using the operator, see Deploying Apicurio Registry using the Operator.
- For more information on configuring Apicurio Registry security, see Configuring Apicurio Registry security options.
- For the complete operator configuration reference, see Apicurio Registry Operator configuration reference.
