Integrations on On-Premise
The integration module is part of every Business Wallet deployment, but its built-in channels and the SPI for custom adapters expose extra knobs when you self-host. This page covers what you need to know on top of the SaaS-equivalent guides.
Operator profiles
There are two ways to run the wallet on-premise, and they have slightly different integration stories:
| Profile | Description | Channel options |
|---|---|---|
| Docker-image users | You run the published organization-webwallet-backend container as-is, configured via application.yml and Helm values. | Webhook (always). Kafka and JMS if the corresponding optional dependencies are on the image's classpath (they are, in the default published image). |
| Embedded-library users | You depend on the wallet as a Maven artifact inside your own Spring Boot application. | All of the above, plus the custom adapter SPI for sending events to systems we do not ship out of the box. |
Both profiles share the same configuration surface for the built-in channels.
Built-in channel configuration
Each built-in channel can be globally enabled or disabled in application.yml. Disabling a channel here makes it impossible for a wallet operator to create a channel of that type — useful if your security policy forbids, say, raw JMS to a specific broker.
```yaml
integration:
  channels:
    webhook:
      enabled: true   # default — almost always leave on
    kafka:
      enabled: false  # toggle on if you have a Kafka cluster
    jms:
      enabled: false  # toggle on if you have a JMS broker
```
Per-channel runtime parameters (Kafka bootstrap servers, JMS broker URL, default timeouts, …) live under the same prefix — see the on-premise installation guide's autogenerated values reference for the exhaustive list.
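As a sketch only, a Kafka-enabled deployment could combine the toggle with its runtime parameters like this; the parameter names below are placeholders, so take the real keys from the values reference:

```yaml
integration:
  channels:
    kafka:
      enabled: true
      # Hypothetical parameter names, for illustration only; the
      # autogenerated values reference lists the actual keys.
      bootstrap-servers: kafka-0.internal:9092
      delivery-timeout: 30s
```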
Optional Maven dependencies
To keep the slim deployment slim, Kafka and JMS are declared as optional Maven dependencies in the wallet's pom.xml:
```xml
<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-activemq</artifactId>
  <optional>true</optional>
</dependency>
```
The `<optional>true</optional>` declaration means:
- The wallet compiles against these libraries (so the channel implementations exist), but
- They are not transitively pulled into a downstream project that depends on the wallet artifact.
For embedded-library users this matters: if you do not need Kafka or JMS, simply do not add those dependencies to your own application's `pom.xml`. The auto-configuration is guarded by `@ConditionalOnClass`, so the channel beans are silently skipped when the libraries are absent — no configuration change required.
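The classpath guard is easiest to picture as a `Class.forName` probe. The sketch below is ours (plain Java, no Spring involved), but it mirrors the decision `@ConditionalOnClass` makes at startup:

```java
// Plain-Java illustration of a classpath guard; the wallet's real
// auto-configuration uses Spring's @ConditionalOnClass instead.
public class ChannelGuardSketch {

    // Returns true when the guard class is loadable, i.e. the optional
    // dependency is on the classpath and the channel beans may be created.
    static boolean channelAvailable(String guardClassName) {
        try {
            Class.forName(guardClassName);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Always present in any JVM:
        System.out.println(channelAvailable("java.util.List"));
        // Absent unless spring-kafka is on the classpath:
        System.out.println(channelAvailable("org.springframework.kafka.core.KafkaTemplate"));
    }
}
```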
For docker-image users, the published image always ships with both Kafka and JMS on the classpath; the runtime toggle is `integration.channels.{kafka,jms}.enabled`.
Custom-adapter SPI
The custom-adapter SPI is the embedded-library escape hatch. Use it when you want to deliver wallet events to a system the wallet does not ship a built-in channel for — for example, an internal service bus, AWS SNS, or Google Pub/Sub.
A custom adapter consists of three parts:
- A `ChannelTypeRegistration` bean that announces the new channel type to the wallet.
- A `MessageHandler` bean named `integrationOutboundChannel.<TYPE>` that delivers a single CloudEvent.
- A piece of channel-config JSON the operator fills in when creating the channel.
Below is a worked example for AWS SNS.
1. Register the channel type
```java
package com.example.wallet.integration.sns;

import com.credenco.webwallet.backend.integration.api.ChannelTypeRegistration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SnsChannelRegistration {

    @Bean
    public ChannelTypeRegistration snsChannelType() {
        return ChannelTypeRegistration.builder()
                .type("SNS")
                .label("AWS SNS")
                .description("Publish CloudEvents to an AWS SNS topic.")
                .configSchema("""
                        {
                          "$schema": "https://json-schema.org/draft/2020-12/schema",
                          "type": "object",
                          "required": ["topicArn", "region"],
                          "properties": {
                            "topicArn": { "type": "string", "pattern": "^arn:aws:sns:" },
                            "region": { "type": "string" }
                          }
                        }
                        """)
                .build();
    }
}
```
The `type` value is opaque to the wallet but must match the suffix of the message handler's bean name (see step 2). The `configSchema` is a JSON Schema that the wallet UI validates when an operator creates a channel of this type.
2. Implement and name the message handler
```java
package com.example.wallet.integration.sns;

import com.credenco.webwallet.backend.integration.api.CloudEventEnvelope;
import com.credenco.webwallet.backend.integration.api.IntegrationChannel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHandler;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;

@Configuration
public class SnsChannelAutoConfiguration {

    @Bean(name = "integrationOutboundChannel.SNS")
    public MessageHandler snsOutboundHandler(SnsClient sns) {
        return (Message<?> message) -> {
            CloudEventEnvelope envelope = (CloudEventEnvelope) message.getPayload();
            IntegrationChannel channel =
                    (IntegrationChannel) message.getHeaders().get("integrationChannel");

            String topicArn = channel.config().get("topicArn").asText();
            String body = envelope.toStructuredJson();

            sns.publish(PublishRequest.builder()
                    .topicArn(topicArn)
                    .message(body)
                    .messageAttributes(envelope.toSnsAttributes())
                    .build());
        };
    }
}
```
The bean must be named `integrationOutboundChannel.<TYPE>` — the dispatcher resolves the handler by exactly this convention. If the handler throws, the dispatcher applies the standard retry-with-back-off behaviour.
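The naming convention amounts to a keyed lookup in the application context. Here is a hypothetical sketch with a plain map standing in for the Spring context (`DispatchSketch` and its types are ours, not the wallet's API):

```java
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical model of resolve-by-bean-name dispatch; the wallet's real
// dispatcher lives inside the integration module and is not public API.
public class DispatchSketch {

    static final String PREFIX = "integrationOutboundChannel.";

    // Looks up the handler for a channel type by name, exactly as the
    // "integrationOutboundChannel.<TYPE>" convention implies.
    static Consumer<String> resolve(Map<String, Consumer<String>> beans, String channelType) {
        Consumer<String> handler = beans.get(PREFIX + channelType);
        if (handler == null) {
            throw new IllegalStateException("No handler bean for channel type " + channelType);
        }
        return handler;
    }

    public static void main(String[] args) {
        Map<String, Consumer<String>> beans =
                Map.of(PREFIX + "SNS", event -> System.out.println("published " + event));
        resolve(beans, "SNS").accept("evt-1"); // prints "published evt-1"
    }
}
```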
3. Operators configure the channel
Once your application starts with the registration above on the classpath, operators can create a channel of type `SNS` via the wallet UI. The form validates their JSON against the `configSchema`. The channel's deliveries appear in the standard Deliveries view, just like webhook or Kafka deliveries.
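For the SNS example above, a channel config that passes the step-1 schema could look like the following; the ARN, account id, and region are invented for illustration:

```json
{
  "topicArn": "arn:aws:sns:eu-west-1:123456789012:wallet-events",
  "region": "eu-west-1"
}
```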
The same pattern works for Google Pub/Sub, Azure Service Bus, NATS, an internal HTTP gateway, or any other transport you can reach from your JVM.
Provisioning the integration authorities in Keycloak
The four integration authorities are added to Keycloak by the wallet's standard realm-import process — but if you are upgrading an existing on-premise installation you may need to provision them manually.
In the realm where wallet roles live, create the following four authorities (under the roles that should grant them):
| Authority | Typical role |
|---|---|
| `WALLET_INTEGRATION_CHANNEL_READ` | Wallet operator (read-only). |
| `WALLET_INTEGRATION_CHANNEL_CRUD` | Wallet administrator. |
| `WALLET_INTEGRATION_DELIVERY_READ` | Wallet operator (read-only). |
| `WALLET_INTEGRATION_DELIVERY_CRUD` | Wallet administrator. |
A super-admin role with `GLOBAL_WALLET_CRUD` already implies all four. The wallet refreshes its authority cache on token refresh, so newly granted authorities are picked up at the next login or token renewal.
If you use the public realm-import bundle that ships with the wallet, the authorities are pre-declared — you only need to assign them to the roles that match your operating model.
Troubleshooting
Failed deliveries
When a delivery moves to the FAILED state the wallet retains the last response code, a truncated response body, and the error message from the underlying client. Open the delivery in the Deliveries tab to see the full request and the response.
Common causes and fixes:
| Symptom | Likely cause | Action |
|---|---|---|
| HTTP 401 on every webhook delivery | Receiver is rejecting the API key or Bearer token, or signature verification is failing on their side. | Confirm the shared secret matches; double-check raw-body capture; for OAuth2, check the token endpoint and scope. |
| HTTP 403 | Receiver does not authorise this caller. | Check authorisation on the receiver. |
| Connection timeout | Network ACL or firewall is dropping outbound traffic. | Verify the wallet's egress rules and the receiver's ingress rules. |
| Kafka `LEADER_NOT_AVAILABLE` | Topic does not exist or broker is mid-rebalance. | Create the topic; let the rebalance complete; then replay the failed deliveries. |
| Repeated DNS failures | Receiver hostname is not resolvable from the wallet's network. | Add the receiver to the wallet's resolver / hosts file. |
Replaying
A `FAILED` delivery can be moved back to `PENDING` (and re-attempted from scratch) by selecting it in the Deliveries tab and clicking Replay. You can bulk-replay many deliveries at once. Replays reset `attempts` to 0, so the same retry budget applies.
ShedLock
Cleanup jobs (deleting old DELIVERED rows after the retention window, removing already-completed Modulith publications) run under ShedLock so they fire exactly once per cluster per scheduled tick, regardless of how many wallet replicas you run.
If the cleanup job is silent, check:
- The `shedlock` table in the database — every cluster-wide scheduled job has a row here. Stale `lock_until` values point to a previous, now-dead replica that crashed mid-job; the lock releases automatically once its `lock_until` passes.
- Wallet logs for warnings that mention `LockProvider` or `LockableTaskScheduler` — connectivity issues to the database surface here.
- The `INTEGRATION` history events in the audit log — successful cleanup runs are recorded here for auditability.
A force-unlock is rarely required; in normal operation `lock_until` self-expires within minutes.
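To inspect the lock state directly, a query over ShedLock's JDBC table works; this assumes ShedLock's default column layout (`name`, `locked_at`, `locked_by`, `lock_until`) and the default table name:

```sql
-- One row per cluster-wide scheduled job: who holds the lock and until when.
SELECT name, locked_by, locked_at, lock_until
FROM shedlock
ORDER BY lock_until DESC;
```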
Where to next
- Integrations overview for the cross-deployment basics.
- On-premise installation for the full Helm-chart deployment guide.
- On-premise IDP setup for Keycloak realm and role configuration.