Migrate from Apache Kafka
Map Kafka concepts to npayload and migrate your event streaming infrastructure
If you are coming from Apache Kafka, many concepts will feel familiar. npayload shares Kafka's publish-subscribe model and supports consumer groups, offsets, and ordered delivery. This guide maps your existing Kafka knowledge to npayload and walks through a practical migration path.
Concept mapping
| Kafka | npayload | Notes |
|---|---|---|
| Topic | Channel | Direct 1:1 mapping. Channels support the same publish/subscribe semantics. |
| Partition | Partition key | npayload auto-partitions internally. No manual partition count to configure. |
| Consumer group | Consumer group | Similar semantics. npayload distributes messages across group members automatically. |
| Offset | Consumer offset | Managed automatically per stream consumer. No manual offset commits needed. |
| Producer | SDK publish | npayload.messages.publish() replaces producer.send(). |
| Consumer (pull) | Stream | Pull-based consumption with automatic offset tracking. |
| Consumer (push) | Subscription (webhook) | Push-based delivery to an HTTP endpoint. No consumer process to run. |
| Schema Registry | Event catalogue | Built-in schema versioning and validation. No separate service to deploy. |
| Kafka Connect | Connectors | npayload includes a built-in Kafka connector for bridge mode. |
| Zookeeper / KRaft | N/A | No cluster coordination layer needed. npayload is fully managed. |
| Log compaction | Compacted channel | Set channel type to compacted for latest-value-wins semantics per key. |
| Transactions | Transactional publish | Atomic multi-channel publish with npayload.messages.publishBatch(). |
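To make the last row concrete, here is a sketch of an atomic multi-channel publish. Only the method name `npayload.messages.publishBatch()` comes from the table above; the argument shape is an assumption modeled on the single-message `publish()` call shown later in this guide.

```typescript
import { NPayload } from "@npayload/node";

const npayload = new NPayload({
  appId: "your-app-id",
  apiKey: "your-api-key",
});

// Hypothetical sketch: publish to two channels atomically, so either both
// messages are delivered or neither is. The `messages` array shape is an
// assumption based on the publish() API shown in Step 4.
await npayload.messages.publishBatch({
  messages: [
    {
      channel: "order-events",
      routingKey: "ord_1042",
      payload: { type: "order.created", orderId: "ord_1042" },
    },
    {
      channel: "billing-events",
      routingKey: "ord_1042",
      payload: { type: "invoice.requested", orderId: "ord_1042" },
    },
  ],
});
```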
Key differences
No infrastructure to manage
Kafka requires broker provisioning, partition planning, replication factor tuning, and ongoing cluster maintenance. npayload is fully managed. There are no brokers, no partition rebalancing, and no disk capacity alerts.
Push and pull delivery
Kafka is pull-only: consumers poll for messages. npayload supports both push-based delivery (webhooks) and pull-based consumption (streams). Webhook delivery eliminates the need to run long-lived consumer processes for many use cases.
Built-in reliability features
npayload includes dead letter queues, circuit breakers, and configurable retry policies per subscription. In Kafka, these patterns require custom implementation or additional tooling.
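To make the retry behavior concrete, the sketch below computes the delay schedule for an exponential backoff policy like the `retryPolicy` used in Step 3 (`maxRetries: 5`, `backoffMultiplier: 2`). The one-second base delay is an illustrative assumption, not a documented npayload default.

```typescript
// Compute the delay before each retry attempt under exponential backoff.
// baseMs is an assumption for illustration; npayload's actual base delay
// is not specified in this guide.
function retryDelaysMs(
  maxRetries: number,
  backoffMultiplier: number,
  baseMs = 1000
): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    delays.push(baseMs * backoffMultiplier ** attempt);
  }
  return delays;
}

// With maxRetries: 5 and backoffMultiplier: 2, a failing delivery is
// retried after 1s, 2s, 4s, 8s, and 16s before the message is routed to
// the dead letter queue.
console.log(retryDelaysMs(5, 2)); // [1000, 2000, 4000, 8000, 16000]
```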
No partition count decisions
Kafka requires you to choose a partition count at topic creation time, and changing it later can break ordering guarantees. npayload handles partitioning internally based on routing keys, so you never need to make this decision upfront.
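The routing-key idea can be illustrated with a generic key-hashing sketch. This is not npayload's actual internal partitioning algorithm (the platform does not expose it); it only shows why messages with the same routing key preserve their relative order.

```typescript
// Generic key-based partitioning: a deterministic hash of the routing key
// selects the partition, so every message with the same key lands in the
// same partition and stays ordered. npayload does the equivalent
// internally; you only supply the routing key.
function partitionFor(routingKey: string, partitionCount: number): number {
  let hash = 2166136261; // FNV-1a offset basis
  for (let i = 0; i < routingKey.length; i++) {
    hash ^= routingKey.charCodeAt(i);
    hash = Math.imul(hash, 16777619) >>> 0; // FNV-1a prime, kept unsigned
  }
  return hash % partitionCount;
}

// The same order ID always maps to the same partition:
partitionFor("ord_1042", 16) === partitionFor("ord_1042", 16); // always true
```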
Built-in encryption
npayload offers three privacy modes (standard, end-to-end, and hybrid) without additional infrastructure. Kafka encryption typically requires TLS configuration across all brokers and clients.
Migration steps
Step 1: Create npayload channels for your Kafka topics
For each Kafka topic, create a corresponding npayload channel:
import { NPayload } from "@npayload/node";

const npayload = new NPayload({
  appId: "your-app-id",
  apiKey: "your-api-key",
});

// Create channels matching your Kafka topics
await npayload.channels.create({
  name: "order-events",
  description: "Order lifecycle events",
});

await npayload.channels.create({
  name: "user-activity",
  description: "User activity tracking",
});

If you use Kafka log compaction, create a compacted channel:
await npayload.channels.create({
  name: "user-preferences",
  type: "compacted",
  description: "Latest user preferences per user ID",
});

Step 2: Set up the Kafka connector for dual-write
npayload's built-in Kafka connector lets you mirror messages from your existing Kafka cluster into npayload channels during migration. This avoids custom dual-write code and ensures both systems stay in sync.
await npayload.connectors.create({
  type: "kafka",
  config: {
    bootstrapServers: "kafka-broker-1:9092,kafka-broker-2:9092",
    topics: ["order-events", "user-activity"],
    groupId: "npayload-bridge",
    security: {
      protocol: "SASL_SSL",
      mechanism: "PLAIN",
      username: process.env.KAFKA_USERNAME,
      password: process.env.KAFKA_PASSWORD,
    },
  },
});

The Kafka connector consumes from your existing Kafka topics and publishes to the matching npayload channels. This lets you migrate consumers without changing producers first.
Step 3: Migrate consumers one by one
Replace each Kafka consumer with an npayload subscription. Choose between webhook delivery (push) and stream consumption (pull) based on your use case.
Webhook delivery (recommended for most use cases):
await npayload.subscriptions.create({
  channelName: "order-events",
  endpoint: "https://api.example.com/webhooks/orders",
  retryPolicy: {
    maxRetries: 5,
    backoffMultiplier: 2,
  },
});

Stream consumption (for high-throughput pull-based processing):
const stream = await npayload.streams.create({
  channelName: "order-events",
  consumerGroup: "order-processor",
});

// Read messages in batches
const messages = await npayload.streams.read({
  streamId: stream.id,
  batchSize: 100,
});

Step 4: Switch producers to the npayload SDK
Once consumers are verified on npayload, update your producers:
Before (Kafka):
const { Kafka } = require("kafkajs");

const kafka = new Kafka({
  clientId: "order-service",
  brokers: ["kafka-1:9092", "kafka-2:9092"],
});

const producer = kafka.producer();
await producer.connect();

await producer.send({
  topic: "order-events",
  messages: [
    {
      key: orderId,
      value: JSON.stringify({ type: "order.created", orderId, amount }),
    },
  ],
});

After (npayload):
import { NPayload } from "@npayload/node";

const npayload = new NPayload({
  appId: "your-app-id",
  apiKey: "your-api-key",
});

await npayload.messages.publish({
  channel: "order-events",
  routingKey: orderId,
  payload: { type: "order.created", orderId, amount },
});

Step 5: Remove the Kafka connector and decommission
Once all producers and consumers have been migrated:
- Verify message delivery in npayload is stable (check DLQ for failures).
- Remove the Kafka connector.
- Shut down Kafka consumer groups.
- Decommission Kafka brokers.
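The first verification step can be scripted. The `deadLetters.list()` call below is a hypothetical method name used for illustration only; consult the DLQ documentation for the actual API.

```typescript
import { NPayload } from "@npayload/node";

const npayload = new NPayload({
  appId: "your-app-id",
  apiKey: "your-api-key",
});

// Hypothetical pre-decommission check: confirm the dead letter queue is
// empty before removing the Kafka connector. The deadLetters.list()
// method name is an assumption for illustration.
const dlq = await npayload.deadLetters.list({ channelName: "order-events" });
if (dlq.length > 0) {
  throw new Error(
    `Resolve ${dlq.length} dead-lettered messages before decommissioning Kafka`
  );
}
```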
What you gain
After migrating from Kafka to npayload, you gain:
- Zero infrastructure management. No brokers, no Zookeeper/KRaft, no partition rebalancing.
- Webhook delivery. Push messages to HTTP endpoints without running consumer processes.
- Built-in DLQ and circuit breaker. Failed messages are automatically routed to dead letter queues with configurable retry policies.
- Event catalogue. Schema versioning and validation without a separate Schema Registry service.
- Encryption modes. Standard, end-to-end, and hybrid encryption built in.
- ASP protocol. Agent Session Protocol for structured communication between autonomous systems.
- Marketplace. Publish and discover event streams across organizations.
Next steps
- Connectors guide for bridging npayload with Kafka and other systems
- Streams concept for pull-based consumption patterns
- Consumer groups concept for load-balanced message processing