OPC-UA Pub/Sub vs Client/Server: Choosing the Right Pattern for Your Plant Floor [2026]

If you've spent any time connecting PLCs to cloud dashboards, you've run into OPC-UA. The protocol dominates industrial interoperability conversations — and for good reason. Its information model, security architecture, and cross-vendor compatibility make it the lingua franca of modern manufacturing IT.
But here's what trips up most engineers: OPC-UA isn't a single communication pattern. It's two fundamentally different paradigms sharing one information model. Client/server has been the workhorse since OPC-UA's inception. Pub/sub, ratified in Part 14 of the specification, is the newer pattern designed for one-to-many data distribution. Picking the wrong one can mean the difference between a system that scales to 500 machines and one that falls over at 50.
Let's break down when you need each, how they actually behave on the wire, and where the real-world performance boundaries lie.
The Client/Server Model: What You Already Know (and What You Don't)
OPC-UA client/server follows a familiar request-response paradigm. A client establishes a secure channel to a server, opens a session, creates one or more subscriptions, and receives notifications when monitored item values change.
How Subscriptions Actually Work
This is where many engineers have an incomplete mental model. A subscription isn't a simple "tell me when X changes." It's a multi-layered construct:
- Monitored Items — Each tag you want to observe becomes a monitored item with its own sampling interval (how often the server checks the underlying data source) and queue size (how many values to buffer between publish cycles).
- Publishing Interval — The subscription itself has a publishing interval that determines how frequently the server packages up change notifications and sends them to the client. This is independent of the sampling interval.
- Keep-alive — If no data changes occur within the publishing interval, the server sends a keep-alive message. After a configurable number of missed keep-alives, the subscription is considered dead.
The key insight: sampling and publishing are decoupled. You might sample a temperature sensor at 100ms but only publish aggregated notifications every 1 second. This reduces network traffic without losing fidelity at the source.
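The sampling/publishing decoupling can be made concrete with a short sketch. This models the mechanics only, not any particular SDK's API; the class name and methods are illustrative assumptions:

```python
from collections import deque

class MonitoredItem:
    """Illustrative model of an OPC-UA monitored item: values are
    queued at the sampling interval and drained once per publishing
    interval. A sketch of the mechanics, not a real SDK class."""
    def __init__(self, sampling_interval_ms, queue_size):
        self.sampling_interval_ms = sampling_interval_ms
        self.queue = deque(maxlen=queue_size)  # oldest values drop on overflow

    def sample(self, value):
        self.queue.append(value)

    def drain(self):
        """Called once per publishing interval by the subscription."""
        values = list(self.queue)
        self.queue.clear()
        return values

# Sample at 100 ms, publish every 1000 ms: ten samples per notification
item = MonitoredItem(sampling_interval_ms=100, queue_size=10)
for t in range(10):                 # one publishing interval's worth
    item.sample(20.0 + 0.1 * t)     # simulated temperature readings
notification = item.drain()
print(len(notification))            # 10 values delivered in one message
```

Note the role of `queue_size`: if the queue is smaller than the number of samples per publish cycle, the oldest values are silently discarded, which is exactly the fidelity trade-off you tune in a real subscription.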
Real-World Performance Characteristics
In practice, a single OPC-UA server can typically handle:
- 50-200 concurrent client sessions (depending on hardware)
- 5,000-50,000 monitored items per server across all sessions
- Publishing intervals down to ~50ms before CPU becomes the bottleneck
- Secure channel negotiation times of 200-800ms, depending on security policy
The bottleneck isn't usually bandwidth — it's the server's CPU. Every subscription requires the server to maintain state, evaluate sampling queues, and serialize notification messages for each connected client independently. This is the fan-out problem.
When Client/Server Breaks Down
Consider a plant with 200 machines, each exposing 100 tags. A central historian, a real-time dashboard, an analytics engine, and an alarm system all need access. That's four clients × 200 servers × 100 tags: 80,000 monitored items plant-wide.
Every server must maintain four independent subscription contexts. Every data change gets serialized and transmitted four times — once per client. The server doesn't know or care that all four clients want the same data. It can't share work between them.
At moderate scale, this works fine. At plant-wide scale with hundreds of devices and dozens of consumers, you're asking each embedded OPC-UA server on a PLC to handle work that grows linearly with the number of consumers. That's the architectural tension pub/sub was designed to resolve.
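The back-of-envelope arithmetic behind that tension is worth writing out. The counts below come from the scenario above; the pub/sub comparison assumes one serialized dataset message per publisher per publishing interval, as described later in this article:

```python
clients = 4          # historian, dashboard, analytics engine, alarm system
servers = 200        # one embedded OPC-UA server per machine
tags_per_server = 100

# Client/server: every server maintains an independent subscription
# context per connected client, so server-side work scales with clients.
monitored_items_per_server = clients * tags_per_server
plant_wide_contexts = clients * servers * tags_per_server
print(monitored_items_per_server)    # 400 contexts on each embedded server
print(plant_wide_contexts)           # 80000 monitored items plant-wide

# Pub/sub: each publisher serializes its dataset once per publishing
# interval, regardless of how many subscribers are listening.
messages_per_interval = servers
print(messages_per_interval)         # 200, independent of consumer count
```

Adding a fifth consumer raises the client/server numbers by 25% but leaves the pub/sub message count untouched.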
The Pub/Sub Model: How It Actually Differs
OPC-UA Pub/Sub fundamentally changes the relationship between data producers and consumers. Instead of maintaining per-client connections, a publisher emits data to a transport (typically UDP multicast or an MQTT broker) and subscribers independently consume from that transport.
The Wire Format: UADP vs JSON
Pub/sub messages can be encoded in two ways:
UADP (UA Datagram Protocol) — A compact binary encoding optimized for bandwidth-constrained networks. A typical dataset message with 50 variables fits in ~400 bytes. Headers contain security metadata, sequence numbers, and writer group identifiers. This is the format you want for real-time control loops.
JSON encoding — Human-readable, easier to debug, but 3-5x larger on the wire. Useful when messages need to traverse IT infrastructure (firewalls, API gateways, log aggregators) where binary inspection is impractical.
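A quick size comparison makes the difference tangible. This is a simplified stand-in, not the actual UADP wire format: a small binary header plus 50 packed doubles versus the same dataset as JSON with field names on the wire. Real JSON NetworkMessages carry additional metadata, which pushes the ratio higher than this toy shows:

```python
import json
import struct

# 50 simulated process values, the payload of one dataset message
values = [20.0 + 0.25 * i for i in range(50)]

# Binary-style encoding (illustrative stand-in for UADP): a 6-byte
# header (writer group id + sequence number) plus 8-byte doubles.
binary = struct.pack("<HI", 42, 1001) + struct.pack("<50d", *values)

# JSON encoding of the same dataset, field names included on the wire
json_msg = json.dumps({
    "WriterGroupId": 42,
    "SequenceNumber": 1001,
    "Payload": {f"Tag{i:02d}": v for i, v in enumerate(values)},
}).encode()

print(len(binary))                 # 406 bytes, close to the ~400 cited above
print(len(json_msg) > len(binary)) # True: JSON is always larger here
```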
Publisher Configuration
A publisher organizes its output into a hierarchy:
Publisher
└── WriterGroup (publishing interval, transport settings)
└── DataSetWriter (maps to a PublishedDataSet)
└── PublishedDataSet (the actual variables)
Each WriterGroup controls the publishing cadence and encoding. A single publisher might have one WriterGroup at 100ms for critical process variables and another at 10 seconds for auxiliary measurements.
DataSetWriters bind the data model to the transport. They define which variables go into which messages and how they're sequenced.
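The hierarchy above can be sketched as plain data structures. The class and field names mirror the Part 14 concepts, not any particular SDK's configuration API, and the node ids are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PublishedDataSet:
    name: str
    variables: List[str]            # node ids of the published variables

@dataclass
class DataSetWriter:
    dataset: PublishedDataSet       # binds one dataset to the transport

@dataclass
class WriterGroup:
    publishing_interval_ms: int     # cadence shared by all writers in the group
    encoding: str                   # "uadp" or "json"
    writers: List[DataSetWriter] = field(default_factory=list)

@dataclass
class Publisher:
    publisher_id: int
    groups: List[WriterGroup] = field(default_factory=list)

# One fast group for critical process variables, one slow group for auxiliaries
fast = WriterGroup(100, "uadp", [DataSetWriter(PublishedDataSet(
    "process", ["ns=2;s=Temp", "ns=2;s=Pressure"]))])
slow = WriterGroup(10_000, "uadp", [DataSetWriter(PublishedDataSet(
    "auxiliary", ["ns=2;s=CabinetTemp"]))])
publisher = Publisher(publisher_id=1, groups=[fast, slow])
print(len(publisher.groups))        # 2 groups, two independent cadences
```

The useful property to notice: the publishing interval lives on the WriterGroup, so moving a variable between cadences is a matter of moving its dataset between groups, with no change to the variable itself.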
Subscriber Discovery
One of pub/sub's elegant features is publisher-subscriber decoupling. A subscriber doesn't need to know the publisher's address. It subscribes to a multicast group or MQTT topic and discovers available datasets from the messages themselves. DataSet metadata (field names, types, engineering units) can be embedded in the message or discovered via a separate metadata channel.
In practice, this means you can add a new analytics consumer to a running plant network without touching a single PLC configuration. The publisher doesn't even know the new subscriber exists.
Head-to-Head: The Numbers That Matter
| Dimension | Client/Server | Pub/Sub (UADP/UDP) | Pub/Sub (JSON/MQTT) |
|---|---|---|---|
| Latency (typical) | 5-50ms | 1-5ms | 10-100ms |
| Connection setup | 200-800ms | None (connectionless) | Broker-dependent |
| Bandwidth per 100 tags | ~2-4 KB/s | ~0.5-1 KB/s | ~3-8 KB/s |
| Max consumers per dataset | ~50 practical | Unlimited (multicast) | Broker-limited |
| Security | Session-level encryption | Message-level signing/encryption | TLS + message-level |
| Firewall traversal | Easy (single TCP) | Hard (multicast) | Easy (TCP to broker) |
| Deterministic timing | No | Yes (with TSN) | No |
The Latency Story
Client/server latency is bounded by the publishing interval plus network round-trip plus serialization overhead. The server must evaluate all monitored items in the subscription, package the notification, encrypt it, and transmit it — for each client independently.
Pub/sub with UADP over UDP can achieve sub-millisecond delivery when combined with Time-Sensitive Networking (TSN). The publisher serializes the dataset once, and the network fabric handles delivery to all subscribers simultaneously. There's no per-subscriber work on the publisher side.
Security Trade-offs
Client/server has the more mature security story. Each session negotiates its own secure channel with certificate-based authentication, message signing, and encryption. The server knows exactly who's connected and can enforce fine-grained access control.
Pub/sub security is message-based. Publishers sign and optionally encrypt messages using security keys distributed through a Security Key Server (SKS). Subscribers must obtain the appropriate keys to decrypt and verify messages. This works, but key distribution and rotation add operational complexity that client/server doesn't have.
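The signing half of that story can be sketched with a shared group key. In a real deployment the key comes from the SKS and rotates on a schedule; here it is generated locally purely for illustration, and the 32-byte trailing-HMAC layout is an assumption of this sketch, not the UADP security header format:

```python
import hashlib
import hmac
import os

# Shared group key: in production, distributed and rotated by the
# Security Key Server; generated locally here for the sketch.
group_key = os.urandom(32)

def sign_message(payload: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 signature to the payload."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(message: bytes, key: bytes) -> bytes:
    """Check the trailing signature; raise if it does not match."""
    payload, sig = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature mismatch: wrong or rotated key")
    return payload

wire = sign_message(b'{"Temp": 21.5}', group_key)
print(verify_message(wire, group_key))   # payload verifies and round-trips
```

The operational complexity mentioned above lives in that `group_key` line: every subscriber must fetch the current key, and a rotation that reaches the publisher before the subscribers produces exactly the "signature mismatch" failure modeled here.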
Practical Architecture Patterns
Pattern 1: Client/Server for Configuration, Pub/Sub for Telemetry
The most common hybrid approach uses client/server for interactive operations — reading configuration parameters, writing setpoints, browsing the address space, acknowledging alarms — while pub/sub handles the high-frequency telemetry stream.
This plays to each model's strengths. Configuration operations are infrequent, require acknowledgment, and benefit from the request/response guarantee. Telemetry is high-volume, one-directional, and needs to scale to many consumers.
Pattern 2: Edge Aggregation with Pub/Sub Fan-out
Deploy an edge gateway that connects to PLCs via client/server (or native protocols like Modbus or EtherNet/IP), normalizes the data, and re-publishes it via OPC-UA pub/sub. The gateway absorbs the per-device connection complexity while providing a clean, scalable distribution layer.
This is exactly the pattern that platforms like machineCDN implement — the edge software handles the messy reality of multi-protocol PLC communication while providing a unified data stream that any number of consumers can tap into.
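The gateway's distribution layer reduces to a fan-out bus: one upstream update in, one delivery per registered consumer out. This toy in-process version (all names hypothetical) shows why per-device work stays constant as consumers are added:

```python
from collections import defaultdict

class FanOutBus:
    """Toy stand-in for an edge gateway's pub/sub layer: a single
    upstream update is delivered to every registered consumer."""
    def __init__(self):
        self.consumers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.consumers[topic].append(callback)

    def publish(self, topic, value):
        for cb in self.consumers[topic]:
            cb(topic, value)

bus = FanOutBus()
received = []
# Historian and dashboard both tap the same normalized stream
bus.subscribe("plant/line3/temp", lambda t, v: received.append(("hist", v)))
bus.subscribe("plant/line3/temp", lambda t, v: received.append(("dash", v)))
# The gateway polls the PLC once (client/server side) and publishes once
bus.publish("plant/line3/temp", 21.5)
print(received)   # both consumers got the value from one upstream read
```

The PLC sees exactly one client regardless of how many consumers register, which is the whole point of the pattern.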
Pattern 3: MQTT Broker as Pub/Sub Transport
If your plant network can't support UDP multicast (many can't, due to switch configurations or security policies), use an MQTT broker as the pub/sub transport. The publisher sends OPC-UA pub/sub messages (JSON-encoded) to MQTT topics. Subscribers consume from those topics.
You lose the latency advantage of raw UDP, but you gain:
- Standard IT infrastructure compatibility
- Built-in persistence (retained messages)
- Existing monitoring and management tools
- Firewall-friendly TCP connections
The overhead is measurable — expect 10-50ms additional latency per hop through the broker — but for most monitoring and analytics use cases, this is perfectly acceptable.
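Constructing the JSON payload for that path looks roughly like this. The field names follow the spirit of the Part 14 JSON mapping, but check what your SDK actually emits; the topic layout and ids are hypothetical, and the MQTT publish step itself is omitted since it needs a live broker:

```python
import json

def build_network_message(publisher_id, writer_id, seq, payload):
    """Wrap one DataSetMessage in a JSON NetworkMessage envelope
    (field names approximate the Part 14 JSON mapping)."""
    return json.dumps({
        "MessageId": f"{publisher_id}-{seq}",
        "MessageType": "ua-data",
        "PublisherId": str(publisher_id),
        "Messages": [{
            "DataSetWriterId": writer_id,
            "SequenceNumber": seq,
            "Payload": payload,
        }],
    })

topic = "opcua/json/data/plant1/line3"    # hypothetical topic layout
msg = build_network_message(17, 1, 1001, {"Temp": 21.5, "Pressure": 3.2})
# With a real broker you would hand `msg` to an MQTT client's publish call
print(json.loads(msg)["MessageType"])     # ua-data
```

Because the envelope is self-describing (publisher id, writer id, sequence number), any subscriber on the topic can consume it without prior knowledge of the publisher, which is the decoupling property discussed earlier.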
Migration Strategy: Moving from Pure Client/Server
If you're running a pure client/server architecture today and hitting scale limits, don't rip and replace. Migrate incrementally:
1. Identify high-fan-out datasets — Which datasets have 3+ consumers? Those are your first pub/sub candidates.
2. Deploy an edge pub/sub gateway — Stand up a gateway that subscribes to your existing OPC-UA servers (via client/server) and republishes via pub/sub. Existing consumers continue to work unchanged.
3. Migrate consumers one at a time — Move each consumer from direct server connections to the pub/sub stream. Monitor for data quality and latency differences.
4. Push pub/sub to the source — Once proven, configure PLCs and servers that support native pub/sub to publish directly, eliminating the gateway hop for those devices.
When to Use Which: The Decision Matrix
Choose Client/Server when:
- You need request/response semantics (writes, method calls)
- Consumer count is small and stable (< 10 per server)
- You need to browse and discover the address space interactively
- Security audit requirements demand per-session access control
- Your network doesn't support multicast
Choose Pub/Sub when:
- You have many consumers for the same dataset
- You need deterministic, low-latency delivery (especially with TSN)
- Publishers are resource-constrained (embedded PLCs)
- You're distributing data across network boundaries (IT/OT convergence)
- You want to decouple publisher lifecycle from consumer lifecycle
Choose both when:
- You're building a plant-wide platform (which describes most real deployments)
- Configuration and telemetry have different reliability requirements
- You need to scale consumers independently of device count
The Future: TSN + Pub/Sub
The convergence of OPC-UA Pub/Sub with IEEE 802.1 Time-Sensitive Networking is arguably the most significant development in industrial networking since Ethernet hit the plant floor. TSN provides guaranteed bandwidth allocation, bounded latency, and time synchronization at the network switch level. Combined with UADP encoding, this enables OPC-UA to replace proprietary fieldbus protocols in deterministic control applications.
We're not there yet for most brownfield deployments. TSN-capable switches are expensive, and PLC vendor support is still rolling out. But for greenfield installations making architecture decisions today, TSN-ready pub/sub infrastructure is worth designing for.
Getting Started
If you're evaluating OPC-UA patterns for your plant:
1. Audit your current fan-out — Count how many consumers connect to each data source. If any source serves 5+ consumers, pub/sub will reduce its load.
2. Test your network for multicast — Many industrial Ethernet switches support multicast, but it may not be configured. Work with your network team to test IGMP snooping and multicast routing.
3. Start with MQTT transport — If multicast isn't viable, MQTT-based pub/sub is the lowest-friction path. You can always migrate to UADP/UDP later.
4. Consider an edge platform — Platforms like machineCDN handle the protocol translation and data normalization layer, letting you focus on the analytics and business logic rather than wrestling with transport plumbing.
The choice between client/server and pub/sub isn't either/or. It's understanding which pattern serves which data flow — and designing your architecture accordingly.