
32 posts tagged with "industrial-protocols"


EtherNet/IP and CIP Objects Explained: Implicit vs Explicit Messaging for IIoT [2026]

· 12 min read

If you've spent any time integrating Allen-Bradley PLCs, Rockwell automation cells, or Micro800-class controllers into a modern IIoT stack, you've encountered EtherNet/IP. It's the most widely deployed industrial Ethernet protocol in North America, yet the specifics of how it actually moves data — CIP objects, implicit vs explicit messaging, the scanner/adapter relationship — remain poorly understood by many engineers who use it daily.

This guide breaks down EtherNet/IP from the perspective of someone who has built edge gateways that communicate with these controllers in production. No marketing fluff, just the protocol mechanics that matter when you're writing code that reads tags from a PLC at sub-second intervals.

EtherNet/IP CIP messaging architecture

What EtherNet/IP Actually Is (And Isn't)

EtherNet/IP stands for EtherNet/Industrial Protocol — not "Ethernet IP" as in TCP/IP. The "IP" is intentionally capitalized to distinguish it. At its core, EtherNet/IP is an application-layer protocol that runs CIP (Common Industrial Protocol) over standard TCP/IP and UDP/IP transport.

The key architectural insight: CIP is the protocol. EtherNet/IP is just one of its transport layers. CIP also runs over DeviceNet (CAN bus) and ControlNet (token-passing). This means the object model, service codes, and data semantics are identical whether you're talking to a device over Ethernet, a CAN network, or a deterministic control network.

For IIoT integration, this matters because your edge gateway's parsing logic for CIP objects translates directly across all three physical layers — even if 90% of modern deployments use EtherNet/IP exclusively.

The CIP Object Model

CIP organizes everything as objects. Every device on the network is modeled as a collection of object instances, each with attributes, services, and behaviors. Understanding this hierarchy is essential for programmatic tag access.

Object Addressing

Every piece of data in a CIP device is addressed by three coordinates:

| Level | Description | Example |
|---|---|---|
| Class | The type of object | Class 0x04 = Assembly Object |
| Instance | A specific occurrence of that class | Instance 1 = Output assembly |
| Attribute | A property of that instance | Attribute 3 = Data bytes |

When your gateway creates a tag path like protocol=ab-eip&gateway=192.168.1.10&cpu=micro800&name=temperature_setpoint, the underlying CIP request resolves that symbolic tag name into a class/instance/attribute triplet.

Essential CIP Objects for IIoT

Here are the objects you'll interact with most frequently:

Identity Object (Class 0x01) — Every CIP device has one. Vendor ID, device type, serial number, product name. This is your first read when auto-discovering devices on a network. For fleet management, querying this object gives you hardware revision, firmware version, and a unique serial number that serves as a device fingerprint.

Message Router (Class 0x02) — Routes incoming requests to the correct object. You never address it directly, but understanding that it exists explains why a single TCP connection can multiplex requests to dozens of different objects without confusion.

Assembly Object (Class 0x04) — This is where I/O data lives. Assemblies aggregate multiple data points into a single, contiguous block. When you configure implicit messaging, you're essentially subscribing to an assembly object that the PLC updates at a fixed rate.

Connection Manager (Class 0x06) — Manages the lifecycle of connections. Forward Open, Forward Close, and Large Forward Open requests all go through this object. When your edge gateway opens a connection to read 50 tags, the Connection Manager allocates resources and returns a connection ID.

Implicit vs Explicit Messaging: The Critical Distinction

This is where most IIoT integration mistakes happen. EtherNet/IP supports two fundamentally different messaging paradigms, and choosing the wrong one leads to either wasted bandwidth or missed data.

Explicit Messaging (Request/Response)

Explicit messaging works like HTTP: your gateway sends a request, the PLC processes it, and sends a response. It uses TCP for reliability.

When to use explicit messaging:

  • Reading configuration parameters
  • Writing setpoints or recipe values
  • Querying device identity and diagnostics
  • Any operation where you need a guaranteed response
  • Tag reads at intervals > 100ms

The tag read flow:

Gateway                              PLC (Micro800)
   |                                    |
   |--- TCP Connect (port 44818) ------>|
   |<-- TCP Accept ---------------------|
   |                                    |
   |--- Register Session -------------->|
   |<-- Session Handle: 0x1A2B ---------|
   |                                    |
   |--- Read Tag Service --------------->|
   |      (class 0x6B, service 0x4C)    |
   |      tag: "blender_speed"          |
   |<-- Response: FLOAT 1250.5 ---------|
   |                                    |
   |--- Read Tag Service --------------->|
   |      tag: "motor_current"          |
   |<-- Response: FLOAT 12.3 -----------|

Each tag read is a separate CIP request encapsulated in a TCP packet. For reading dozens of tags, this adds up — each round trip includes TCP overhead, CIP encapsulation, and PLC processing time.

Performance characteristics:

  • Typical round-trip: 5–15ms per tag on a local network
  • 50 tags × 10ms = 500ms minimum cycle time
  • Connection timeout: typically 2000ms (configurable)
  • Maximum concurrent sessions: depends on PLC model (Micro800: ~8–16)

Implicit Messaging (I/O Data)

Implicit messaging is a scheduled exchange of I/O data over UDP. The UDP transport is connectionless, but the exchange itself runs over a CIP connection established with a Forward Open; once the connection is open, the PLC pushes data at a fixed rate without being asked — think of it as a PLC-initiated publish.

When to use implicit messaging:

  • Continuous process monitoring (temperature, pressure, flow)
  • Motion control feedback
  • Any data that changes frequently (< 100ms intervals)
  • High tag counts where polling overhead is unacceptable

The connection flow:

Gateway                              PLC
   |                                    |
   |--- Forward Open (TCP) ------------>|
   |      RPI: 50ms                     |
   |      Connection type: Point-to-Point
   |      O→T Assembly: Instance 100    |
   |      T→O Assembly: Instance 101    |
   |<-- Forward Open Response ----------|
   |      Connection ID: 0x4F2E         |
   |                                    |
   |<== I/O Data (UDP, every 50ms) =====|
   |<== I/O Data (UDP, every 50ms) =====|
   |<== I/O Data (UDP, every 50ms) =====|
   |         ...continuous...           |

The Requested Packet Interval (RPI) is specified in microseconds during the Forward Open. Common values:

  • 10ms (10,000 μs) — motion control
  • 50ms — process monitoring
  • 100ms — general I/O
  • 500ms–1000ms — slow-changing values (temperature, level)

Critical detail: The data format of implicit messages is defined by the assembly object, not by the message itself. Your gateway must know the assembly layout in advance — which bytes correspond to which tags, their data types, and byte ordering. There's no self-describing metadata in the UDP packets.
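Since the packets carry no metadata, the layout must live in gateway configuration. A minimal Python sketch of this static decode, with an invented assembly layout (the field names, offsets, and types are illustrative, standing in for what the PLC program actually defines):

```python
import struct

# Hypothetical layout for a T→O (input) assembly: the real layout comes
# from the PLC program's assembly configuration, not from the wire.
LAYOUT = [
    ("motor_speed",   0, "<f"),   # float32, little-endian, bytes 0-3
    ("motor_current", 4, "<f"),   # float32, bytes 4-7
    ("alarm_word",    8, "<H"),   # uint16, bytes 8-9
    ("running",      10, "<B"),   # bool packed as one byte, byte 10
]

def decode_assembly(payload: bytes) -> dict:
    """Decode a raw implicit-messaging payload using the static layout."""
    values = {}
    for name, offset, fmt in LAYOUT:
        (values[name],) = struct.unpack_from(fmt, payload, offset)
    return values

# An 11-byte payload such as the PLC might push every RPI:
raw = struct.pack("<ffHB", 1250.5, 12.3, 0x0004, 1)
print(decode_assembly(raw))
```

If the PLC program changes the assembly layout, this table must change with it; a version check at connection time is cheap insurance.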

Scanner/Adapter Architecture

In EtherNet/IP terminology:

  • Scanner = the device that initiates connections and consumes data (your edge gateway, HMI, or supervisory PLC)
  • Adapter = the device that produces data (field I/O modules, drives, instruments)

A PLC can act as both: it's an adapter to the SCADA system above it, and a scanner to the I/O modules below it.

What This Means for IIoT Gateways

Your edge gateway is a scanner. When designing its communication stack, you need to handle:

  1. Session registration — Before any CIP communication, register a session with the target device. This returns a session handle that must be included in every subsequent request. Session handles are 32-bit integers; manage them carefully across reconnects.

  2. Connection management — For explicit messaging, a single TCP connection can carry multiple CIP requests. For implicit messaging, each connection requires a Forward Open with specific parameters. Plan your connection budget — Micro800 controllers support 8–16 simultaneous connections depending on firmware.

  3. Tag path resolution — Symbolic tag names (like B3_0_0_blender_st_INT) must be resolved to CIP paths. For Micro800 controllers, the tag path format is:

    protocol=ab-eip&gateway=<ip>&cpu=micro800&elem_count=<n>&elem_size=<s>&name=<tagname>

    Where elem_size is 1 (bool/int8), 2 (int16), or 4 (int32/float).

  4. Array handling — CIP supports reading arrays with a start index and element count. A single request can read up to 255 elements. For arrays, the tag path includes the index: tagname[start_index].
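Putting steps 3 and 4 together, the attribute string can be assembled mechanically. A small Python sketch (the helper name and validation rules are mine; the path format is the one shown above):

```python
def micro800_tag_path(gateway: str, name: str,
                      elem_size: int, elem_count: int = 1) -> str:
    """Build a Micro800 tag path string in the format shown above.

    For arrays, include the start index in the tag name, e.g. "data[10]".
    """
    if elem_size not in (1, 2, 4):
        raise ValueError("elem_size must be 1 (bool/int8), 2 (int16), or 4 (int32/float)")
    if not 1 <= elem_count <= 255:
        raise ValueError("a single request can read at most 255 elements")
    return (f"protocol=ab-eip&gateway={gateway}&cpu=micro800"
            f"&elem_count={elem_count}&elem_size={elem_size}&name={name}")

print(micro800_tag_path("192.168.1.10", "temperature_setpoint", 4))
print(micro800_tag_path("192.168.1.10", "data[10]", 2, elem_count=20))
```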

Data Types and Byte Ordering

CIP uses little-endian byte ordering for all multi-byte integer types, which matches the native order of x86 (and most ARM) hosts, so byte swapping is rarely needed. However, when tag values arrive at your gateway, the handling still depends on the data type:

| CIP Type | Size | Byte Order | Notes |
|---|---|---|---|
| BOOL | 1 byte | N/A | 0x00 = false, 0x01 = true |
| SINT / USINT | 1 byte | N/A | SINT: -128 to 127; USINT: 0 to 255 |
| INT (int16) | 2 bytes | Little-endian | -32,768 to 32,767 |
| DINT (int32) | 4 bytes | Little-endian | Indexed at offset × 4 |
| UINT (uint16) | 2 bytes | Little-endian | 0 to 65,535 |
| UDINT (uint32) | 4 bytes | Little-endian | Indexed at offset × 4 |
| REAL (float) | 4 bytes | IEEE 754 | Indexed at offset × 4 |

A common gotcha: When reading 32-bit values, the element offset in the response buffer is index × 4 bytes from the start. For 16-bit values, it's index × 2. Getting this wrong silently produces garbage values that look plausible — a classic source of phantom sensor readings.
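The offset arithmetic can be made concrete with Python's struct module; this sketch shows both the correct read and the garbage that a wrong element size produces:

```python
import struct

def read_element(buf: bytes, index: int, elem_size: int, signed: bool = False):
    """Extract integer element `index` from a little-endian response buffer.

    The byte offset is index * elem_size; using the wrong elem_size reads
    adjacent memory and silently yields plausible-looking garbage.
    """
    fmt = {1: "b", 2: "h", 4: "i"} if signed else {1: "B", 2: "H", 4: "I"}
    return struct.unpack_from("<" + fmt[elem_size], buf, index * elem_size)[0]

# A response buffer holding four int16 values: [100, 200, 300, 400]
buf = struct.pack("<4h", 100, 200, 300, 400)
print(read_element(buf, 2, 2, signed=True))   # 300: correct offset, 2 * 2 = byte 4
# The gotcha: reading the same buffer as 32-bit merges adjacent elements
print(read_element(buf, 0, 4))                # 100 + 200 * 65536 = 13107300
```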

Practical Integration Pattern: Interval-Based Tag Reading

In production IIoT deployments, not every tag needs to be read at the same rate. A blender's running status might change once per shift, while a motor current needs 1-second resolution. A well-designed gateway implements per-tag interval scheduling:

Tag Configuration:
- blender_status: type=bool, interval=60s, compare=true
- motor_speed: type=float, interval=5s, compare=false
- temperature_sp: type=float, interval=10s, compare=true
- alarm_word: type=uint16, interval=1s, compare=true

The compare flag is crucial for bandwidth optimization. When enabled, the gateway only forwards a value to the cloud if it has changed since the last read. For boolean status tags that might stay constant for hours, this eliminates 99%+ of redundant transmissions.
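The scheduling-plus-compare logic fits in a few lines. A Python sketch, with the configuration shape and read callback invented for illustration rather than taken from any specific gateway API:

```python
class TagScheduler:
    """Per-tag interval scheduling with compare-based change suppression."""

    def __init__(self, tags):
        self.tags = tags  # {name: {"interval": seconds, "compare": bool}}
        self.last_read = {name: float("-inf") for name in tags}
        self.last_value = {}

    def poll(self, now, read_fn):
        """Return {tag: value} for tags that are due, dropping unchanged
        values when the tag's compare flag is set."""
        out = {}
        for name, cfg in self.tags.items():
            if now - self.last_read[name] < cfg["interval"]:
                continue                      # not due yet
            self.last_read[name] = now
            value = read_fn(name)
            if cfg["compare"] and self.last_value.get(name) == value:
                continue                      # unchanged: suppress transmission
            self.last_value[name] = value
            out[name] = value
        return out

sched = TagScheduler({
    "blender_status": {"interval": 60, "compare": True},
    "motor_speed":    {"interval": 5,  "compare": False},
})
plc = {"blender_status": True, "motor_speed": 1250.5}
print(sched.poll(0.0, plc.get))   # first poll: both tags forwarded
print(sched.poll(5.0, plc.get))   # only motor_speed is due again
```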

Dependent Tag Chains

Some tags are only meaningful when a parent tag changes. For example, when a machine_state tag transitions from IDLE to RUNNING, you want to immediately read a cascade of operational tags (speed, temperature, pressure) regardless of their normal intervals.

This pattern — triggered reads on value change — dramatically reduces average bandwidth while ensuring you never miss the data that matters. The gateway maintains a dependency graph where certain tags trigger force-reads of their children.
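A minimal sketch of such a dependency graph in Python (the tag names and graph shape are illustrative):

```python
def triggered_reads(changed_tag, dependency_graph, read_fn):
    """Force-read the children of a tag that just changed,
    regardless of their normal scheduled intervals."""
    children = dependency_graph.get(changed_tag, [])
    return {child: read_fn(child) for child in children}

deps = {"machine_state": ["speed", "temperature", "pressure"]}
plc = {"speed": 1480, "temperature": 72.5, "pressure": 3.2}

# machine_state just transitioned IDLE -> RUNNING: cascade-read its children
print(triggered_reads("machine_state", deps, plc.get))
```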

Handling Connection Failures

EtherNet/IP connections fail. PLCs reboot. Network switches drop packets. A production-grade gateway implements:

  1. Retry with backoff — On read failure (typically error code -32 for connection timeout), retry up to 3 times before declaring the link down.
  2. Link state tracking — Maintain a boolean link state per device. Transition to DOWN on persistent failures; transition to UP on the first successful read. Deliver link state changes immediately (not batched) as they're high-priority events.
  3. Automatic reconnection — On link DOWN, destroy the existing connection context and attempt to re-establish. Don't just retry on the dead socket.
  4. Hourly forced reads — Even when using compare-based transmission, periodically force-read and deliver all tags. This prevents state drift where the gateway and cloud have different views of a value that changed during a brief disconnection.
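Steps 1 through 3 can be sketched together. In this Python model the read callback returns a (status, value) pair with status 0 meaning OK (mirroring libplctag's negative error codes such as -32); both callbacks are stand-ins for real connection code:

```python
MAX_RETRIES = 3

class DeviceLink:
    """Link-state tracking with retry-then-reconnect."""

    def __init__(self):
        self.up = False
        self.events = []   # link transitions, delivered immediately, not batched

    def read(self, read_fn, reconnect_fn):
        for _ in range(MAX_RETRIES):
            status, value = read_fn()
            if status == 0:
                if not self.up:
                    self.up = True
                    self.events.append("UP")   # first success brings the link up
                return value
        # Persistent failure: declare DOWN and rebuild the connection
        # context; never keep retrying on the dead socket.
        if self.up:
            self.up = False
            self.events.append("DOWN")
        reconnect_fn()
        return None

link = DeviceLink()
print(link.read(lambda: (0, 1250.5), lambda: None))   # 1250.5; events -> ["UP"]
```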

Batching for MQTT Delivery

The gateway doesn't forward each tag value individually to the cloud. Instead, it implements a batch-and-forward pattern:

  1. Start a batch group with a timestamp
  2. Accumulate tag values (with ID, status, type, and value data)
  3. Close the group when either:
    • The batch size exceeds the configured maximum (typically 4KB)
    • The collection timeout expires (typically 60 seconds)
  4. Serialize the batch (JSON or binary) and push to an output buffer
  5. The output buffer handles MQTT QoS 1 delivery with page-based flow control

Binary serialization is preferred for bandwidth-constrained cellular connections. A typical binary batch frame:

Header:   0xF7 (command byte)
4 bytes:  number of groups
Per group:
  4 bytes: timestamp
  2 bytes: device type
  4 bytes: serial number
  4 bytes: number of values
  Per value:
    2 bytes: tag ID
    1 byte:  status (0x00 = OK)
    1 byte:  array size
    1 byte:  element size (1, 2, or 4)
    N bytes: packed data (MSB → LSB)

This binary format achieves roughly 3–5x compression over equivalent JSON, which matters when you're paying per-megabyte on cellular or satellite links.
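A Python sketch of a serializer for this frame. The layout only marks the packed value data as MSB-first, so big-endian for the header fields is my assumption:

```python
import struct

def serialize_batch(groups):
    """Pack batch groups into the binary frame layout sketched above.

    Assumes big-endian for all multi-byte header fields, matching the
    MSB-first note on the packed value data.
    """
    frame = bytearray([0xF7])                    # command byte
    frame += struct.pack(">I", len(groups))      # number of groups
    for g in groups:
        frame += struct.pack(">IHI", g["timestamp"], g["device_type"], g["serial"])
        frame += struct.pack(">I", len(g["values"]))
        for v in g["values"]:
            frame += struct.pack(">HBBB", v["tag_id"], v["status"],
                                 v["array_size"], v["elem_size"])
            frame += v["data"]                   # pre-packed bytes, MSB first
    return bytes(frame)

batch = serialize_batch([{
    "timestamp": 1767225600, "device_type": 7, "serial": 0x00A1B2C3,
    "values": [{"tag_id": 12, "status": 0x00, "array_size": 1,
                "elem_size": 4, "data": struct.pack(">f", 1250.5)}],
}])
print(len(batch), "bytes")   # 28 bytes for one group carrying one float
```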

Performance Benchmarks

Based on production deployments with Micro800 controllers:

| Scenario | Tags | Cycle Time | Bandwidth |
|---|---|---|---|
| All explicit, 1s interval | 50 | ~800ms | ~2KB/s JSON |
| All explicit, 5s interval | 100 | ~1200ms | ~1KB/s JSON |
| Mixed interval + compare | 100 | Varies | ~200B/s binary |
| Implicit I/O, 50ms RPI | 20 | 50ms fixed | ~4KB/s |

The "mixed interval + compare" row shows the power of intelligent scheduling — by reading fast-changing tags frequently and slow-changing tags infrequently, and only forwarding values that actually changed, you can monitor 100+ tags with less bandwidth than 20 tags on implicit I/O.

Common Pitfalls

1. Exhausting connection slots. Each Forward Open consumes a connection slot on the PLC. Open too many and you'll get "Connection Refused" errors. Pool your connections and reuse sessions.

2. Mismatched element sizes. If you request elem_size=4 but the tag is actually INT16, you'll read adjacent memory and get corrupted values. Always match element size to the tag's actual data type.

3. Ignoring the simulator trap. When testing with a PLC simulator, random values mask real issues like byte-ordering bugs and timeout handling. Test against real hardware before deploying.

4. Not handling -32 errors. Error code -32 from libplctag means "connection failed." Three consecutive -32s should trigger a full disconnect/reconnect cycle, not just a retry on the same broken connection.

5. Blocking on tag creation. Creating a tag handle (plc_tag_create) can block for the full timeout duration if the PLC is unreachable. Use appropriate timeouts (2000ms is a reasonable default) and handle negative return values.

How machineCDN Handles EtherNet/IP

machineCDN's edge gateway natively supports EtherNet/IP with the patterns described above: per-tag intervals, compare-based change detection, dependent tag chains, binary batch serialization, and store-and-forward buffering. When you connect a Micro800 or CompactLogix controller, the gateway auto-detects the protocol, reads device identity, and begins scheduled tag acquisition — no manual configuration of CIP class/instance/attribute paths required.

The platform handles the complexity of connection management, retry logic, and bandwidth optimization so your engineering team can focus on the data rather than the protocol plumbing.

Conclusion

EtherNet/IP is more than "Modbus over Ethernet." Its CIP object model provides a rich, typed, hierarchical data architecture. Understanding the difference between implicit and explicit messaging — and knowing when to use each — is the difference between a gateway that polls itself to death and one that efficiently scales to hundreds of tags across dozens of controllers.

The key takeaways:

  • Use explicit messaging for configuration reads and tags with intervals > 100ms
  • Use implicit messaging for high-frequency process data
  • Implement per-tag intervals with compare flags to minimize bandwidth
  • Design for failure with retry logic, link state tracking, and periodic forced reads
  • Batch before sending — never forward individual tag values to the cloud

Master these patterns and you'll build IIoT integrations that run reliably for years, not demos that break in production.

Binary vs JSON Payloads for Industrial MQTT Telemetry: Bandwidth, Encoding Strategies, and When Each Wins [2026]

· 14 min read

Every IIoT platform faces the same fundamental design decision for machine telemetry: do you encode data as human-readable JSON, or pack it into a compact binary format?

The answer affects bandwidth consumption, edge buffer capacity, parsing performance, debugging experience, and how well your system degrades under constrained connectivity. Despite what vendor marketing suggests, neither format universally wins. The engineering tradeoffs are real, and the right choice depends on your deployment constraints.

This article breaks down both approaches with the depth that plant engineers and IIoT architects need to make an informed decision.

OPC-UA Pub/Sub Over TSN: Building Deterministic Industrial Networks [2026 Guide]

· 12 min read

OPC-UA Pub/Sub over TSN architecture

The traditional OPC-UA client/server model has served manufacturing well for decades of SCADA modernization. But as factories push toward converged IT/OT networks — where machine telemetry, MES transactions, and enterprise ERP traffic share the same Ethernet fabric — the client/server polling model starts to buckle under latency requirements that demand microsecond-level determinism.

OPC-UA Pub/Sub over TSN solves this by decoupling data producers from consumers entirely, while TSN's IEEE 802.1 extensions guarantee bounded latency delivery. This guide breaks down how these technologies work together, the pitfalls of real-world deployment, and the configuration patterns that actually work on production floors.

Why Client/Server Breaks Down at Scale

In a typical OPC-UA client/server deployment, every consumer opens a session to every producer. A plant with 50 machines and 10 data consumers (HMIs, historians, analytics engines, edge gateways) generates 500 active sessions. Each session carries its own subscription, and the server must serialize, authenticate, and deliver data to each client independently.

The math gets brutal quickly:

  • 50 machines × 200 tags each = 10,000 data points
  • 10 consumers polling at 1-second intervals = 100,000 read operations per second
  • Session overhead: ~2KB per subscription keepalive × 500 sessions = 1MB/s baseline traffic before any actual data moves

In practice, most OPC-UA servers in PLCs hit their connection ceiling around 15-20 simultaneous sessions. Allen-Bradley Micro800 series and Siemens S7-1200 controllers — the workhorses of mid-market automation — will start rejecting connections well before you've connected all your consumers.

Pub/Sub eliminates the N×M session problem by introducing a one-to-many data distribution model where publishers push data to the network without knowing (or caring) who's consuming it.

The Pub/Sub Architecture: How Data Actually Flows

OPC-UA Pub/Sub introduces three key concepts that don't exist in the client/server model:

Publishers and DataSets

A publisher is any device that produces data — typically a PLC, edge gateway, or sensor hub. Instead of waiting for client requests, publishers periodically assemble DataSets — structured collections of tag values with metadata — and push them to the network.

A DataSet maps directly to the OPC-UA information model. If your PLC exposes temperature, pressure, and flow rate variables in an ObjectType node, the corresponding DataSet contains those three fields with their current values, timestamps, and quality codes.

The publisher configuration defines:

  • Which variables to include in each DataSet
  • Publishing interval (how often to push updates, typically 10ms-10s)
  • Transport protocol (UDP multicast for TSN, MQTT for cloud-bound data, AMQP for enterprise messaging)
  • Encoding format (UADP binary for low-latency, JSON for interoperability)

Subscribers and DataSetReaders

Subscribers declare interest in specific DataSets by configuring DataSetReaders that filter incoming network messages. A subscriber doesn't connect to a publisher — it listens on a multicast group or MQTT topic and selectively processes messages that match its reader configuration.

This is the critical architectural shift: publishers and subscribers are completely decoupled. A publisher doesn't know how many subscribers exist. A subscriber can receive data from multiple publishers without establishing any sessions.

WriterGroups and NetworkMessages

Between individual DataSets and the wire, Pub/Sub introduces WriterGroups — logical containers that batch multiple DataSets into a single NetworkMessage for efficient transport. A single NetworkMessage might contain DataSets from four temperature sensors, two pressure transducers, and a motor current monitor — all packed into one UDP frame.

This batching is crucial for TSN. Each WriterGroup maps to a TSN traffic class, and each traffic class gets its own guaranteed bandwidth reservation. By grouping DataSets with similar latency requirements into the same WriterGroup, you minimize the number of TSN stream reservations needed.

TSN: The Network Layer That Makes It Deterministic

Standard Ethernet is "best effort" — frames compete for bandwidth with no delivery guarantees. TSN (IEEE 802.1) adds four capabilities that transform Ethernet into a deterministic transport:

Time Synchronization (IEEE 802.1AS-2020)

Every device on a TSN network synchronizes to a grandmaster clock with sub-microsecond accuracy. This is non-negotiable — without a shared time reference, scheduled transmission is meaningless.

In practice, configure your TSN switches as boundary clocks and your edge gateways as slave clocks. The synchronization protocol (gPTP) runs automatically, but you need to verify accuracy after deployment:

# Check gPTP synchronization status on a Linux-based edge gateway
pmc -u -b 0 'GET CURRENT_DATA_SET'
# Look for: offsetFromMaster < 1000ns (1μs)

If your offset exceeds 1μs consistently, check cable lengths (asymmetric path delay), switch hop count (keep it under 7), and whether any non-TSN switches are breaking the timing chain.

Scheduled Traffic (IEEE 802.1Qbv)

This is the heart of TSN for industrial use. 802.1Qbv implements time-aware shaping — the switch opens and closes transmission "gates" on a strict schedule. During a gate's open window, only frames from that traffic class can transmit. During the closed window, frames are queued.

A typical gate schedule for a manufacturing cell:

| Time Slot | Duration | Traffic Class | Content |
|---|---|---|---|
| 0–250μs | 250μs | TC7 (Scheduled) | Motion control data (servo positions) |
| 250–750μs | 500μs | TC6 (Scheduled) | Process data (temperatures, pressures) |
| 750–5000μs | 4250μs | TC0–5 (Best Effort) | IT traffic, diagnostics, file transfers |

The cycle repeats every 5ms (200Hz), giving motion control data a guaranteed 250μs window every cycle — regardless of how much IT traffic is on the network.
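A quick sanity check that a gate control list tiles the cycle exactly can be expressed in Python. The schedule data below mirrors the example table; pushing it to a switch via its management plane is out of scope for this sketch:

```python
# Gate control list: (window duration in μs, traffic classes whose gates are open).
CYCLE_US = 5000                      # 5 ms cycle = 200 Hz
SCHEDULE = [
    (250,  {7}),                     # TC7: motion control window
    (500,  {6}),                     # TC6: process data window
    (4250, {0, 1, 2, 3, 4, 5}),      # best-effort window for IT traffic
]

def schedule_tiles_cycle(schedule, cycle_us):
    """Verify the gate windows exactly fill the cycle: no gaps, no overrun."""
    return sum(duration for duration, _ in schedule) == cycle_us

print(schedule_tiles_cycle(SCHEDULE, CYCLE_US))   # True: 250 + 500 + 4250 = 5000
```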

Stream Reservation (IEEE 802.1Qcc)

Before a publisher starts transmitting, it reserves bandwidth end-to-end through every switch in the path. The reservation specifies maximum frame size, transmission interval, and latency requirement. Switches that can't honor the reservation reject it — you find out at configuration time, not at 2 AM when the line goes down.

Frame Preemption (IEEE 802.1Qbu)

When a high-priority frame needs to transmit but a low-priority frame is already in flight, preemption splits the low-priority frame, transmits the high-priority data, then resumes the interrupted frame. This reduces worst-case latency from one maximum-frame-time (12μs at 1Gbps for a 1500-byte frame) to near-zero.

Mapping OPC-UA Pub/Sub to TSN Traffic Classes

Here's where theory meets configuration. Each WriterGroup needs a TSN traffic class assignment based on its latency and jitter requirements:

Motion Control Data (TC7, under 1ms cycle)

  • Servo positions, encoder feedback, torque commands
  • Publishing interval: 1-4ms
  • UADP encoding (binary, no JSON overhead)
  • Fixed DataSet layout (no dynamic fields — the subscriber knows the structure at compile time)
  • Configuration tip: Set MaxNetworkMessageSize to fit within one Ethernet frame (1472 bytes for UDP). Fragmentation kills determinism.

Process Data (TC6, 10-100ms cycle)

  • Temperatures, pressures, flow rates, OEE counters
  • Publishing interval: 10-1000ms
  • UADP encoding for edge-to-edge, JSON for cloud-bound paths
  • Variable DataSet layout acceptable (metadata included in messages)

Diagnostic and Configuration (TC0-5, best effort)

  • Alarm states, configuration changes, firmware updates
  • No strict timing requirement
  • JSON encoding fine — human-readable diagnostics matter more than microseconds

Practical Configuration Example

For a plastics injection molding cell with 6 machines, each reporting 30 process variables at 100ms intervals:

# OPC-UA Pub/Sub Publisher Configuration (conceptual)
publisher:
  transport: udp-multicast
  multicast_group: 239.0.1.10
  port: 4840

writer_groups:
  - name: "ProcessData_Cell_A"
    publishing_interval_ms: 100
    tsn_traffic_class: 6
    max_message_size: 1472
    encoding: UADP
    datasets:
      - name: "IMM_01_Process"
        variables:
          - barrel_zone1_temp      # int16, °C × 10
          - barrel_zone2_temp      # int16, °C × 10
          - barrel_zone3_temp      # int16, °C × 10
          - mold_clamp_pressure    # float32, bar
          - injection_pressure     # float32, bar
          - cycle_time_ms          # uint32
          - shot_count             # uint32

  - name: "Alarms_Cell_A"
    publishing_interval_ms: 0      # event-driven
    tsn_traffic_class: 5
    encoding: UADP
    key_frame_count: 1             # every message is a key frame
    datasets:
      - name: "IMM_01_Alarms"
        variables:
          - alarm_word_1           # uint16, bitfield
          - alarm_word_2           # uint16, bitfield

The Data Encoding Decision: UADP vs JSON

OPC-UA Pub/Sub supports two wire formats, and choosing wrong will cost you either bandwidth or interoperability.

UADP (UA Datagram Protocol)

  • Binary encoding, tightly packed
  • A 30-variable DataSet encodes to ~200 bytes
  • Supports delta frames — after an initial key frame sends all values, subsequent frames only include changed values
  • Requires subscribers to know the DataSet layout in advance (discovered via OPC-UA client/server or configured statically)
  • Use for: Edge-to-edge communication, TSN paths, anything latency-sensitive

JSON Encoding

  • Human-readable, self-describing
  • The same 30-variable DataSet expands to ~2KB
  • Every message carries field names and type information
  • No prior configuration needed — subscribers can parse dynamically
  • Use for: Cloud-bound telemetry, debugging, integration with IT systems

The Hybrid Pattern That Works

In practice, most deployments run UADP on the factory-floor TSN network and JSON on the cloud-bound MQTT path. The edge gateway — the device sitting between the OT and IT networks — performs the translation:

  1. Subscribe to UADP multicast on the TSN interface
  2. Decode DataSets using pre-configured metadata
  3. Re-publish as JSON over MQTT to the cloud broker
  4. Add store-and-forward buffering for cloud connectivity gaps

This is exactly the pattern that platforms like machineCDN implement — the edge gateway handles protocol translation transparently so that neither the PLCs nor the cloud backend need to understand each other's wire format.

Security Considerations for Pub/Sub Over TSN

The multicast nature of Pub/Sub changes the security model fundamentally. In client/server OPC-UA, each session is authenticated and encrypted end-to-end with X.509 certificates. In Pub/Sub, there's no session — data flows to anyone on the multicast group.

SecurityMode Options

OPC-UA Pub/Sub defines three security modes per WriterGroup:

  1. None — no encryption, no signing. Acceptable only on physically isolated networks with no IT connectivity.
  2. Sign — messages are signed with the publisher's private key. Subscribers verify authenticity but data is readable by anyone on the network.
  3. SignAndEncrypt — messages are both signed and encrypted. Requires key distribution to all authorized subscribers.

Key Distribution: The Hard Problem

Unlike client/server where keys are exchanged during session establishment, Pub/Sub needs a Security Key Server (SKS) that distributes symmetric keys to publishers and subscribers. The SKS rotates keys periodically (recommended: every 1-24 hours depending on sensitivity).

In practice, deploy the SKS on a hardened server in the DMZ between OT and IT networks. Use OPC-UA client/server (with mutual certificate authentication) for key distribution, and Pub/Sub (with those distributed keys) for data delivery.

Network Segmentation

Even with encrypted Pub/Sub, follow defense-in-depth:

  • Isolate TSN traffic on dedicated VLANs
  • Use managed switches with ACLs to restrict multicast group membership
  • Deploy a data diode or unidirectional gateway between the TSN network and any internet-facing systems

Common Deployment Pitfalls

Pitfall 1: Multicast Flooding

TSN switches handle multicast natively, but if your path crosses a non-TSN switch (even one), multicast frames flood to all ports. This can saturate uplinks and crash unrelated systems. Verify every switch in the path supports IGMP snooping at minimum.

Pitfall 2: Clock Drift Under Load

gPTP synchronization works well at low CPU load, but when an edge gateway is processing 10,000 tags per second, the system clock can drift because gPTP packets get delayed in software queues. Use hardware timestamping (PTP-capable NICs) — software timestamping adds 10-100μs of jitter, which defeats the purpose of TSN.

Pitfall 3: DataSet Version Mismatch

When you add a variable to a publisher's DataSet, all subscribers with static configurations will misparse subsequent messages. UADP includes a DataSetWriterId and ConfigurationVersion — increment the version on every schema change and implement version checking in subscriber code.

Pitfall 4: Oversubscribing TSN Bandwidth

Each TSN stream reservation is guaranteed, but the total bandwidth allocated to scheduled traffic classes can't exceed ~75% of link capacity (the remaining 25% prevents guard-band starvation of best-effort traffic). On a 1Gbps link, that's 750Mbps for all scheduled streams combined. Do the bandwidth math before deployment, not after.
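A back-of-envelope budget check in Python. The 42-byte per-frame overhead figure is a rough assumption covering preamble, Ethernet and VLAN headers, FCS, and interframe gap:

```python
def tsn_budget_ok(streams, link_bps, scheduled_fraction=0.75):
    """Check total reserved bandwidth against the ~75% scheduled-traffic cap.

    Each stream is (frame_payload_bytes, interval_seconds); the 42-byte
    per-frame overhead is a rough, assumed figure.
    """
    OVERHEAD_BYTES = 42
    total_bps = sum(8 * (frame_bytes + OVERHEAD_BYTES) / interval_s
                    for frame_bytes, interval_s in streams)
    return total_bps <= scheduled_fraction * link_bps

# Six machines each publishing one 1472-byte NetworkMessage every 100 ms:
streams = [(1472, 0.100)] * 6
print(tsn_budget_ok(streams, 1_000_000_000))   # True: under 1 Mbps of a 750 Mbps budget
```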

When to Use Pub/Sub vs Client/Server

Pub/Sub over TSN isn't a universal replacement for client/server. Use this decision matrix:

| Scenario | Recommended Model |
|---|---|
| HMI reading 50 tags from one PLC | Client/Server |
| Historian collecting from 100+ PLCs | Pub/Sub |
| Real-time motion control (under 1ms) | Pub/Sub over TSN |
| Configuration and commissioning | Client/Server |
| Cloud telemetry pipeline | Pub/Sub over MQTT |
| 10+ consumers need same data | Pub/Sub |
| Firewall traversal required | Client/Server (reverseConnect) |

The Road Ahead: OPC-UA FX

The OPC Foundation's Field eXchange (FX) initiative extends Pub/Sub with controller-to-controller communication profiles — enabling PLCs from different vendors to exchange data over TSN without custom integration. FX defines standardized connection management, diagnostics, and safety communication profiles.

For manufacturers, FX means the edge gateway that today bridges between incompatible PLCs will eventually become optional for direct PLC-to-PLC communication — while remaining essential for the cloud telemetry path where platforms like machineCDN normalize data across heterogeneous equipment.

Key Takeaways

  1. Pub/Sub eliminates the N×M session problem that limits OPC-UA client/server at scale
  2. TSN provides deterministic delivery with bounded latency guaranteed by the network infrastructure
  3. UADP encoding on TSN, JSON over MQTT is the hybrid pattern that works for most manufacturing deployments
  4. Hardware timestamping is non-negotiable for sub-microsecond synchronization accuracy
  5. Security requires a Key Server — Pub/Sub's multicast model doesn't support session-based authentication
  6. Budget 75% of link capacity for scheduled traffic to prevent guard-band starvation

The convergence of OPC-UA Pub/Sub and TSN represents the most significant shift in industrial networking since the migration from fieldbus to Ethernet. Getting the architecture right at deployment time saves years of retrofitting — and the practical patterns in this guide reflect what actually works on production floors, not just in vendor demo labs.

Device Provisioning and Authentication for Industrial IoT Gateways: SAS Tokens, Certificates, and Auto-Reconnection [2026]

· 13 min read

Every industrial edge gateway faces the same fundamental challenge: prove its identity to a cloud platform, establish a secure connection, and keep that connection alive for months or years — all while running on hardware with limited memory, intermittent connectivity, and no IT staff on-site to rotate credentials.

Getting authentication wrong doesn't just mean lost telemetry. It means a factory floor device that silently stops reporting, burning through its local buffer until data is permanently lost. Or worse — an improperly secured device that becomes an entry point into an OT network.

This guide covers the practical reality of device provisioning, from the first boot through ongoing credential management, with patterns drawn from production deployments across thousands of industrial gateways.

DeviceNet to EtherNet/IP Migration: A Practical Guide for Modernizing Legacy CIP Networks [2026]

· 14 min read

DeviceNet isn't dead — it's still running in thousands of manufacturing plants worldwide. But if you're maintaining a DeviceNet installation in 2026, you're living on borrowed time. Parts are getting harder to find. New devices are EtherNet/IP-only. Your IIoT platform can't natively speak CAN bus. And the engineers who understand DeviceNet's quirks are retiring.

The good news: DeviceNet and EtherNet/IP share the same application layer — the Common Industrial Protocol (CIP). That means migration isn't a complete rearchitecture. It's more like upgrading the transport while keeping the logic intact.

The bad news: the differences between a CAN-based serial bus and modern TCP/IP Ethernet are substantial, and the migration is full of subtle gotchas that can turn a weekend project into a month-long nightmare.

This guide covers what actually changes, what stays the same, and how to execute the migration without shutting down your production line.

Why Migrate Now

The Parts Clock Is Ticking

DeviceNet uses CAN (Controller Area Network) at the physical layer — the same bus technology from automotive. DeviceNet taps, trunk cables, terminators, and CAN-specific interface cards are all becoming specialty items. Allen-Bradley 1756-DNB DeviceNet scanners cost 2-3x what they did five years ago on the secondary market.

EtherNet/IP uses standard Ethernet infrastructure. Cat 5e/6 cable, commodity switches, and off-the-shelf NICs. You can buy replacement parts at any IT supplier.

IIoT Demands Ethernet

Modern IIoT platforms connect to PLCs via EtherNet/IP (CIP explicit messaging), Modbus TCP, or OPC-UA — all Ethernet-based protocols. Connecting to DeviceNet requires a protocol converter or a dedicated scanner module, adding cost and complexity.

When an edge gateway reads tags from an EtherNet/IP-connected PLC, it speaks CIP directly over TCP/IP. The tag path, element count, and data types map cleanly to standard read operations. With DeviceNet, there's an additional translation layer — the gateway must talk to the DeviceNet scanner module, which then mediates communication to the DeviceNet devices.

Eliminating that layer means faster polling, simpler configuration, and fewer failure points.

Bandwidth Limitations

DeviceNet runs at 125, 250, or 500 kbps — kilobits, not megabits. For simple discrete I/O (24 photoelectric sensors and a few solenoid valves), this is fine. But modern manufacturing cells generate far more data:

  • Servo drive diagnostics
  • Process variable trends
  • Vision system results
  • Safety system status words
  • Energy monitoring data

A single EtherNet/IP connection runs at 100 Mbps minimum (1 Gbps typical) — that's 200-8,000x more bandwidth. The difference isn't just theoretical: it means you can read every tag at full speed without bus contention errors.

What Stays the Same: CIP

The Common Industrial Protocol is protocol-agnostic. CIP defines objects (Identity, Connection Manager, Assembly), services (Get Attribute Single, Set Attribute, Forward Open), and data types independently of the transport layer.

This is DeviceNet's salvation — and yours. A CIP Assembly object that maps 32 bytes of I/O data works identically whether the transport is:

  • DeviceNet (CAN frames, MAC IDs, fragmented messaging)
  • EtherNet/IP (TCP/IP encapsulation, IP addresses, implicit I/O connections)
  • ControlNet (scheduled tokens, node addresses)

Your PLC program doesn't care how the Assembly data arrives. The I/O mapping is the same. The tag names are the same. The data types are the same.

Practical Implication

If you're running a Micro850 or CompactLogix PLC with DeviceNet I/O modules, migrating to EtherNet/IP I/O modules means:

  1. PLC logic stays unchanged (mostly — more on this later)
  2. Assembly instances map directly (same input/output sizes)
  3. CIP services work identically (Get Attribute, Set Attribute, explicit messaging)
  4. Data types are preserved (BOOL, INT, DINT, REAL — same encoding)

What changes is the configuration: MAC IDs become IP addresses, DeviceNet scanner modules become EtherNet/IP adapter ports, and CAN trunk cables become Ethernet switches.

What Changes: The Deep Differences

Addressing: MAC IDs vs. IP Addresses

DeviceNet uses 6-bit MAC IDs (0-63) set via physical rotary switches or software. Each device on the bus has a unique MAC ID, and the scanner references devices by this number.

EtherNet/IP uses standard IP addressing. Devices get addresses via DHCP, BOOTP, or static configuration. The scanner references devices by IP address and optionally by hostname.

Migration tip: Create an address mapping spreadsheet before you start:

DeviceNet MAC ID → EtherNet/IP IP Address
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
MAC 01 (Motor Starter #1) → 192.168.1.101
MAC 02 (Photoelectric Bank) → 192.168.1.102
MAC 03 (Valve Manifold) → 192.168.1.103
MAC 10 (VFD Panel A) → 192.168.1.110
MAC 11 (VFD Panel B) → 192.168.1.111

Use the last octet of the IP address to mirror the old MAC ID (offset by 100) where possible. Maintenance technicians who know "MAC 10 is the VFD panel" will intuitively map to 192.168.1.110.
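A few lines of scripting can generate and sanity-check that mapping before commissioning. This is a sketch with a hypothetical device list, following the offset-by-100 convention from the table above:

```python
# Sketch: deriving EtherNet/IP addresses from legacy DeviceNet MAC IDs.
# The device list is hypothetical -- substitute your own audit spreadsheet.
DEVICES = {
    1: "Motor Starter #1",
    2: "Photoelectric Bank",
    3: "Valve Manifold",
    10: "VFD Panel A",
    11: "VFD Panel B",
}

def mac_to_ip(mac_id: int, subnet: str = "192.168.1") -> str:
    """Mirror the MAC ID in the last octet, offset by 100."""
    if not 0 <= mac_id <= 63:
        raise ValueError("DeviceNet MAC IDs are 6-bit (0-63)")
    return f"{subnet}.{100 + mac_id}"

for mac, name in DEVICES.items():
    print(f"MAC {mac:02d} ({name}) -> {mac_to_ip(mac)}")
```

The range check also catches typos early: anything outside 0-63 was never a valid DeviceNet address.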

Communication Model: Polled I/O vs. Implicit Messaging

DeviceNet primarily uses polled I/O or change-of-state messaging. The scanner sends a poll request to each device, and the device responds with its current data. This is sequential — device 1, then device 2, then device 3, and so on.

EtherNet/IP uses implicit (I/O) messaging with Requested Packet Interval (RPI). The scanner opens a CIP connection to each adapter, and data flows at a configured rate (typically 5-100ms) using UDP multicast. All connections run simultaneously — no sequential polling.

DeviceNet (Sequential):
Scanner → Poll MAC01 → Response → Poll MAC02 → Response → ...
Total cycle = Sum of all individual transactions

EtherNet/IP (Parallel):
Scanner ──┬── Connection to 192.168.1.101 (RPI 10ms)
          ├── Connection to 192.168.1.102 (RPI 10ms)
          ├── Connection to 192.168.1.103 (RPI 10ms)
          └── Connection to 192.168.1.110 (RPI 20ms)
Total cycle = Max(individual RPIs) = 20ms

Performance impact: A DeviceNet bus with 20 devices at 500kbps might have a scan cycle of 15-30ms. The same 20 devices on EtherNet/IP can all run at 10ms RPI simultaneously, with room to spare. Your control loop gets faster, not just your bandwidth.

Error Handling: Bus Errors vs. Connection Timeouts

DeviceNet has explicit error modes tied to the CAN bus: bus-off, error passive, CAN frame errors. When a device misses too many polls, it goes into a "timed out" state. The scanner reports which MAC ID failed.

EtherNet/IP uses TCP connection timeouts and UDP heartbeats. If an implicit I/O connection misses 4x its RPI without receiving data, the connection times out. The error reporting is more granular — you can distinguish between "device unreachable" (ARP failure), "connection refused" (CIP rejection), and "data timeout" (UDP loss).

Important: DeviceNet's error behavior is synchronous with the bus scan. When a device fails, you know immediately on the next scan cycle. EtherNet/IP's timeout behavior is asynchronous — a connection can be timing out while others continue normally. Your fault-handling logic may need adjustment to handle this differently.

Wiring and Topology

DeviceNet is a bus topology with a single trunk line. All devices tap into the same cable. Maximum trunk length depends on baud rate:

  • 500 kbps: 100m trunk
  • 250 kbps: 250m trunk
  • 125 kbps: 500m trunk

Drop cables from trunk to device are limited to 6m. Total combined drop length has a bus-wide limit (156m at 125 kbps).

EtherNet/IP is a star topology. Each device connects to a switch port via its own cable (up to 100m per run for copper, kilometers for fiber). No trunk length limits, no drop length limits, no shared-bus contention.

Migration implication: You can't just swap cables. DeviceNet trunk cables are typically 18 AWG with integrated power (24V bus power). Ethernet uses Cat 5e/6 without power. If your DeviceNet devices were bus-powered, you'll need separate 24V power runs to each device location, or use PoE (Power over Ethernet) switches.

Migration Strategy: The Three Approaches

1. Rip and Replace

Replace everything at once during a planned shutdown. Remove all DeviceNet hardware, install EtherNet/IP modules, reconfigure the PLC, test, and restart.

Pros: Clean cutover, no mixed-network complexity. Cons: If anything goes wrong, your entire line is down. Testing time is limited to the shutdown window. Very high risk.

2. Parallel Run with Protocol Bridge

Install a DeviceNet-to-EtherNet/IP protocol bridge (like ProSoft MVI69-DFNT or HMS Anybus). Keep the DeviceNet bus running while you add EtherNet/IP connectivity.

PLC ──── EtherNet/IP ──┬── Protocol Bridge ──── DeviceNet Bus
                       │
                       └── IIoT Edge Gateway
                           (native EtherNet/IP access)

Pros: Zero downtime, gradual migration, IIoT connectivity immediately. Cons: Protocol bridge adds latency (~5-10ms), cost ($500-2000 per bridge), another device to maintain. Assembly mapping through the bridge can be tricky.

3. Rolling Migration

Replace DeviceNet devices one at a time (or one machine cell at a time) with EtherNet/IP equivalents. The PLC runs both a DeviceNet scanner and an EtherNet/IP adapter simultaneously during the transition.

Most modern PLCs (CompactLogix, ControlLogix) support both. The DeviceNet scanner module (1756-DNB or 1769-SDN) stays in the rack alongside the Ethernet port. As devices are migrated, their I/O is remapped from the DeviceNet scanner tree to the EtherNet/IP I/O tree.

Migration sequence per device:

  1. Order the EtherNet/IP equivalent of the DeviceNet device
  2. Pre-configure IP address, Assembly instances, RPI
  3. During a micro-stop (shift change, lunch break):
    • Disconnect DeviceNet device
    • Install EtherNet/IP device + Ethernet cable
    • Remap I/O tags in PLC from DeviceNet scanner to EtherNet/IP adapter
    • Test
  4. Remove old DeviceNet device

Typical timeline: 15-30 minutes per device. A 20-device network can be migrated over 2-3 weeks of micro-stops.

PLC Program Changes

Tag Remapping

The biggest PLC-side change is tag paths. DeviceNet I/O tags reference the scanner module and MAC ID:

DeviceNet:
Local:1:I.Data[0] (Scanner in slot 1, input word 0)

EtherNet/IP I/O tags reference the connection by IP:

EtherNet/IP:
Valve_Manifold:I.Data[0] (Named connection, input word 0)

Best practice: Use aliased tags in your PLC program. If your rungs reference Motor_1_Running (alias of Local:1:I.Data[0].2), you only need to change the alias target — not every rung that uses it. If your rungs directly reference the I/O path... you have more work to do.

RPI Tuning

DeviceNet scan rates are managed at the bus level. EtherNet/IP lets you set RPI per connection. Start with:

  • Discrete I/O (photoelectrics, solenoids): 10-20ms RPI
  • Analog I/O (temperatures, pressures): 50-100ms RPI
  • VFDs (speed/torque data): 20-50ms RPI
  • Safety I/O (CIP Safety): 10-20ms RPI (match safety PFD requirements)

Don't over-poll. Setting everything to 2ms RPI because you can will create unnecessary network load and CPU consumption. Match the RPI to the actual process dynamics.

Connection Limits

DeviceNet scanners support 63 devices (MAC ID limit). EtherNet/IP has no inherent device limit, but each PLC has a connection limit — typically 128-256 CIP connections depending on the controller model.

Each EtherNet/IP I/O device uses at least one connection. Devices with multiple I/O assemblies (e.g., separate safety and standard I/O) use multiple connections. Monitor your controller's connection count during migration.

IIoT Benefits After Migration

Once your devices are on EtherNet/IP, your IIoT edge gateway can access them directly via CIP explicit messaging — no protocol converters needed.

The gateway opens CIP connections to each device, reads tags at configurable intervals, and publishes the data to MQTT or another cloud transport. This is how platforms like machineCDN operate: they speak native EtherNet/IP (and Modbus TCP) to the devices, handling type conversion, batch aggregation, and store-and-forward for cloud delivery.

What this enables:

  • Direct device diagnostics: Read CIP identity objects (vendor ID, product name, firmware version) from every device on the network. No more walking the floor with a DeviceNet configurator.
  • Process data at full speed: Read servo drive status, VFD parameters, and temperature controllers at 1-2 second intervals without bus contention.
  • Predictive maintenance signals: Vibration data, motor current, bearing temperature — all available over EtherNet/IP from modern drives.
  • Remote troubleshooting: An engineer can read device parameters from anywhere on the plant network (or through VPN) without physically connecting to a DeviceNet bus.

Tag Reads After Migration

With EtherNet/IP, the edge gateway connects over CIP explicit messaging (the protocol that libraries such as libplctag call ab-eip) to read tags by name:

Protocol: EtherNet/IP (CIP)
Gateway: 192.168.1.100 (PLC IP)
CPU: Micro850
Tag: barrel_temp_zone_1
Type: REAL (float32)

The gateway reads the tag value, applies type conversion (the PLC stores IEEE 754 floats natively, so no Modbus byte-swapping gymnastics), and delivers it to the cloud. Compared to reading the same value through a DeviceNet scanner's polled I/O words — where you'd need to know which word offset maps to which variable — named tags are dramatically simpler.
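Because the wire format is already IEEE 754, the conversion is a one-liner — a sketch, assuming the 4 data bytes have been extracted from the CIP response:

```python
import struct

# Sketch: the type-conversion step for a CIP REAL. CIP serializes REAL as a
# little-endian IEEE 754 float32, so a single unpack suffices.
def decode_cip_real(payload: bytes) -> float:
    (value,) = struct.unpack("<f", payload)
    return value

raw = struct.pack("<f", 245.5)       # as the 4 data bytes might arrive
print(decode_cip_real(raw))          # 245.5
```

Contrast this with Modbus, where a float may span two registers whose word order varies by vendor.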

Network Design for EtherNet/IP

Switch Selection

Use managed industrial Ethernet switches, not consumer/office switches. Key features:

  • IGMP snooping: EtherNet/IP uses UDP multicast for implicit I/O. Without IGMP snooping, multicast traffic floods every port.
  • QoS/DiffServ: Prioritize CIP I/O traffic (DSCP 47/55) over best-effort traffic.
  • Port mirroring: Essential for troubleshooting with Wireshark.
  • DIN-rail mounting: Because this is going in an industrial panel, not a server room.
  • Extended temperature range: -10°C to 60°C minimum for factory environments.

VLAN Segmentation

Separate your EtherNet/IP I/O traffic from IT/IIoT traffic using VLANs:

VLAN 10: Control I/O (PLC ↔ I/O modules, drives)
VLAN 20: HMI/SCADA (operator stations)
VLAN 30: IIoT/Cloud (edge gateways, MQTT)
VLAN 99: Management (switch configuration)

The edge gateway lives on VLAN 30 with a routed path to VLAN 10 for CIP reads. This ensures IIoT traffic can never interfere with control I/O at the switch level.

Ring Topology for Redundancy

DeviceNet is a bus — one cable break takes down everything downstream. EtherNet/IP with DLR (Device Level Ring) or RSTP (Rapid Spanning Tree) provides sub-second failover. A single cable cut triggers a topology change, and traffic reroutes automatically.

Most Allen-Bradley EtherNet/IP modules support DLR natively. Third-party devices may require an external DLR-capable switch.

Common Migration Mistakes

1. Forgetting Bus Power

DeviceNet provides 24V bus power on the trunk cable. Many DeviceNet devices (especially compact I/O blocks) draw power from the bus and have no separate power terminals. When you remove the DeviceNet trunk, those devices need a dedicated 24V supply.

Check every device's power requirements before migration. This is the most commonly overlooked issue.

2. IP Address Conflicts

DeviceNet MAC IDs are set physically — you can see them. IP addresses are invisible. Two devices with the same IP will cause intermittent communication failures that are incredibly difficult to diagnose.

Reserve a dedicated subnet for EtherNet/IP I/O (e.g., 192.168.1.0/24) and maintain a strict IP allocation spreadsheet. Use DHCP reservations or BOOTP if your devices support it.

3. Not Testing Failover Behavior

DeviceNet and EtherNet/IP handle device failures differently. Your PLC program may assume DeviceNet-style fault behavior (synchronous, bus-wide notification). EtherNet/IP faults are per-connection and asynchronous.

Test every failure mode: device power loss, cable disconnection, switch failure. Verify that your fault-handling rungs respond correctly.

4. Ignoring Firmware Compatibility

EtherNet/IP devices from the same vendor may have different Assembly instance mappings across firmware versions. The device you tested in the lab may behave differently from the one installed on the floor if the firmware versions don't match.

Document firmware versions and maintain spare devices with matching firmware.

Timeline and Budget

For a typical migration of a 20-device DeviceNet network:

| Item | Estimated Cost |
| --- | --- |
| EtherNet/IP equivalent devices (20 units) | $8,000-15,000 |
| Industrial Ethernet switches (2-3 managed) | $1,500-3,000 |
| Cat 6 cabling and patch panels | $500-1,500 |
| Engineering time (40-60 hours) | $4,000-9,000 |
| Commissioning and testing | $2,000-4,000 |
| **Total** | **$16,000-32,500** |

Timeline: 2-4 weeks with rolling migration approach, including engineering prep, device installation, and testing. The line can continue running throughout.

Compare this to the alternative: maintaining a DeviceNet network with $800 replacement scanner modules, 4-week lead times on DeviceNet I/O blocks, and no IIoT connectivity. The migration pays for itself in reduced maintenance costs and operational visibility within 12-18 months.

Conclusion

DeviceNet to EtherNet/IP migration is not a question of if — it's a question of when. The CIP application layer makes it far less painful than migrating between incompatible protocols. Your PLC logic stays intact, your I/O mappings transfer directly, and you gain immediate benefits in bandwidth, diagnostic capability, and IIoT readiness.

Start with a network audit. Map every device, its MAC ID, its I/O configuration, and its power requirements. Then execute a rolling migration — one device at a time, one micro-stop at a time — until the last DeviceNet tap is removed.

Your reward: a modern Ethernet infrastructure that speaks the same CIP language, runs 1,000x faster, and connects directly to every IIoT platform on the market.

MQTT Last Will and Testament for Industrial Device Health Monitoring [2026]

· 12 min read

MQTT Last Will and Testament for Industrial Device Health

In industrial environments, knowing that a device is offline is just as important as knowing what it reports when it's online. A temperature sensor that silently stops publishing doesn't trigger alarms — it creates a blind spot. And in manufacturing, blind spots kill uptime.

MQTT's Last Will and Testament (LWT) mechanism solves this problem at the protocol level. When properly implemented alongside birth certificates, status heartbeats, and connection watchdogs, LWT transforms MQTT from a simple pub/sub pipe into a self-diagnosing industrial nervous system.

This guide covers the practical engineering behind LWT in industrial deployments — not just the theory, but the real-world patterns that survive noisy factory networks.

MQTT QoS Levels for Industrial Telemetry: Choosing the Right Delivery Guarantee [2026]

· 11 min read

When an edge gateway publishes a temperature reading from a plastics extruder running at 230°C, does it matter if that message arrives exactly once, at least once, or possibly not at all? The answer depends on what you're doing with the data — and getting it wrong can mean either lost production insights or a network drowning in redundant traffic.

MQTT's Quality of Service (QoS) levels are one of the most misunderstood aspects of industrial IoT deployments. Most engineers default to QoS 1 for everything, which is rarely optimal. This guide breaks down each level with real industrial scenarios, bandwidth math, and patterns that actually work on factory floors where cellular links drop and PLCs generate thousands of data points per second.

OPC-UA Information Modeling and Subscriptions: A Deep Dive for IIoT Engineers [2026]

· 12 min read

If you've spent time wiring Modbus registers to cloud platforms, you know the pain: flat address spaces, no built-in semantics, and endless spreadsheets mapping register 40004 to "Mold Temperature Zone 2." OPC-UA was designed to solve exactly this problem — but its information modeling layer is far richer (and more complex) than most engineers realize when they first encounter it.

This guide goes deep on how OPC-UA structures industrial data, how subscriptions efficiently deliver changes to clients, and how security policies protect the entire stack. Whether you're evaluating OPC-UA for a greenfield deployment or bridging it into an existing Modbus/EtherNet-IP environment, this is the practical knowledge you need.

EtherNet/IP and CIP: A Practical Guide for Plant Engineers [2026]

· 11 min read

If you've ever connected to an Allen-Bradley Micro800 or CompactLogix PLC, you've used EtherNet/IP — whether you knew it or not. It's one of the most widely deployed industrial Ethernet protocols in North America, and for good reason: it runs on standard Ethernet hardware, supports TCP/IP natively, and handles everything from high-speed I/O updates to configuration and diagnostics over a single cable.

But EtherNet/IP is more than just "Modbus over Ethernet." Its underlying protocol — the Common Industrial Protocol (CIP) — is a sophisticated object-oriented messaging framework that fundamentally changes how edge devices, gateways, and cloud platforms interact with PLCs.

This guide covers what plant engineers and IIoT architects actually need to know.

Event-Driven Tag Delivery in IIoT: Why Polling Everything at Fixed Intervals Is Wasting Your Bandwidth [2026]

· 11 min read

Event-Driven Tag Detection

Most IIoT deployments start the same way: poll every PLC register every second, serialize all values to JSON, and push everything to the cloud over MQTT. It works — until your cellular data bill arrives, or your broker starts choking on 500,000 messages per day from a single gateway, or you realize that 95% of those messages contain values that haven't changed since the last read.

The reality of industrial data is that most values don't change most of the time. A chiller's tank temperature drifts by a fraction of a degree per minute. A blender's motor state is "running" for 8 hours straight. A conveyor's alarm register reads zero all day — until the instant it doesn't, and that instant matters more than the previous 86,400 identical readings.

This guide covers a smarter approach: event-driven tag delivery, where the edge gateway reads at regular intervals but only transmits when something actually changes — and when something does change, it can trigger reads of related tags for complete context.

The Problem with Fixed-Interval Everything

Let's quantify the waste. Consider a typical industrial chiller with 10 compressor circuits, each exposing 16 process tags (temperatures, pressures, flow rates) and 3 alarm registers:

Tags per circuit:  16 process + 3 alarm = 19 tags
Total tags: 10 circuits × 19 = 190 tags
Poll interval: All at 1 second

Encoded as JSON with a timestamp, tag ID, and value, each data point is roughly 50 bytes. Per second, that's:

190 tags × 50 bytes = 9,500 bytes/second
= 570 KB/minute
= 34.2 MB/hour
= 821 MB/day

Over a cellular connection at $5/GB, that's $4.10/day per chiller — just for data that's overwhelmingly identical to what was sent one second ago.

Now let's separate the tags by their actual change frequency:

| Tag Type | Count | Actual Change Frequency | % of Total Data |
| --- | --- | --- | --- |
| Process temperatures | 100 | Every 30-60 seconds | 52.6% |
| Process pressures | 50 | Every 10-30 seconds | 26.3% |
| Flow rates | 10 | Every 5-15 seconds | 5.3% |
| Alarm bits | 30 | ~1-5 times per day | 15.8% |

Those 30 alarm registers — 15.8% of your data volume — change roughly 5 times per day. You're transmitting them 86,400 times. That's a 17,280x overhead on alarm data.

The Three Pillars of Event-Driven Delivery

A well-designed edge gateway implements three complementary strategies:

1. Compare-on-Read (Change Detection)

The simplest optimization: after reading a tag value from the PLC, compare it against the last transmitted value. If it hasn't changed, don't send it.

The implementation is straightforward:

# Pseudocode — NOT from any specific codebase
def should_deliver(tag, new_value, new_status):
    # Always deliver the first reading
    if not tag.has_been_read:
        return True

    # Always deliver on status change (device went offline/online)
    if tag.last_status != new_status:
        return True

    # Compare values if compare flag is enabled
    if tag.compare_enabled:
        if tag.last_value != new_value:
            return True
        return False  # Value unchanged, skip

    # If compare disabled, always deliver
    return True

Which tags should use change detection?

  • Alarm/status registers: Always. These are event-driven by nature — you need the transitions, not the steady state.
  • Digital I/O: Always. Binary values either changed or they didn't.
  • Configuration registers: Always. Software version numbers, setpoints, and device parameters change rarely.
  • Temperatures and pressures: Situational. If the process is stable, most readings are identical. But if you need trending data for analytics, you may want periodic delivery regardless.
  • Counter registers: Never. Counters increment continuously — every reading is "different" — and you need the raw values for accurate rate calculations.

The gotcha with floating-point comparison: Comparing IEEE 754 floats for exact equality is unreliable due to rounding. For float-typed tags, use a deadband:

# Apply deadband for float comparison
def float_changed(old_val, new_val, deadband=0.1):
    return abs(new_val - old_val) > deadband

A temperature deadband of 0.1°F means you'll transmit when the temperature moves meaningfully, but ignore sensor noise.
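Putting the exact-compare and deadband cases together, a per-tag tracker might look like this sketch (illustrative, not from any gateway codebase):

```python
# Sketch: per-tag change detection with a float deadband, combining the two
# snippets above into one stateful tracker (illustrative, not from any codebase).
class TagState:
    def __init__(self, compare_enabled: bool = True, deadband: float = 0.0):
        self.compare_enabled = compare_enabled
        self.deadband = deadband          # 0.0 means exact comparison
        self.last_value = None            # None until the first read

    def should_deliver(self, new_value) -> bool:
        if self.last_value is None:
            changed = True                 # first reading: always deliver
        elif not self.compare_enabled:
            changed = True                 # compare disabled: always deliver
        elif self.deadband > 0:
            changed = abs(new_value - self.last_value) > self.deadband
        else:
            changed = new_value != self.last_value
        if changed:
            self.last_value = new_value    # compare against last *transmitted* value
        return changed

temp = TagState(deadband=0.1)
print(temp.should_deliver(72.30))  # True  (first read)
print(temp.should_deliver(72.35))  # False (within deadband)
print(temp.should_deliver(72.45))  # True  (moved more than 0.1)
```

Note that the cached value updates only on transmission, so small drifts accumulate against the last value the cloud actually saw — which is exactly what the hourly reset (discussed later) is there to correct.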

2. Dependent Tags (Contextual Reads)

Here's where event-driven delivery gets powerful. Consider this scenario:

A chiller's compressor status word is a 16-bit register where each bit represents a different state: running, loaded, alarm, lockout, etc. You poll this register every second with change detection enabled. When bit 7 flips from 0 to 1 (alarm condition), you need more than just the status word — you need the discharge pressure, suction temperature, refrigerant level, and superheat at that exact moment to diagnose the alarm.

The solution: dependent tag chains. When a parent tag's value changes, the gateway immediately triggers a forced read of all dependent tags, delivering the complete snapshot:

Parent Tag:    Compressor Status Word (polled every 1s, compare=true)
Dependent Tags:
├── Discharge Pressure (read only when status changes)
├── Suction Temperature (read only when status changes)
├── Refrigerant Liquid Temp (read only when status changes)
├── Superheat (read only when status changes)
└── Subcool (read only when status changes)

In normal operation, the gateway reads only the status word — one register per second per compressor. When the status word changes, it reads 6 registers total and delivers them as a single timestamped group. The result:

  • Steady state: 1 register/second → 50 bytes/second
  • Event triggered: 6 registers at once → 300 bytes (once, at the moment of change)
  • vs. polling everything: 6 registers/second → 300 bytes/second (continuously)

Bandwidth savings: 99.8% during steady state, with zero data loss at the moment that matters.
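The trigger logic itself is small — a minimal sketch, with `read_register` standing in for the gateway's real PLC read call and hypothetical tag names:

```python
# Sketch: dependent-tag reads triggered by a parent change. read_register is
# a stand-in for the gateway's real PLC read call; tag names are hypothetical.
DEPENDENTS = {
    "compressor_status": [
        "discharge_pressure", "suction_temp",
        "refrigerant_liquid_temp", "superheat", "subcool",
    ],
}

def poll_parent(tag, last, read_register):
    """Read the parent; on change, snapshot all dependents at once."""
    value = read_register(tag)
    if last.get(tag) == value:
        return None                       # steady state: transmit nothing
    last[tag] = value
    snapshot = {tag: value}
    for dep in DEPENDENTS.get(tag, []):   # forced reads for context
        snapshot[dep] = read_register(dep)
    return snapshot

# Simulated register bank standing in for the PLC
regs = {"compressor_status": 0x80, "discharge_pressure": 215,
        "suction_temp": 41, "refrigerant_liquid_temp": 95,
        "superheat": 12, "subcool": 9}
last = {}
print(poll_parent("compressor_status", last, regs.get))  # 6-tag snapshot
print(poll_parent("compressor_status", last, regs.get))  # None (unchanged)
```

Delivering the snapshot as one timestamped group is what makes the alarm diagnosable later: all six values were captured in the same read cycle.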

3. Calculated Tags (Bit-Level Decomposition)

Industrial PLCs often pack multiple boolean signals into a single 16-bit or 32-bit "status word" or "alarm word." Each bit has a specific meaning defined in the PLC program documentation:

Alarm Word (uint16):
Bit 0: High Temperature Alarm
Bit 1: Low Pressure Alarm
Bit 2: Flow Switch Fault
Bit 3: Motor Overload
Bit 4: Sensor Open Circuit
Bit 5: Communication Fault
Bits 6-15: Reserved

A naive approach reads the entire word and sends it to the cloud, leaving the bit-level parsing to the backend. A better approach: the edge gateway decomposes the word into individual boolean tags at read time.

The gateway reads the parent tag (the alarm word), and for each calculated tag, it applies a shift and mask operation to extract the individual bit:

Individual Alarm = (alarm_word >> bit_position) & mask

Each calculated tag gets its own change detection. So when Bit 2 (Flow Switch Fault) transitions from 0 to 1, the gateway transmits only that specific alarm — not the entire word, and not any unchanged bits.

Why this matters at scale: A 10-circuit chiller has 30 alarm registers (3 per circuit), each 16 bits wide. That's 480 individual alarm conditions. Without bit decomposition, a single bit flip in one register transmits all 30 registers (because the polling cycle doesn't know which register changed). With calculated tags, only the one changed boolean is transmitted.
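The decomposition is a few lines of shift-and-mask — this sketch uses the bit layout from the example above:

```python
# Sketch: decomposing a packed alarm word into named boolean tags,
# using the bit layout from the example above.
ALARM_BITS = {
    0: "high_temperature",
    1: "low_pressure",
    2: "flow_switch_fault",
    3: "motor_overload",
    4: "sensor_open_circuit",
    5: "communication_fault",
}

def decompose(alarm_word: int) -> dict:
    """Shift-and-mask each defined bit out of the word."""
    return {name: bool((alarm_word >> bit) & 1)
            for bit, name in ALARM_BITS.items()}

print(decompose(0b000100))  # only flow_switch_fault is True
```

Each resulting boolean then carries its own change-detection state, so a transition on one bit transmits exactly one tag.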

Batching: Grouping Efficiency

Even with change detection, transmitting each changed tag as an individual MQTT message creates excessive overhead. MQTT headers, TLS framing, and TCP acknowledgments add 80-100 bytes of overhead per message. A 50-byte tag value in a 130-byte envelope is 62% overhead.

The solution: time-bounded batching. The gateway accumulates changed tag values into a batch, then transmits the batch when either:

  1. The batch reaches a size threshold (e.g., 4KB of accumulated data)
  2. A time limit expires (e.g., 10-30 seconds since the batch started collecting)
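The two flush conditions can be sketched as follows; the 4 KB and 30-second thresholds mirror the numbers above, and `flush()` stands in for the actual publish step:

```python
# Sketch: time/size-bounded batching. The 4 KB / 30 s thresholds mirror the
# text; flush() stands in for the actual MQTT publish step (hypothetical API).
class Batcher:
    def __init__(self, max_bytes: int = 4096, max_age_s: float = 30.0):
        self.max_bytes = max_bytes
        self.max_age_s = max_age_s
        self.items = []          # accumulated serialized tag values
        self.size = 0
        self.started = None      # timestamp of the first item in this batch

    def add(self, payload: bytes, now: float):
        """Accumulate; return the finished batch when a threshold trips."""
        if self.started is None:
            self.started = now
        self.items.append(payload)
        self.size += len(payload)
        if self.size >= self.max_bytes or now - self.started >= self.max_age_s:
            return self.flush()
        return None              # still collecting

    def flush(self):
        batch, self.items, self.size, self.started = self.items, [], 0, None
        return batch
```

The age timer starts at the first accumulated item, not at the last flush, so a lone changed value never waits longer than the configured limit.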

The batch structure groups values by timestamp:

{
  "groups": [
    {
      "ts": 1709335200,
      "device_type": 1018,
      "serial_number": 2411001,
      "values": [
        {"id": 1, "values": [245]},
        {"id": 6, "values": [187]},
        {"id": 7, "values": [42]}
      ]
    }
  ]
}

Critical exception: alarm tags bypass batching. When a status register changes, you don't want the alarm notification sitting in a batch buffer for 30 seconds. Alarm tags should be marked as do_not_batch — they're serialized and transmitted immediately as individual messages with QoS 1 delivery confirmation.

This creates a two-tier delivery system:

| Data Type | Delivery | Latency | Batching |
| --- | --- | --- | --- |
| Process values | Change-detected, batched | 10-30 seconds | Yes |
| Alarm/status bits | Change-detected, immediate | <1 second | No |
| Periodic values | Time-based, batched | 10-60 seconds | Yes |

Binary vs. JSON: The Encoding Decision

The batch payload format has a surprisingly large impact on bandwidth. Consider a batch with 50 tag values:

JSON format:

{"groups":[{"ts":1709335200,"device_type":1018,"serial_number":2411001,"values":[{"id":1,"values":[245]},{"id":2,"values":[187]},...]}]}

Typical size: 2,500-3,000 bytes for 50 values

Binary format:

Header:        1 byte  (magic byte 0xF7)
Group count:   4 bytes
Per group:
  Timestamp:     4 bytes
  Device type:   2 bytes
  Serial number: 4 bytes
  Value count:   4 bytes
  Per value:
    Tag ID:      2 bytes
    Status:      1 byte
    Value count: 1 byte
    Value size:  1 byte (1=bool/int8, 2=int16, 4=int32/float)
    Values:      1-4 bytes each

Typical size: 400-600 bytes for 50 values

That's a 5-7x reduction — from 3KB to ~500 bytes per batch. Over cellular, this is transformative. A device that transmits 34 MB/day in JSON drops to 5-7 MB/day in binary, before even accounting for change detection.

The trade-off: binary payloads require a schema-aware decoder on the cloud side. Both the gateway and the backend must agree on the encoding format. In practice, most production IIoT platforms use binary encoding for device-to-cloud telemetry and JSON for cloud-to-device commands (where human readability matters and message volume is low).
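As a rough illustration of the size difference, here is an encoder that follows the layout above (little-endian byte order and uniform int16 values are assumptions for this sketch):

```python
import json
import struct

# Sketch: encoding one batch with the binary layout above. Field order and
# widths follow the listing; little-endian byte order and int16 values are
# assumptions for illustration.
def encode_batch(groups) -> bytes:
    out = bytearray([0xF7])                     # magic byte
    out += struct.pack("<I", len(groups))       # group count
    for g in groups:
        out += struct.pack("<IHII", g["ts"], g["device_type"],
                           g["serial_number"], len(g["values"]))
        for v in g["values"]:
            vals = v["values"]
            # tag ID, status, value count, value size (2 = int16)
            out += struct.pack("<HBBB", v["id"], v.get("status", 0),
                               len(vals), 2)
            for x in vals:
                out += struct.pack("<h", x)
    return bytes(out)

group = {"ts": 1709335200, "device_type": 1018, "serial_number": 2411001,
         "values": [{"id": i, "values": [245]} for i in range(1, 51)]}
binary = encode_batch([group])
as_json = json.dumps({"groups": [group]}).encode()
print(len(binary), len(as_json))  # the binary form is several times smaller
```

The exact ratio depends on value widths and JSON formatting, but the structural win is clear: fixed-width fields avoid repeating key names and quoting for every value.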

The Hourly Reset: Catching Drift

One subtle problem with pure change detection: if a value drifts by tiny increments — each below the comparison threshold — the cloud's cached value can slowly diverge from reality. After hours of accumulated micro-drift, the dashboard shows 72.3°F while the actual temperature is 74.1°F.

The solution: periodic forced reads. Every hour (or at another configurable interval), the gateway resets all "read once" flags and forces a complete read of every tag, delivering all current values regardless of change. This acts as a synchronization pulse that corrects any accumulated drift and confirms that all devices are still online.

The hourly reset typically generates one large batch — a snapshot of all 190 tags — adding roughly 10-15KB once per hour. That's negligible compared to the savings from change detection during the other 3,599 seconds.
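One polling pass with this forced-sync behavior can be sketched as below; `read_tag` and `deliver` are placeholders for the gateway's PLC-read and transmit paths, not a real API:

```python
def poll_once(tags, last_sent, read_tag, deliver, force=False):
    """One polling pass over all tags.

    Normally only changed values are delivered (change detection).
    With force=True — the hourly sync — every tag is delivered
    regardless of change, correcting any drift accumulated from
    sub-threshold changes and confirming devices are still online.
    """
    for tag in tags:
        value = read_tag(tag)
        if force or last_sent.get(tag["id"]) != value:
            deliver(tag, value)
            last_sent[tag["id"]] = value  # cloud-side cache mirror
```

The outer loop just calls `poll_once` every second and sets `force=True` whenever an hour has elapsed since the last full sync.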

Quantifying the Savings

Let's revisit our 10-circuit chiller example with event-driven delivery:

Before (fixed interval, everything at 1s):

```text
190 tags × 86,400 seconds × 50 bytes = 821 MB/day
```

After (event-driven with change detection):

```text
Process values: 160 tags × avg 2 changes/min × 1,440 min × 50 bytes = 23 MB/day
Alarm bits:      30 tags × avg 5 changes/day × 50 bytes             = 7.5 KB/day
Hourly resets:  190 tags × 24 resets × 50 bytes                     = 228 KB/day
Overhead (headers, keepalives):                                     ≈ 2 MB/day
────────────────────────────────────────────────────────────────────
Total:                                                              ≈ 25.2 MB/day
```

With binary encoding instead of JSON:

```text
≈ 25.2 MB/day ÷ 5.5 (binary-to-JSON size ratio) ≈ 4.6 MB/day
```

Net reduction: 821 MB → 4.6 MB = 99.4% bandwidth savings.

On a $5/GB cellular plan, that's $4.10/day → $0.02/day per chiller.
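The arithmetic above is easy to sanity-check. The per-message sizes and change rates are the article's working assumptions, not measurements:

```python
MB = 1_000_000

fixed = 190 * 86_400 * 50 / MB       # naive 1 s polling of every tag
process = 160 * 2 * 1_440 * 50 / MB  # ~2 changes/min per process tag
alarms = 30 * 5 * 50 / MB            # ~5 changes/day per alarm bit
resets = 190 * 24 * 50 / MB          # hourly full sync of all tags
overhead = 2.0                       # headers, keepalives (MB/day)

event_driven = process + alarms + resets + overhead
binary = event_driven / 5.5          # binary-to-JSON size ratio

print(f"fixed:        {fixed:.0f} MB/day")
print(f"event-driven: {event_driven:.1f} MB/day")
print(f"binary:       {binary:.1f} MB/day")
print(f"savings:      {1 - binary / fixed:.1%}")
```

Note the alarm and reset traffic is rounding error next to the process values — change detection on the 160 process tags is where nearly all of the reduction comes from.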

Implementation Checklist

If you're building or evaluating an edge gateway for event-driven tag delivery, here's what to look for:

  • Per-tag compare flag — Can you enable/disable change detection per tag?
  • Per-tag polling interval — Can fast-changing and slow-changing tags have different read rates?
  • Dependent tag chains — Can a parent tag's change trigger reads of related tags?
  • Bit-level calculated tags — Can alarm words be decomposed into individual booleans?
  • Bypass batching for alarms — Are alarm tags delivered immediately, bypassing the batch buffer?
  • Binary encoding option — Can the gateway serialize in binary instead of JSON?
  • Periodic forced sync — Does the gateway do hourly (or configurable) full reads?
  • Link state tracking — Is device online/offline status treated as a first-class event?

How machineCDN Handles Event-Driven Delivery

machineCDN's edge gateway implements all of these strategies natively. Every tag in the device configuration carries its own polling interval, change detection flag, and batch/immediate delivery preference. Alarm registers are automatically configured for 1-second polling with change detection and immediate delivery. Process values use configurable intervals with batched transmission. The gateway supports both JSON and compact binary encoding, with automatic store-and-forward buffering that retains data through connectivity outages.

The result: plants running machineCDN gateways over cellular connections typically see 95-99% lower data volumes compared to naive fixed-interval polling — without losing a single alarm event or meaningful process change.


Tired of paying for the same unchanged data point 86,400 times a day? machineCDN delivers only the data that matters — alarms instantly, process values on change, with full periodic sync. See how much bandwidth you can save.