
Protocol Bridging: Translating Modbus to MQTT at the Industrial Edge [2026]


[Figure: Protocol Bridging Architecture]

Every plant floor speaks Modbus. Every cloud platform speaks MQTT. The 20 inches of Ethernet cable between them is where industrial IoT projects succeed or fail.

Protocol bridging — the act of reading data from one industrial protocol and publishing it via another — sounds trivial on paper. Poll a register, format a JSON payload, publish to a topic. Three lines of pseudocode. But the engineers who've actually deployed these bridges at scale know the truth: the hard problems aren't in the translation. They're in the timing, the buffering, the failure modes, and the dozens of edge cases that only surface when a PLC reboots at 2 AM while your MQTT broker is mid-failover.

This guide covers the real engineering of Modbus-to-MQTT bridges — from register-level data mapping to store-and-forward architectures that survive weeks of disconnection.

Why Bridging Is Harder Than It Looks

Modbus and MQTT are fundamentally different communication paradigms. Understanding these differences is critical to building a bridge that doesn't collapse under production conditions.

Modbus is synchronous and polled. The master (your gateway) initiates every transaction. It sends a request frame, waits for a response, processes the data, and moves on. There's no concept of subscriptions, push notifications, or asynchronous updates. If you want a value, you ask for it. Every. Single. Time.

MQTT is asynchronous and event-driven. Publishers send messages whenever they have data. Subscribers receive messages whenever they arrive. The broker decouples producers from consumers. There's no concept of polling — data flows when it's ready.

Bridging these two paradigms means your gateway must act as a Modbus master on one side (issuing timed read requests) and an MQTT client on the other (publishing messages asynchronously). The gateway is the only component that speaks both languages, and it bears the full burden of timing, error handling, and data integrity.

The Timing Mismatch

Modbus RTU on RS-485 at 9600 baud takes roughly 20ms per single-register transaction (request frame + inter-frame delay + response frame + turnaround time). Reading 100 registers individually would take 2 seconds — an eternity if you need sub-second update rates.
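The ~20 ms figure can be sanity-checked with a quick calculation. This sketch assumes 11 bits per character on the wire (start bit, 8 data bits, parity, stop bit) and the typical 8-byte request / 7-byte response frames of a single-register read; exact numbers vary with framing and device turnaround time.

```python
def rtu_transaction_ms(baud: float, request_bytes: int = 8,
                       response_bytes: int = 7) -> float:
    """Estimate one single-register Modbus RTU read transaction, in milliseconds."""
    char_ms = 11 * 1000.0 / baud        # 11 bits per character on the wire (8E1)
    gap_ms = 3.5 * char_ms              # mandatory 3.5-character inter-frame silence
    return (request_bytes + response_bytes) * char_ms + gap_ms

print(round(rtu_transaction_ms(9600), 1))   # about 21 ms, matching the ~20 ms rule of thumb
```

Doubling the baud rate roughly halves the transaction time, which is why 9600-baud legacy wiring dominates the poll-cycle budget.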

Modbus TCP eliminates the serial timing constraints but introduces TCP socket management, connection timeouts, and the possibility of the PLC's TCP stack running out of connections (most PLCs support only 4–8 simultaneous TCP connections).

MQTT, meanwhile, can handle thousands of messages per second. The bottleneck is never the MQTT side — it's always the Modbus side. Your bridge architecture must respect the slower protocol's constraints while maximizing throughput.

Register Mapping: The Foundation

The first engineering decision is how to map Modbus registers to MQTT topics and payloads. There are three common approaches, each with trade-offs.

Approach 1: One Register, One Message

Topic: plant/line3/plc1/holding/40001
Payload: {"value": 1847, "ts": 1709312400, "type": "uint16"}

Pros: Simple, granular, easy to subscribe to individual data points. Cons: Catastrophic at scale. 200 registers means 200 MQTT publishes per poll cycle. At a 1-second poll rate, that's 200 messages/second — sustainable for the broker, but wasteful in bandwidth and processing overhead on constrained gateways.

Approach 2: Batched JSON Messages

Topic: plant/line3/plc1/batch
Payload: {
  "ts": 1709312400,
  "device_type": 1010,
  "tags": [
    {"id": 1, "value": 1847, "type": "uint16"},
    {"id": 2, "value": 23.45, "type": "float"},
    {"id": 3, "value": true, "type": "bool"}
  ]
}

Pros: Drastically fewer MQTT messages. One publish carries an entire poll cycle's worth of data. Cons: JSON encoding adds CPU overhead on embedded gateways. Payload size can grow large if you have hundreds of tags.

Approach 3: Binary-Encoded Batches

Instead of JSON, encode tag values in a compact binary format: a header with timestamp and device metadata, followed by packed tag records (tag ID + status + type + value). A single 16-bit register value takes 2 bytes in binary vs. ~30 bytes in JSON.

Pros: Minimal bandwidth. Critical for cellular-connected gateways where data costs money per megabyte. Cons: Requires matching decoders on the cloud side. Harder to debug.

The right approach depends on your constraints. For Ethernet-connected gateways with ample bandwidth, batched JSON is the sweet spot. For cellular or satellite links, binary encoding can reduce data costs by 10–15x.
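The size difference is easy to demonstrate. In this sketch, the binary record layout (uint16 tag ID, uint8 type code, uint16 value, after an 8-byte timestamp header) is illustrative, not a standard format; a real deployment would define its own schema and matching cloud-side decoder.

```python
import json
import struct

tags = [(i, 1800 + i) for i in range(100)]          # (tag_id, uint16 value)

# Batched JSON: human-readable and self-describing, but verbose.
json_payload = json.dumps({
    "ts": 1709312400,
    "tags": [{"id": t, "value": v, "type": "uint16"} for t, v in tags],
}).encode()

# Binary batch: 8-byte timestamp header + 5 bytes per tag record.
TYPE_UINT16 = 0                                      # hypothetical type code
binary_payload = struct.pack("<Q", 1709312400) + b"".join(
    struct.pack("<HBH", t, TYPE_UINT16, v) for t, v in tags
)

print(len(json_payload), len(binary_payload))        # binary is several times smaller
```

For 100 uint16 tags the binary batch is a fixed 508 bytes, while the JSON equivalent runs to several kilobytes; the gap widens once you add per-tag status and quality fields.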

Contiguous Register Coalescing

The single most impactful optimization in any Modbus-to-MQTT bridge is contiguous register coalescing: instead of reading registers one at a time, group adjacent registers into a single Modbus read request.

Consider a tag list where you need registers at addresses 40100, 40101, 40102, 40103, and 40110. A naive implementation makes 5 read requests. A smart bridge recognizes that 40100–40103 are contiguous and reads them in one Read Holding Registers (function code 03) call with a quantity of 4. That's 2 transactions instead of 5.

The coalescing logic must respect several constraints:

  1. Same function code. You can't coalesce a coil read (FC 01) with a holding register read (FC 03). The bridge must group tags by their Modbus register type — coils (0xxxxx), discrete inputs (1xxxxx), input registers (3xxxxx), and holding registers (4xxxxx) — and coalesce within each group.

  2. Maximum register count per transaction. The Modbus specification limits a single read to 125 registers (for 16-bit registers) or 2000 coils. In practice, keeping blocks under 50 registers reduces the risk of timeout errors on slower PLCs.

  3. Addressing gaps. If registers 40100 and 40150 both need reading, coalescing them into a single 51-register read wastes 49 registers worth of response data. Set a maximum gap threshold (e.g., 10 registers) — if the gap exceeds it, split into separate transactions.

  4. Same polling interval. Tags polled every second shouldn't be grouped with tags polled every 60 seconds. Coalescing must respect per-tag timing configuration.

// Pseudocode: Coalescing algorithm
sort tags by address ascending
group_head = first_tag
group_count = 1
group_registers = first_tag.elem_count

for each subsequent tag:
    if tag.function_code == group_head.function_code
       AND tag.address == group_head.address + group_registers
       AND group_registers + tag.elem_count <= MAX_BLOCK_SIZE
       AND tag.interval == group_head.interval:
        // extend current group
        group_registers += tag.elem_count
        group_count += 1
    else:
        // read current group, start a new one
        read_modbus_block(group_head, group_count, group_registers)
        group_head = tag
        group_count = 1
        group_registers = tag.elem_count

// flush the final group after the loop ends
read_modbus_block(group_head, group_count, group_registers)

In production deployments, contiguous coalescing routinely reduces Modbus transaction counts by 5–10x, which directly translates to faster poll cycles and fresher data.
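A runnable version of the same idea, extended with the gap-threshold rule from constraint 3, might look like this. The `Tag` representation and the `MAX_BLOCK`/`MAX_GAP` limits are assumptions for illustration; a real bridge would load these from its tag configuration.

```python
from dataclasses import dataclass
from itertools import groupby

MAX_BLOCK = 50   # registers per read; conservative for slow PLCs
MAX_GAP = 10     # largest run of unused registers worth reading through

@dataclass(frozen=True)
class Tag:
    function_code: int     # e.g. 1 = coils, 3 = holding registers
    address: int           # starting register address
    elem_count: int = 1    # registers occupied (2 for a 32-bit value)
    interval: float = 1.0  # poll interval in seconds

def coalesce(tags):
    """Group tags into (function_code, start_address, quantity) read blocks."""
    blocks = []
    keyfn = lambda t: (t.function_code, t.interval)
    ordered = sorted(tags, key=lambda t: (t.function_code, t.interval, t.address))
    for (fc, _interval), group in groupby(ordered, key=keyfn):
        start = qty = None
        for tag in group:
            if start is not None:
                gap = tag.address - (start + qty)
                extended = tag.address + tag.elem_count - start
                if gap <= MAX_GAP and extended <= MAX_BLOCK:
                    qty = extended        # extend the current block over the gap
                    continue
                blocks.append((fc, start, qty))
            start, qty = tag.address, tag.elem_count
        blocks.append((fc, start, qty))   # flush the final block per group
    return blocks

tags = [Tag(3, a) for a in (40100, 40101, 40102, 40103, 40110, 40200)]
print(coalesce(tags))   # → [(3, 40100, 11), (3, 40200, 1)]
```

Note how the 10-register gap threshold changes the earlier example: the 6-register hole before 40110 is cheaper to read through than to split, so 40100–40110 becomes one 11-register block, while the distant 40200 gets its own transaction.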

Data Type Handling: Where the Devils Live

Modbus registers are 16-bit words. Everything else — 32-bit integers, IEEE 754 floats, booleans packed into bit fields — is a convention imposed by the PLC programmer. Your bridge must handle all of these correctly.

32-Bit Values Across Two Registers

A 32-bit float or integer spans two consecutive 16-bit Modbus registers. The critical question: which register contains the high word?

There's no standard. Some PLCs use big-endian word order (high word first, often called "ABCD" byte order). Others use little-endian word order (low word first, "CDAB"). Some use mid-endian orders ("BADC" or "DCBA"). You must know your PLC's convention, or your 23.45°C temperature reading becomes 1.7e+38 garbage.

For IEEE 754 floats specifically, the conversion from two 16-bit registers to a float is:

// Big-endian word order (ABCD)
float_value = ieee754_decode(register[n] << 16 | register[n+1])

// Little-endian word order (CDAB)
float_value = ieee754_decode(register[n+1] << 16 | register[n])

Production bridges must support configurable byte/word ordering on a per-tag basis, because it's common to have PLCs from different manufacturers on the same network.
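A minimal word-order-aware decoder, using Python's struct module (the function name and the "ABCD"/"CDAB" string convention are this sketch's own; byte-within-word swapping for "BADC"/"DCBA" would be handled the same way):

```python
import struct

def decode_float(hi: int, lo: int, word_order: str = "ABCD") -> float:
    """Decode two 16-bit Modbus registers into an IEEE 754 float32."""
    if word_order == "CDAB":            # low word arrives first
        hi, lo = lo, hi
    return struct.unpack(">f", struct.pack(">HH", hi, lo))[0]

# 23.45 encodes as 0x41BB999A: high word 0x41BB, low word 0x999A
print(decode_float(0x41BB, 0x999A))              # ABCD order, ≈ 23.45
print(decode_float(0x999A, 0x41BB, "CDAB"))      # same value from swapped words
```

Feeding the CDAB registers through the ABCD path is exactly how the 1.7e+38 garbage readings mentioned above are born, which makes a known-value test against the live PLC essential before commissioning.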

Boolean Extraction From Status Words

PLCs frequently pack multiple boolean states into a single 16-bit register — machine running, alarm active, door open, etc. Extracting individual bits requires configurable shift-and-mask operations:

bit_value = (register_value >> shift_count) & mask

Where shift_count identifies the bit position (0–15) and mask is typically 0x01 for a single bit. The bridge's tag configuration should support this as a first-class feature, not a post-processing hack.
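In code, the shift-and-mask operation is one line; the bit layout in this example (running on bit 0, alarm on bit 1, door on bit 2) is hypothetical and varies per PLC program:

```python
def extract_bit(register_value: int, shift_count: int, mask: int = 0x01) -> int:
    """Pull a packed boolean (or small bit field) out of a 16-bit status word."""
    return (register_value >> shift_count) & mask

status = 0b0000_0000_0000_0101      # example layout: bit 0 running, bit 2 door open
print(extract_bit(status, 0))       # → 1  (machine running)
print(extract_bit(status, 1))       # → 0  (no alarm)
print(extract_bit(status, 2))       # → 1  (door open)
```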

Type Safety Across the Bridge

When values cross from Modbus to MQTT, type information must be preserved. A uint16 register value of 65535 means something very different from a signed int16 value of -1 — even though the raw bits are identical. Your MQTT payload must carry the type alongside the value, whether in JSON field names or binary format headers.
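The uint16/int16 distinction is a two-line reinterpretation, shown here with struct (round-tripping the same 16 raw bits through unsigned and signed views):

```python
import struct

def as_int16(raw: int) -> int:
    """Reinterpret a raw 16-bit register value as a signed int16."""
    return struct.unpack(">h", struct.pack(">H", raw))[0]

print(as_int16(65535))   # → -1, the same bits read as signed
print(as_int16(1847))    # → 1847, unchanged below 0x8000
```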

Connection Resilience: The Store-and-Forward Pattern

The Modbus side of a protocol bridge is local — wired directly to PLCs over Ethernet or RS-485. It rarely fails. The MQTT side connects to a remote broker over a WAN link that will fail. Cellular drops out. VPN tunnels collapse. Cloud brokers restart for maintenance.

A production bridge must implement store-and-forward: continue reading from Modbus during MQTT outages, buffer the data locally, and drain the buffer when connectivity returns.

Page-Based Ring Buffers

The most robust buffering approach for embedded gateways uses a page-based ring buffer in pre-allocated memory:

  1. Format a fixed memory region into equal-sized pages at startup.
  2. Write incoming Modbus data to the current "work page." When a page fills, move it to the "used" queue.
  3. Send pages from the "used" queue to MQTT, one message at a time. Wait for the MQTT publish acknowledgment (at QoS 1) before advancing the read pointer.
  4. Recycle fully-delivered pages back to the "free" list.

If the MQTT connection drops:

  • Stop sending, but keep writing to new pages.
  • If all pages fill up (true buffer overflow), start overwriting the oldest used page. You lose the oldest data, but never the newest.

This design has several properties that matter for industrial deployments:

  • No dynamic memory allocation. The entire buffer is pre-allocated. No malloc, no fragmentation, no out-of-memory crashes at 3 AM.
  • Bounded memory usage. You know exactly how much RAM the buffer consumes. Critical on gateways with 64–256 MB.
  • Delivery guarantees. Each page tracks its own read pointer. If the gateway crashes mid-delivery, the page is re-sent on restart (at-least-once semantics).
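The page lifecycle above (free, work, used, recycle, overwrite-oldest on overflow) can be sketched as follows. This is a simplified model: the class name and API are this sketch's own, and a real embedded implementation would recycle a page only after the QoS 1 PUBACK arrives and would persist pages across restarts.

```python
from collections import deque

class PageRingBuffer:
    """Fixed pool of pre-allocated pages: free -> work -> used -> free."""

    def __init__(self, page_count: int, page_size: int):
        self.page_size = page_size
        self.free = deque(bytearray(page_size) for _ in range(page_count))
        self.used = deque()                 # filled pages awaiting delivery
        self.work = self.free.popleft()     # page currently being written
        self.fill = 0
        self.dropped_pages = 0

    def write(self, record: bytes) -> None:
        if self.fill + len(record) > self.page_size:
            self.used.append((self.work, self.fill))
            if self.free:
                self.work = self.free.popleft()
            else:                           # overflow: overwrite the oldest data
                self.work, _ = self.used.popleft()
                self.dropped_pages += 1
            self.fill = 0
        self.work[self.fill:self.fill + len(record)] = record
        self.fill += len(record)

    def pop_delivered(self):
        """Take the oldest used page for publishing; recycle it afterwards."""
        if not self.used:
            return None
        page, size = self.used.popleft()
        data = bytes(page[:size])
        self.free.append(page)              # real bridge: only after the QoS 1 ack
        return data
```

All pages are allocated once in the constructor, so steady-state operation performs no allocation at all, which is the property that keeps the gateway alive at 3 AM.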

How Long Can You Buffer?

Quick math: A gateway reading 100 tags every 5 seconds generates roughly 2 KB of batched JSON per poll cycle. That's 24 KB/minute, 1.4 MB/hour, 34 MB/day. A 256 MB buffer holds 7+ days of data. In binary format, that extends to 50+ days.

For most industrial applications, 24–48 hours of buffering is sufficient to survive maintenance windows, network outages, and firmware upgrades.
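The sizing math above generalizes to a one-line formula:

```python
def buffer_days(bytes_per_poll: int, poll_interval_s: float,
                buffer_bytes: int) -> float:
    """How many days of Modbus data a local buffer can hold."""
    bytes_per_day = bytes_per_poll / poll_interval_s * 86_400
    return buffer_bytes / bytes_per_day

# 2 KB of batched JSON every 5 seconds into a 256 MB buffer
print(round(buffer_days(2048, 5.0, 256 * 1024**2), 1))   # → 7.6 days
```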

MQTT Connection Management

The MQTT side of the bridge deserves careful engineering. Industrial connections aren't like web applications — they run for months without restart, traverse multiple NATs and firewalls, and must recover automatically from every failure mode.

Async Connection With Threaded Reconnect

Never block the Modbus polling loop waiting for an MQTT connection. The correct architecture uses a separate thread for MQTT connection management:

  1. The main thread polls Modbus on a tight timer and writes data to the buffer.
  2. A connection thread handles MQTT connect/reconnect attempts asynchronously.
  3. The buffer drains automatically when the MQTT connection becomes available.

This separation ensures that a 30-second MQTT connection timeout doesn't stall your 1-second Modbus poll cycle. Data keeps flowing into the buffer regardless of MQTT state.

Reconnect Strategy

Use a fixed reconnect delay (5 seconds works well for most deployments) rather than exponential backoff. Industrial MQTT connections are long-lived — the overhead of a 5-second retry is negligible compared to the cost of missing data during a 60-second exponential backoff.

However, protect against connection storms: if the broker is down for an extended period, ensure reconnect attempts don't overwhelm the gateway's CPU or the broker's TCP listener.
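The fixed-delay policy can be sketched independently of any particular MQTT library. Here `try_connect` and `sleep` are injected so the retry logic itself is testable; with paho-mqtt, the rough equivalent is `client.reconnect_delay_set(min_delay=5, max_delay=5)` together with `loop_start()`, which pins the library's backoff to a constant 5 seconds.

```python
import time

RECONNECT_DELAY_S = 5   # fixed delay; deliberately no exponential backoff

def reconnect_loop(try_connect, is_shutdown, sleep=time.sleep):
    """Retry the MQTT connection at a fixed interval until it succeeds."""
    attempts = 0
    while not is_shutdown():
        attempts += 1
        if try_connect():
            return attempts             # connected; caller resumes draining
        sleep(RECONNECT_DELAY_S)
    return attempts

# Simulated broker that comes back on the third attempt:
outcomes = iter([False, False, True])
delays = []
n = reconnect_loop(lambda: next(outcomes), lambda: False, sleep=delays.append)
print(n, delays)    # → 3 [5, 5]
```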

TLS Certificate Management

Production MQTT bridges almost always use TLS (port 8883 rather than 1883). The bridge must handle:

  • Certificate expiration. Monitor the TLS certificate file's modification timestamp. If the cert file changes on disk, tear down the current MQTT connection and reinitialize with the new certificate. Don't wait for the existing connection to fail — proactively reconnect.
  • SAS token rotation. When using Azure IoT Hub or similar services with time-limited tokens, parse the token's expiration timestamp and reconnect before it expires.
  • CA certificate bundles. Embedded gateways often ship with minimal CA stores. Ensure your IoT hub's root CA is explicitly included in the gateway's certificate chain.
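The mtime-based certificate watch from the first bullet reduces to a small poller. The class name and API are this sketch's own; the connection thread would call `changed()` on each loop iteration and proactively rebuild the TLS context on True.

```python
import os

class CertWatcher:
    """Detect certificate file changes by polling the modification timestamp."""

    def __init__(self, cert_path: str):
        self.cert_path = cert_path
        self.last_mtime = os.stat(cert_path).st_mtime_ns

    def changed(self) -> bool:
        """True exactly once per on-disk update; caller should reconnect MQTT."""
        mtime = os.stat(self.cert_path).st_mtime_ns
        if mtime != self.last_mtime:
            self.last_mtime = mtime
            return True
        return False
```

Polling mtime is deliberately simple: it needs no inotify support, works on every embedded filesystem, and catches the common case of a provisioning agent dropping a renewed cert onto disk.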

Change-of-Value vs. Periodic Reporting

Not all tags need the same reporting strategy. A bridge should support both:

Periodic reporting publishes every tag value at a fixed interval, regardless of whether the value changed. Simple, predictable, but wasteful for slowly-changing values like ambient temperature or firmware version.

Change-of-value (COV) reporting compares each newly read value against the previous value and only publishes when a change is detected. This dramatically reduces MQTT traffic for boolean states (machine on/off), setpoints, and alarm registers that change infrequently.

The implementation stores the last-read value for each tag and performs a comparison before deciding whether to publish:

if tag.compare_enabled:
    if new_value != tag.last_value:
        publish(tag, new_value)
        tag.last_value = new_value
else:
    publish(tag, new_value)    // always publish, every poll cycle

A hybrid approach works best: use COV for digital signals and alarm words, periodic for analog measurements like temperature and pressure. Some tags (critical alarms, safety interlocks) should always be published immediately — bypassing both the normal comparison logic and the batching system — to minimize latency.
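The hybrid behavior, including the always-publish bypass for critical tags, fits in a small gate class (a sketch; names are this example's own):

```python
class CovFilter:
    """Per-tag change-of-value gate with an always-publish bypass."""

    def __init__(self):
        self.last = {}          # tag name -> last published value

    def should_publish(self, tag: str, value, compare_enabled: bool = True,
                       critical: bool = False) -> bool:
        if critical or not compare_enabled:
            return True         # alarms and periodic tags always go out
        if self.last.get(tag) != value:
            self.last[tag] = value
            return True         # value changed since last publish
        return False            # unchanged; suppress the message

cov = CovFilter()
print(cov.should_publish("running", 1))   # → True  (first value is always a change)
print(cov.should_publish("running", 1))   # → False (suppressed, no change)
print(cov.should_publish("running", 0))   # → True  (state transition)
```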

Calculated and Dependent Tags

Real-world PLCs don't always expose data in the format you need. A bridge should support calculated tags — values derived from raw register data through mathematical or bitwise operations.

Common patterns include:

  • Bit extraction from status words. A 16-bit register contains 16 individual boolean states. The bridge extracts each bit as a separate tag using shift-and-mask operations.
  • Scaling and offset. Raw register value 4000 represents 400.0°F when divided by 10. The bridge applies a linear transformation (value × k1 / k2) to produce engineering units.
  • Dependent tag chains. When a parent tag's value changes, the bridge automatically reads and publishes a set of dependent tags. Example: when the "recipe number" register changes, immediately read all recipe parameter registers.

These calculations must happen at the edge, inside the bridge, before data is published to MQTT. Pushing raw register values to the cloud and calculating there wastes bandwidth and adds latency.
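The scaling-and-offset pattern is a linear transform applied per tag at the edge (the k1/k2/offset parameter names follow the value × k1 / k2 convention above; the offset term is the usual extension for sensors with a zero shift):

```python
def scale(raw: int, k1: float = 1.0, k2: float = 1.0, offset: float = 0.0) -> float:
    """Convert a raw register value to engineering units: raw * k1 / k2 + offset."""
    return raw * k1 / k2 + offset

print(scale(4000, k2=10))    # → 400.0 (tenths of a degree F to degrees F)
```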

Link State Telemetry

A bridge should publish its own health status alongside machine data. The most critical metric is link state — whether the gateway can actually communicate with the PLC.

When a Modbus read fails with a connection error (timeout, connection reset, connection refused, or broken pipe), the bridge should:

  1. Set the link state to "down" and publish immediately (not batched).
  2. Close the existing Modbus connection and attempt reconnection.
  3. Continue publishing link-down status at intervals so the cloud system knows the gateway is alive but the PLC is unreachable.
  4. When reconnection succeeds, set link state to "up" and force-read all tags to re-establish baseline values.

This link state telemetry is invaluable for distinguishing between "the machine is off" and "the network cable is unplugged" — two very different problems that look identical without gateway-level diagnostics.

How machineCDN Handles Protocol Bridging

machineCDN's edge gateway was built from the ground up for exactly this problem. The gateway daemon handles Modbus RTU (serial), Modbus TCP, and EtherNet/IP on the device side, and publishes all data over MQTT with TLS to the cloud.

Key architectural decisions in the machineCDN gateway:

  • Pre-allocated page buffer with configurable page sizes for zero-allocation runtime operation.
  • Automatic contiguous register coalescing that respects function code boundaries, tag intervals, and register limits.
  • Per-tag COV comparison with an option to bypass batching for latency-critical values.
  • Calculated tag chains for bit extraction and dependent tag reads.
  • Hourly full refresh — every 60 minutes, the gateway resets all COV baselines and publishes every tag value, ensuring the cloud always has a complete snapshot even if individual change events were missed.
  • Async MQTT reconnection with certificate hot-reloading and SAS token expiration monitoring.

The result is a bridge that reliably moves data from plant-floor PLCs to cloud dashboards with sub-second latency during normal operation and zero data loss during outages lasting hours or days.

Deployment Checklist

Before deploying a Modbus-to-MQTT bridge in production:

  • Map every register — document address, data type, byte order, scaling factor, and engineering units
  • Set appropriate poll intervals — 1s for process-critical, 5–60s for environmental, 300s+ for configuration data
  • Size the buffer — calculate daily data volume and ensure the buffer can hold 24+ hours
  • Test byte ordering — verify float and 32-bit integer decoding against known PLC values before trusting the data
  • Configure COV vs periodic — boolean and alarm tags = COV, analog = periodic
  • Enable TLS — never run MQTT unencrypted on production networks
  • Monitor link state — alert on PLC disconnections, not just missing data
  • Test failover — unplug the WAN cable for 4 hours and verify data drains correctly when it reconnects

Protocol bridging isn't glamorous work. It's plumbing. But it's the plumbing that determines whether your IIoT deployment delivers reliable data or expensive noise. Get the bridge right, and everything downstream — analytics, dashboards, predictive maintenance — just works.