Best Alarm Management Software for Manufacturing in 2026: Reduce Noise, Catch Real Problems

· 9 min read
MachineCDN Team
Industrial IoT Experts

The average manufacturing plant generates thousands of alarms per day. Most operators ignore them. Not because they're lazy — because they've learned from experience that 90% of alarms are noise. Nuisance alarms. Standing alarms. Alarm floods during startup sequences. The sheer volume has trained operators to dismiss alerts that might actually matter.

This is the alarm management crisis in manufacturing, and it kills people, destroys equipment, and costs billions annually. The ISA-18.2 standard for alarm management exists precisely because poor alarm practices have been linked to major industrial incidents worldwide.

The good news: modern IIoT platforms are finally giving manufacturers the tools to rationalize, prioritize, and manage alarms effectively — if you pick the right one.

Augury Pricing in 2026: What Does Augury Actually Cost?

· 7 min read
MachineCDN Team
Industrial IoT Experts

If you've been evaluating vibration monitoring and machine health platforms, Augury's name has probably come up. Their sensor-based approach to predictive maintenance has earned attention from manufacturers across food & beverage, chemicals, and consumer goods.

But when it comes to pricing, Augury follows the same playbook as most enterprise IIoT vendors: no public pricing, mandatory sales calls, and quotes that vary wildly based on how many machines you're monitoring.

Let's break down what Augury actually costs in 2026 — based on publicly available information, industry analyst reports, and what manufacturing engineers report paying.

Binary Payload Encoding for Industrial MQTT: Cutting Bandwidth by 10x on Constrained Networks [2026]

· 13 min read

JSON is killing your cellular data budget.

When your edge gateway publishes a single temperature reading as {"tag_id": 42, "value": 23.45, "type": "float", "status": 0, "ts": 1709312400}, that's 72 bytes of text to convey 11 bytes of actual information: a 2-byte tag ID, a 4-byte float, a 1-byte status code, and a 4-byte timestamp (which is shared across all tags in the same poll cycle anyway).

At 200 tags polled every 5 seconds, JSON payloads consume roughly 100 KB/minute — over 4 GB/month. On a $15/month cellular plan with a 1 GB cap, you've blown your data budget by day 8.

Binary encoding solves this. By designing a compact wire format purpose-built for industrial telemetry, you can reduce per-tag overhead from ~70 bytes to ~7 bytes — a 10x reduction that makes cellular and satellite IIoT deployments economically viable.

This article covers the engineering of binary payload formats for industrial MQTT, from byte-level encoding decisions to the buffering and delivery systems that ensure data integrity.

Why JSON Falls Short for Industrial Telemetry

JSON became the default payload format for MQTT in the IIoT world because it's human-readable, self-describing, and every platform can parse it. These are real advantages during development and debugging. But they come at a cost that compounds brutally at scale.

The Overhead Tax

Let's dissect a typical JSON telemetry message:

{
  "device_type": 1010,
  "serial": 1106550353,
  "ts": 1709312400,
  "tags": [
    {"id": 1, "status": 0, "type": "uint16", "values": [4200]},
    {"id": 2, "status": 0, "type": "float", "values": [23.45]},
    {"id": 3, "status": 0, "type": "bool", "values": [1]}
  ]
}

This payload is approximately 250 bytes. The actual data content:

  • Device type: 2 bytes
  • Serial number: 4 bytes
  • Timestamp: 4 bytes
  • 3 tag values: 2 + 4 + 1 = 7 bytes
  • 3 tag IDs: 6 bytes
  • 3 status codes: 3 bytes

Total useful data: 26 bytes. The other 224 bytes are structural overhead — curly braces, square brackets, quotation marks, colons, commas, key names, and redundant type strings.

That's an overhead ratio of 9.6x. For every byte of machine data, you're transmitting nearly 10 bytes of JSON syntax.
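You can check the overhead ratio yourself. This short Python sketch serializes the example payload above with compact separators (no pretty-printing — the most charitable case for JSON) and compares it against the 26 bytes of useful data:

```python
import json

# The 3-tag payload dissected above
payload = {
    "device_type": 1010,
    "serial": 1106550353,
    "ts": 1709312400,
    "tags": [
        {"id": 1, "status": 0, "type": "uint16", "values": [4200]},
        {"id": 2, "status": 0, "type": "float", "values": [23.45]},
        {"id": 3, "status": 0, "type": "bool", "values": [1]},
    ],
}

encoded = json.dumps(payload, separators=(",", ":")).encode("utf-8")
useful_bytes = 2 + 4 + 4 + 7 + 6 + 3   # the tally from the list above = 26
print(len(encoded), useful_bytes, round(len(encoded) / useful_bytes, 1))
```

Even with every space stripped, the compact form lands near 200 bytes — roughly a 7x overhead before MQTT and TLS framing are added on top.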

CPU Cost on Embedded Gateways

JSON serialization isn't free on embedded hardware. Constructing JSON objects, converting numbers to strings, escaping special characters, and computing string lengths all consume CPU cycles that could be spent polling more tags or running edge analytics.

On an ARM Cortex-A7 gateway (common in industrial routers), JSON serialization of a 200-tag batch takes 2–5ms. The equivalent binary encoding takes 200–500μs — an order of magnitude faster. When you're polling Modbus every second and need to leave CPU headroom for other tasks, this matters.

Designing a Binary Telemetry Format

A practical binary format for industrial MQTT must balance compactness with extensibility. Here's a proven structure used in production industrial gateways.

Message Structure

┌─────────────────────────────────────────┐
│ Header                                  │
│  ├─ Timestamp     (4 bytes, uint32)     │
│  ├─ Device Type   (2 bytes, uint16)     │
│  └─ Serial Number (4 bytes, uint32)     │
├─────────────────────────────────────────┤
│ Tag Group                               │
│  ├─ Tag Count     (2 bytes, uint16)     │
│  ├─ Tag Record 1                        │
│  │   ├─ Tag ID      (2 bytes, uint16)   │
│  │   ├─ Status      (1 byte, uint8)     │
│  │   ├─ Type        (1 byte, uint8)     │
│  │   ├─ Value Count (1 byte, uint8)     │
│  │   └─ Values      (variable)          │
│  ├─ Tag Record 2                        │
│  │   └─ ...                             │
│  └─ Tag Record N                        │
└─────────────────────────────────────────┘

Type Encoding

Use a single byte to encode the value type, which also determines the byte width of each value:

Type Code   Type      Bytes per Value
0x01        bool      1
0x02        int32     4
0x03        uint32    4
0x04        float32   4
0x05        int16     2
0x06        uint16    2
0x07        int8      1
0x08        uint8     1
This type system covers every data type you'll encounter in Modbus and EtherNet/IP PLCs. The decoder uses the type code to determine exactly how many bytes to read for each value — no parsing ambiguity, no delimiter scanning.
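As a concrete sketch of this layout, here's a minimal Python encoder. The `TYPE_FMT` mapping and `encode_batch` helper are illustrative, not a reference implementation; all multi-byte fields are big-endian (network order):

```python
import struct

# Type code -> struct format char, with byte widths matching the table above
TYPE_FMT = {0x01: "B", 0x02: "i", 0x03: "I", 0x04: "f",
            0x05: "h", 0x06: "H", 0x07: "b", 0x08: "B"}

def encode_batch(ts, device_type, serial, tags):
    """tags: list of (tag_id, status, type_code, values) tuples."""
    out = bytearray(struct.pack(">IHI", ts, device_type, serial))  # 10-byte header
    out += struct.pack(">H", len(tags))                            # tag count
    for tag_id, status, code, values in tags:
        out += struct.pack(">HBBB", tag_id, status, code, len(values))
        for v in values:
            out += struct.pack(">" + TYPE_FMT[code], v)
    return bytes(out)

batch = encode_batch(1709312400, 1010, 1106550353, [
    (1, 0, 0x06, [4200]),   # uint16 -> 7-byte record
    (2, 0, 0x04, [23.45]),  # float32 -> 9-byte record
    (3, 0, 0x01, [1]),      # bool -> 6-byte record
])
print(len(batch))  # 34
```

Encoding the 3-tag example with this sketch yields exactly the 34 bytes computed in the size comparison below.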

Size Comparison

For the same 3-tag example above:

Binary encoding:

  • Header: 10 bytes (timestamp + device type + serial)
  • Tag count: 2 bytes
  • Tag 1 (uint16): 2 + 1 + 1 + 1 + 2 = 7 bytes
  • Tag 2 (float32): 2 + 1 + 1 + 1 + 4 = 9 bytes
  • Tag 3 (bool): 2 + 1 + 1 + 1 + 1 = 6 bytes

Total: 34 bytes vs. 250 bytes for JSON. That's a 7.3x reduction.

The savings compound as tag count increases. At 100 tags (a typical mid-size PLC), a JSON batch runs 6–8 KB; the binary equivalent is 700–900 bytes. At 200 tags, JSON hits 12–16 KB while binary stays under 2 KB.

Data Grouping: Batches and Groups

Individual tag values shouldn't be published as individual MQTT messages. The MQTT protocol itself adds overhead: a PUBLISH packet includes a fixed header (2 bytes minimum), topic string (20–50 bytes for a typical industrial topic), and packet identifier (2 bytes for QoS 1). Publishing 200 individual messages means 200× this overhead.

Timestamp-Grouped Batches

The most effective grouping strategy collects all tag values from a single poll cycle into one batch, sharing a single timestamp:

[Batch Start: timestamp=1709312400]
Tag 1: id=1, status=0, type=uint16, value=4200
Tag 2: id=2, status=0, type=float, value=23.45
Tag 3: id=3, status=0, type=bool, value=1
...
[Batch End]

The timestamp in the batch header applies to all contained tags. This eliminates per-tag timestamp overhead — a savings of 4 bytes per tag, or 800 bytes across 200 tags.

Batch Size Limits

MQTT brokers and clients have maximum message size limits. Azure IoT Hub limits messages to 256 KB. AWS IoT Core allows 128 KB. On-premise Mosquitto deployments default to the MQTT protocol maximum (roughly 256 MB) and should be configured lower for production use.

More importantly, your edge gateway's memory and processing constraints impose practical limits. A 4 KB batch size works well for most deployments:

  • Large enough to hold 200+ tags in binary format
  • Small enough to fit in constrained gateway memory
  • Fast enough to serialize without impacting the poll loop

When a batch exceeds the configured size, close it and start a new one. The cloud decoder handles multiple batches with the same timestamp gracefully.

Change-of-Value Filtering Before Batching

Apply change-of-value (COV) filtering before adding values to the batch, not after. If a tag's value hasn't changed since the last report and COV is enabled for that tag, skip it entirely. This reduces batch sizes further during steady-state operation — when 80% of tags are unchanged, your binary batch shrinks proportionally.

However, implement a periodic full-refresh: every hour (or configurable interval), reset all COV baselines and include every tag in the next batch. This ensures the cloud always has a complete snapshot, even if individual change events were lost during a brief disconnection.
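A minimal sketch of this COV-plus-refresh logic (the `CovFilter` class is hypothetical, not a library API):

```python
import time

class CovFilter:
    """Change-of-value filter with a periodic full refresh (sketch)."""
    def __init__(self, refresh_s=3600):
        self.last = {}                                  # tag_id -> last reported value
        self.refresh_s = refresh_s
        self.next_refresh = time.monotonic() + refresh_s

    def filter(self, readings):
        """readings: {tag_id: value}. Returns only the tags worth batching."""
        if time.monotonic() >= self.next_refresh:
            self.last.clear()                           # next batch = full snapshot
            self.next_refresh = time.monotonic() + self.refresh_s
        changed = {t: v for t, v in readings.items() if self.last.get(t) != v}
        self.last.update(changed)
        return changed

f = CovFilter()
print(f.filter({1: 4200, 2: 23.45}))  # {1: 4200, 2: 23.45} -- first cycle, all pass
print(f.filter({1: 4200, 2: 23.5}))   # {2: 23.5} -- only the changed tag
```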

The Page Buffer: Store-and-Forward in Fixed Memory

Binary encoding solves the bandwidth problem. But you still need to handle MQTT disconnections without losing data. The page-based ring buffer is the industrial standard for store-and-forward in embedded systems.

Architecture

Pre-allocate a contiguous memory region at startup and divide it into fixed-size pages:

┌───────────────────────────────────────────────┐
│ Buffer Memory (e.g., 512 KB)                  │
│                                               │
│ ┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐  │
│ │Page 0│ │Page 1│ │Page 2│ │Page 3│ │Page 4│  │
│ └──────┘ └──────┘ └──────┘ └──────┘ └──────┘  │
└───────────────────────────────────────────────┘

Pages cycle through three states:

  1. Free — empty, available for writing
  2. Work — currently being written to by the Modbus polling thread
  3. Used — full, waiting for MQTT delivery

Page Layout

Each page contains multiple messages, packed sequentially:

┌─────────────────────────────────────┐
│ Page Header (struct, ~16 bytes)     │
├─────────────────────────────────────┤
│ Message 1:                          │
│  ├─ Message ID   (4 bytes)          │
│  ├─ Message Size (4 bytes)          │
│  └─ Message Body (variable)         │
├─────────────────────────────────────┤
│ Message 2:                          │
│  ├─ Message ID   (4 bytes)          │
│  ├─ Message Size (4 bytes)          │
│  └─ Message Body (variable)         │
├─────────────────────────────────────┤
│ ... (more messages)                 │
├─────────────────────────────────────┤
│ Free space                          │
└─────────────────────────────────────┘

The 4-byte message ID field is filled by the MQTT library when the message is published (at QoS 1). The gateway uses this ID to match publish acknowledgments to specific messages.

Write Path

  1. Check if the current work page has enough space for the new message (message size + 8 bytes for ID and size fields).
  2. If yes: write the message, advance the write pointer.
  3. If no: move the work page to the "used" queue, grab a free page as the new work page, and write there.
  4. If no free pages exist: grab the oldest used page (overflow condition). Log a warning — you're losing the oldest buffered data, but preserving the newest.

This overflow strategy is deliberately biased toward fresh data. In industrial monitoring, a temperature reading from 5 minutes ago is far more valuable than one from 3 days ago that was buffered during an outage.
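Here's a compact Python sketch of the free/work/used cycle and the overflow rule. The `PageBuffer` class is illustrative only — a production C implementation would use statically pre-allocated memory, a mutex, and no per-call allocation:

```python
from collections import deque

class PageBuffer:
    """Sketch of the free/work/used page cycle (not production code).
    Overflow recycles the OLDEST full page so the freshest data survives."""
    PAGE_SIZE = 4096

    def __init__(self, n_pages=5):
        self.free = deque(bytearray(self.PAGE_SIZE) for _ in range(n_pages))
        self.used = deque()              # (page, fill) pairs awaiting delivery
        self.work = self.free.popleft()
        self.fill = 0
        self.dropped = 0                 # pages sacrificed to overflow

    def write(self, msg):
        need = len(msg) + 8              # + 4-byte message ID + 4-byte size
        if self.fill + need > self.PAGE_SIZE:
            self.used.append((self.work, self.fill))
            if self.free:
                self.work = self.free.popleft()
            else:                        # overflow: log a warning in real code
                self.work, _ = self.used.popleft()
                self.dropped += 1
            self.fill = 0
        record = b"\x00" * 4 + len(msg).to_bytes(4, "big") + msg  # ID set at publish
        self.work[self.fill:self.fill + need] = record
        self.fill += need

buf = PageBuffer(n_pages=2)
for _ in range(9):
    buf.write(b"x" * 1000)               # two pages fill up, ninth write overflows
print(buf.dropped, len(buf.used))        # 1 1
```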

Delivery Path

  1. Take the first page from the "used" queue.
  2. Read the next undelivered message (tracked by a per-page read pointer).
  3. Publish via MQTT at QoS 1.
  4. Wait for PUBACK — don't advance the read pointer until the broker confirms receipt.
  5. On PUBACK: advance the read pointer. If the page is fully delivered, move it back to "free."
  6. On disconnect: stop sending, keep writing. The buffer absorbs the outage.

The wait-for-PUBACK step is critical. Without it, you're fire-and-forgetting into a potentially disconnected socket, and data silently disappears.

Thread Safety

The write path (Modbus polling thread) and delivery path (MQTT thread) operate concurrently on the same buffer. A mutex protects all page state transitions:

  • Moving pages between free/work/used queues
  • Checking available space
  • Advancing read/write pointers
  • Processing delivery acknowledgments

Keep the critical section as small as possible — lock, update pointers, unlock. Never hold the mutex during a Modbus read or MQTT publish; those operations can block for seconds.

Delivery Tracking and Watchdogs

In production, "the MQTT connection is up" doesn't mean data is flowing. The connection can be technically alive (TCP socket open, keepalives passing) while messages silently fail to publish or acknowledge.

Delivery Timestamp Tracking

Track the timestamp of the last successfully delivered message (confirmed by PUBACK). If this timestamp falls more than N minutes behind the current time, something is wrong:

  • The broker may be rejecting messages (payload too large, topic permission denied)
  • The network may be passing keepalives but dropping data packets
  • The MQTT library may be stuck in an internal error state

When the delivery watchdog fires, tear down the entire MQTT connection and reinitialize. It's a heavy-handed recovery, but it's reliable. In industrial systems, a clean restart beats a subtle degradation every time.
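A sketch of that watchdog logic (the class name, threshold, and `pending` flag are hypothetical):

```python
import time

class DeliveryWatchdog:
    """Fires when no PUBACK has arrived for limit_s seconds while data is pending."""
    def __init__(self, limit_s=300):
        self.limit_s = limit_s
        self.last_ack = time.monotonic()

    def on_puback(self):
        self.last_ack = time.monotonic()       # confirmed end-to-end delivery

    def check(self, pending):
        """True -> tear down the MQTT connection and reinitialize."""
        if not pending:                        # nothing queued: can't be stuck
            self.last_ack = time.monotonic()
            return False
        return time.monotonic() - self.last_ack > self.limit_s

wd = DeliveryWatchdog(limit_s=300)
wd.on_puback()
print(wd.check(pending=True))   # False: an ack arrived moments ago
```

Note the `pending` guard: a quiet period with nothing to send must not trip the watchdog, only pending data that isn't getting acknowledged.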

Status Telemetry

The gateway should periodically publish its own status message containing:

  • Daemon uptime — how long since last restart
  • System uptime — how long since last boot
  • Buffer state — pages free/used/work, current fill level
  • PLC link state — is the Modbus connection healthy
  • Firmware version — for remote fleet management
  • Token expiration — time remaining on the MQTT auth token

This status message can use JSON even if data messages use binary — it's infrequent (every 30–60 seconds) and readability matters more than compactness for diagnostics.

Bandwidth Math: Real-World Numbers

Let's calculate the actual savings for a typical deployment:

Scenario: 150 tags, polled every 5 seconds, 50% change rate with COV enabled, cellular connection.

JSON Format

  • Average tag JSON: ~60 bytes
  • Tags per poll (with 50% COV): 75
  • Batch overhead: ~50 bytes
  • Total per poll: 75 × 60 + 50 = 4,550 bytes
  • Per minute (12 polls): 54.6 KB
  • Per day: 78.6 MB
  • Per month: 2.36 GB

Binary Format

  • Average tag binary: ~7 bytes
  • Header per batch: 12 bytes
  • Total per poll: 75 × 7 + 12 = 537 bytes
  • Per minute (12 polls): 6.4 KB
  • Per day: 9.3 MB
  • Per month: 279 MB

Savings: 88% reduction — from 2.36 GB to 279 MB. On a $20/month cellular plan with 500 MB included, JSON doesn't fit. Binary does, with headroom.

Add MQTT overhead (topic strings, packet headers) and TLS overhead (~40 bytes per record), and real-world savings are slightly less dramatic but still consistently in the 8–10x range.

Decoding on the Cloud Side

Binary encoding shifts complexity from the edge to the cloud. The decoder must:

  1. Parse the header to extract timestamp, device type, and serial number.
  2. Iterate tag records using the type code to determine value byte widths.
  3. Reconstruct typed values — particularly IEEE 754 floats from their 4-byte binary representation.
  4. Handle partial messages — if a batch was truncated due to buffer overflow, the decoder must fail gracefully on the last incomplete record without losing the valid records before it.

Most cloud platforms (Azure IoT Hub, AWS IoT Core) support custom message decoders that transform binary payloads to JSON for downstream processing. Write the decoder once, and the rest of your analytics pipeline sees standard JSON.
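A minimal Python decoder for the binary layout described earlier shows steps 1–4. The `TYPE_FMT` table mirrors the encoder side, and the truncation handling is a sketch of "fail gracefully on the last incomplete record":

```python
import struct

# Type code -> struct format char (same table as the edge encoder)
TYPE_FMT = {0x01: "B", 0x02: "i", 0x03: "I", 0x04: "f",
            0x05: "h", 0x06: "H", 0x07: "b", 0x08: "B"}

def decode_batch(buf):
    """Decode header + tag records; stop cleanly at a truncated record."""
    ts, device_type, serial = struct.unpack_from(">IHI", buf, 0)
    (count,) = struct.unpack_from(">H", buf, 10)
    tags, off = [], 12
    for _ in range(count):
        if off + 5 > len(buf):
            break                             # truncated record header
        tag_id, status, code, n = struct.unpack_from(">HBBB", buf, off)
        off += 5
        fmt = ">" + TYPE_FMT[code] * n
        size = struct.calcsize(fmt)
        if off + size > len(buf):
            break                             # truncated values: drop this record
        tags.append({"id": tag_id, "status": status,
                     "values": list(struct.unpack_from(fmt, buf, off))})
        off += size
    return {"ts": ts, "device_type": device_type, "serial": serial, "tags": tags}

sample = (struct.pack(">IHI", 1709312400, 1010, 1106550353)
          + struct.pack(">H", 1)
          + struct.pack(">HBBBH", 1, 0, 0x06, 1, 4200))
print(decode_batch(sample)["tags"])  # [{'id': 1, 'status': 0, 'values': [4200]}]
```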

How machineCDN Implements Binary Telemetry

machineCDN's edge daemon uses binary encoding by default for all data telemetry. The implementation includes:

  • Compact binary batching with shared timestamps per group, reducing per-tag overhead to 5–9 bytes depending on data type.
  • Page-based ring buffer with pre-allocated memory, zero runtime allocation, and deliberate overflow behavior that preserves fresh data.
  • Per-message PUBACK tracking with delivery watchdog and automatic connection recycling.
  • Parallel JSON status messages for gateway diagnostics, published on a separate topic at lower frequency.
  • Automatic format negotiation — the cloud ingestion layer detects binary vs. JSON based on the first byte of the payload and routes to the appropriate decoder.

The result: machineCDN gateways routinely operate on 500 MB/month cellular plans, monitoring 200+ tags at 5-second intervals, with full store-and-forward resilience during connectivity outages.

When to Use Binary vs. JSON

Binary encoding isn't always the right choice. Use this decision framework:

Criterion       Use Binary                     Use JSON
Network         Cellular, satellite, metered   Ethernet, WiFi, unmetered
Tag count       > 50                           < 20
Poll interval   < 10 seconds                   > 60 seconds
Gateway CPU     Constrained (< 500 MHz)        Capable (> 1 GHz)
Debug needs     Production, stable             Development, changing
Downstream      Custom decoder available       Generic tooling needed

For most production industrial deployments — where gateways connect hundreds of tags over cellular and reliability trumps developer convenience — binary encoding is the clear winner. Save JSON for your status messages and the debugging serial port.

Getting Started

If you're designing a binary telemetry format for your own gateway:

  1. Start with the type system. Define your type codes and byte widths. Match them to your PLC's native data types.
  2. Design the header. Include version, device identity, and a shared timestamp. Add a format version byte so you can evolve the format without breaking old decoders.
  3. Build the buffer first. Get store-and-forward working before optimizing the encoding. Data integrity matters more than data compactness.
  4. Write the decoder alongside the encoder. Test with known values. Verify float encoding especially — IEEE 754 byte ordering bugs are silent and devastating.
  5. Measure real bandwidth. Deploy both JSON and binary formats on the same gateway for a week and compare actual data consumption. The numbers will sell the approach to stakeholders who question the added complexity.

Binary encoding is a solved problem in industrial telemetry. The patterns are well-established, the savings are dramatic, and the complexity cost is paid once at design time and amortized across every byte your fleet ever transmits.

Binary Telemetry Encoding for IIoT: Why JSON Is Killing Your Bandwidth [2026]

· 11 min read

If you're sending PLC tag values as JSON from edge gateways to the cloud, you're wasting 80–90% of your bandwidth. On a cellular-connected factory floor with dozens of machines, that's the difference between a $50/month data plan and a $500/month one — and the difference between sub-second telemetry and multi-second lag.

This guide breaks down binary telemetry encoding: how to pack industrial data efficiently at the edge, preserve type fidelity across the wire, and design batch grouping strategies that survive unreliable networks.

Condition-Based Monitoring vs Predictive Maintenance: What's the Difference and Which Do You Need?

· 10 min read
MachineCDN Team
Industrial IoT Experts

The terms "condition-based monitoring" (CBM) and "predictive maintenance" (PdM) get thrown around interchangeably in the IIoT world, and that confusion costs manufacturers real money. They're related — PdM is essentially the evolution of CBM — but they're not the same thing, and understanding the difference changes how you implement your maintenance strategy.

Data Normalization for Industrial IoT: Handling Register Formats, Byte Ordering, and Scaling Factors Across PLCs [2026]

· 14 min read

Here's a truth every IIoT engineer discovers the hard way: the hardest part of connecting industrial equipment to the cloud isn't the networking, the security, or the cloud architecture. It's getting a raw register value of 0x4248 from a PLC and knowing whether that means 50.0°C, 16,968 PSI, or the hex representation of half a 32-bit float that needs its companion register before it means anything at all.

Data normalization — the process of transforming raw PLC register values into meaningful engineering units — is the unglamorous foundation that every reliable IIoT system is built on. Get it wrong, and your dashboards display nonsense. Get it subtly wrong, and your analytics quietly produce misleading results for months before anyone notices.

This guide covers the real-world data normalization challenges you'll face when integrating PLCs from different manufacturers, and the patterns that actually work in production.

The Fundamental Problem: Registers Don't Know What They Contain

Industrial protocols like Modbus define a simple data model: 16-bit registers. That's it. A Modbus holding register at address 40001 contains a 16-bit unsigned integer (0–65535). The protocol has no concept of:

  • Whether that value represents temperature, pressure, flow rate, or a status code
  • What engineering units it's in
  • Whether it needs to be scaled (divided by 10? by 100?)
  • Whether it's part of a multi-register value (32-bit integer, IEEE 754 float)
  • What byte order the multi-register value uses

This information lives in manufacturer documentation — usually a PDF that's three firmware versions behind, written by someone who assumed you'd use their proprietary software, and references register addresses using a different numbering convention than your gateway.

Even within a single plant, you'll encounter:

  • Chiller controllers using input registers (function code 4, 30001+ addressing)
  • Temperature controllers using holding registers (function code 3, 40001+ addressing)
  • Older devices using coils (function code 1) for status bits
  • Mixed addressing conventions (some manufacturers start at 0, others at 1)

Modbus Register Types and Function Code Mapping

The first normalization challenge is mapping register addresses to the correct Modbus function code. The traditional Modbus addressing convention uses a 6-digit numbering scheme:

Address Range    Register Type      Function Code                     Access
000001–065536    Coils              FC 01 (read) / FC 05 (write)      Read/Write
100001–165536    Discrete Inputs    FC 02                             Read Only
300001–365536    Input Registers    FC 04                             Read Only
400001–465536    Holding Registers  FC 03 (read) / FC 06/16 (write)   Read/Write

In practice, the high-digit prefix determines the function code, and the remaining digits (after subtracting the prefix) determine the actual register address sent in the Modbus PDU:

Address 300201 → Function Code 4, Register Address 201
Address 400006 → Function Code 3, Register Address 6
Address 5 → Function Code 1, Coil Address 5

Common pitfall: Some device manufacturers use "register address" to mean the PDU address (0-based), while others use the traditional Modbus numbering (1-based). Register 40001 in the documentation might mean PDU address 0 or PDU address 1 depending on the manufacturer. Always verify with a Modbus scanner tool before building your configuration.
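A small helper makes the prefix-splitting rule concrete. This hypothetical `split_address` function returns the traditional register number exactly as in the examples above — whether you subtract one more for the 0-based on-wire PDU address depends on the device's convention, per the pitfall just described:

```python
def split_address(address):
    """Map a traditional Modbus address to (read function code, register number)
    by stripping the range prefix. Subtracting one more for the 0-based on-wire
    PDU address is device-dependent (see the 0- vs 1-based pitfall)."""
    if 400001 <= address <= 465536:
        return 3, address - 400000    # holding registers
    if 300001 <= address <= 365536:
        return 4, address - 300000    # input registers
    if 100001 <= address <= 165536:
        return 2, address - 100000    # discrete inputs
    if 1 <= address <= 65536:
        return 1, address             # coils
    raise ValueError(f"unrecognized Modbus address: {address}")

print(split_address(300201))  # (4, 201)
print(split_address(400006))  # (3, 6)
```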

The Byte Ordering Nightmare

A 16-bit Modbus register stores two bytes. That's unambiguous — the protocol spec defines big-endian (most significant byte first) for individual registers. The problem starts when you need values larger than 16 bits.

32-Bit Integers from Two Registers

A 32-bit value requires two consecutive 16-bit registers. The question is: which register holds the high word?

Consider a 32-bit value of 0x12345678:

Word order Big-Endian (most common):

Register N:   0x1234 (high word)
Register N+1: 0x5678 (low word)
Result: (0x1234 << 16) | 0x5678 = 0x12345678 ✓

Word order Little-Endian:

Register N:   0x5678 (low word)
Register N+1: 0x1234 (high word)
Result: (Register[N+1] << 16) | Register[N] = 0x12345678 ✓

Both are common in practice. When building an edge data collection system, you need to support at least these two variants per device configuration.

IEEE 754 Floating-Point: Where It Gets Ugly

32-bit IEEE 754 floats span two Modbus registers, and the byte ordering permutations multiply. There are four real-world variants:

1. ABCD (Big-Endian / Network Order)

Register N:   0x4248  (bytes A,B)
Register N+1: 0x0000 (bytes C,D)
IEEE 754: 0x42480000 = 50.0

Used by: Most European manufacturers, Honeywell, ABB, many process instruments

2. DCBA (Little-Endian / Byte-Swapped)

Register N:   0x0000  (bytes D,C)
Register N+1: 0x4842 (bytes B,A)
IEEE 754: 0x42480000 = 50.0

Used by: Some legacy Allen-Bradley controllers, older Omron devices

3. BADC (Mid-Big-Endian / Word-Swapped)

Register N:   0x4842  (bytes B,A)
Register N+1: 0x0000 (bytes D,C)
IEEE 754: 0x42480000 = 50.0

Used by: Schneider Electric, Daniel/Emerson flow meters, some Siemens devices

4. CDAB (Mid-Little-Endian)

Register N:   0x0000  (bytes C,D)
Register N+1: 0x4248 (bytes A,B)
IEEE 754: 0x42480000 = 50.0

Used by: Various Asian manufacturers, some OEM controllers

Here's the critical lesson: The libmodbus library (used by many edge gateways and IIoT platforms) provides a modbus_get_float() function that assumes BADC word order — which is not the most common convention. If you use the standard library function on a device that transmits ABCD, you'll get garbage values that are still valid IEEE 754 floats, meaning they won't trigger obvious error conditions. Your dashboard will show readings like 3.14 × 10⁻²⁷ instead of 50.0°C, and if nobody's watching closely, this goes undetected.

Always verify byte ordering with a known test value. Read a temperature sensor that's showing 25°C on its local display, decode the registers with all four byte orderings, and see which one gives you 25.0.
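This verification is easy to script. Here's a sketch that decodes the same register pair under all four orderings (the `decode_float32` helper is hypothetical; `order` names the byte sequence as it arrives on the wire):

```python
import struct

def decode_float32(reg_n, reg_n1, order):
    """Decode registers N and N+1 as IEEE 754 under one of the four orderings."""
    raw = struct.pack(">HH", reg_n, reg_n1)           # bytes exactly as received
    reorder = {
        "ABCD": raw,                                      # already network order
        "DCBA": raw[::-1],                                # full byte swap
        "BADC": bytes([raw[1], raw[0], raw[3], raw[2]]),  # swap within each word
        "CDAB": raw[2:] + raw[:2],                        # swap the two words
    }
    return struct.unpack(">f", reorder[order])[0]

# The same 50.0 arrives four different ways depending on the device:
print(decode_float32(0x4248, 0x0000, "ABCD"))  # 50.0
print(decode_float32(0x0000, 0x4842, "DCBA"))  # 50.0
print(decode_float32(0x4842, 0x0000, "BADC"))  # 50.0
print(decode_float32(0x0000, 0x4248, "CDAB"))  # 50.0
```

Run your known test value through all four and keep the ordering that reproduces the local display.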

Generic Float Decoding Pattern

A robust normalization engine should accept a byte-order parameter per tag:

# Device configuration example
tags:
  - name: "Tank Temperature"
    register: 300001
    type: float32
    byte_order: ABCD      # Big-endian (verify with test read!)
    unit: "°C"
    registers_count: 2

  - name: "Flow Rate"
    register: 300003
    type: float32
    byte_order: BADC      # Schneider-style mid-big-endian
    unit: "L/min"
    registers_count: 2

Integer Scaling: The Hidden Conversion

Many PLCs transmit fractional values as scaled integers because integer math is faster and simpler to implement on microcontrollers. Common patterns:

Divide-by-10 Temperature

Register value: 234
Actual temperature: 23.4°C
Scale factor: 0.1

Divide-by-100 Pressure

Register value: 14696
Actual pressure: 146.96 PSI
Scale factor: 0.01

Offset + Scale

Some devices use a linear transformation: engineering_value = (raw * k1) + k2

Register value: 4000
k1 (gain): 0.025
k2 (offset): -50.0
Temperature: (4000 × 0.025) + (-50.0) = 50.0°C

This pattern is common in 4–20 mA analog input modules where the 16-bit ADC value (0–65535) maps to an engineering range:

0     = 4.00 mA  = Range minimum (e.g., 0°C)
65535 = 20.00 mA = Range maximum (e.g., 200°C)

Scale: 200.0 / 65535 = 0.00305
Offset: 0.0

For raw value 32768: 32768 × 0.00305 + 0 ≈ 100.0°C

The trap: Some devices use signed 16-bit integers (int16, range -32768 to +32767) to represent negative values (e.g., freezer temperatures). If your normalization engine treats everything as uint16, negative temperatures will appear as large positive numbers (~65,000+). Always verify whether a register is signed or unsigned.
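A normalization helper that applies the linear transform and handles the signed/unsigned trap might look like this (hypothetical sketch):

```python
def normalize(raw, k1=1.0, k2=0.0, signed=False):
    """engineering_value = (raw * k1) + k2, optionally reinterpreting the
    16-bit register's bit pattern as a signed int16 first."""
    if signed and raw > 0x7FFF:
        raw -= 0x10000                # uint16 bit pattern -> int16
    return raw * k1 + k2

print(round(normalize(234, k1=0.1), 2))                  # 23.4  divide-by-10 temp
print(round(normalize(4000, k1=0.025, k2=-50.0), 2))     # 50.0  offset + scale
print(round(normalize(0xFF38, k1=0.1, signed=True), 2))  # -20.0 freezer temp
```

Note the last example: treated as unsigned, raw 0xFF38 would read as roughly 6,534 instead of -20.0 — exactly the large-positive-number symptom described above.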

Bit Extraction from Packed Status Words

Industrial controllers frequently pack multiple boolean status values into a single register. A single 16-bit holding register might contain:

Bit 0: Compressor Running
Bit 1: High Pressure Alarm
Bit 2: Low Pressure Alarm
Bit 3: Pump Running
Bit 4: Defrost Active
Bits 5-7: Operating Mode (3-bit enum)
Bits 8-15: Error Code

To extract individual boolean values from a packed word:

value = (register_value >> shift_count) & mask

For single bits, the mask is 1:

compressor_running = (register >> 0) & 0x01
high_pressure_alarm = (register >> 1) & 0x01

For multi-bit fields:

operating_mode = (register >> 5) & 0x07  // 3-bit mask
error_code = (register >> 8) & 0xFF // 8-bit mask

Why this matters for IIoT: Each extracted bit often needs to be published as an independent data point for alarming, trending, and analytics. A robust data pipeline defines "calculated tags" that derive from a parent register — when the parent register is read, the derived boolean tags are automatically extracted and published.

This approach is more efficient than reading each coil individually. Reading one holding register and extracting 16 bits is one Modbus transaction. Reading 16 individual coils is 16 transactions (or at best, one FC01 read for 16 coils — but many implementations don't optimize this).

Contiguous Register Coalescence

When reading multiple tags from a Modbus device, transaction overhead dominates performance. Each Modbus TCP request carries:

  • TCP/IP overhead: ~54 bytes (headers)
  • Modbus MBAP header: 7 bytes
  • Function code + address: 5 bytes
  • Response overhead: Similar

For a single register read, you're spending ~120 bytes of framing to retrieve 2 bytes of data. This is wildly inefficient.

The optimization: Coalesce reads of contiguous registers into a single transaction. If you need registers 300001 through 300050, issue one Read Input Registers command for 50 registers instead of 50 individual reads.

The coalescence conditions are:

  1. Same function code (can't mix holding and input registers)
  2. Contiguous addresses (no gaps)
  3. Same polling interval (don't slow down a fast-poll tag to batch it with a slow-poll tag)
  4. Within protocol limits (Modbus allows up to 125 registers per read for FC03/FC04)

In practice, the maximum PDU payload is 250 bytes (125 × 16-bit registers), so batches should be capped at ~50 registers to keep response sizes reasonable and avoid fragmenting the IP packet.

Practical batch sizing:

Maximum safe batch: 50 registers
Typical latency per batch: 2-5 ms (Modbus TCP, local network)
Inter-request delay: ~50 ms (prevent bus saturation on Modbus RTU)

When a gap appears in the register map (e.g., you need registers 1-10 and 20-30), you have two choices:

  1. Two separate reads: 10 registers + 10 registers = 2 transactions
  2. One read with gap: 30 registers = 1 transaction (reading 9 registers you don't need)

For gaps of 10 registers or less, reading the gap is usually more efficient than the overhead of a second transaction. For larger gaps, split the reads.
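The gap rule and batch cap can be sketched as a small grouping function (hypothetical; assumes the input addresses already share a function code and poll interval):

```python
def coalesce(addresses, max_batch=50, max_gap=10):
    """Group register addresses into (start, count) read requests.
    Bridges gaps up to max_gap registers; caps each request at max_batch."""
    reqs = []
    for addr in sorted(set(addresses)):
        if reqs:
            start, count = reqs[-1]
            gap = addr - (start + count - 1) - 1   # unneeded registers in between
            if gap <= max_gap and (addr - start + 1) <= max_batch:
                reqs[-1] = (start, addr - start + 1)
                continue
        reqs.append((addr, 1))
    return reqs

print(coalesce([1, 2, 3, 10, 11, 30, 31]))  # [(1, 11), (30, 2)]
```

The small gap between 3 and 10 is bridged into one read; the 18-register gap before 30 starts a new request.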

Change Detection and Report-by-Exception

Not every data point changes every poll cycle. A temperature sensor might hold steady at 23.4°C for hours. Publishing identical values every second wastes bandwidth, storage, and processing.

Report-by-exception (RBE) compares each new reading against the last published value:

if new_value != last_published_value:
    publish(new_value)
    last_published_value = new_value

For integer types, exact comparison works. For floating-point values, use a deadband:

if abs(new_value - last_published_value) > deadband:
    publish(new_value)
    last_published_value = new_value

Important: Even with RBE, periodically force-publish all values (e.g., every hour) to ensure the IIoT platform has fresh data. Some edge cases can cause stale values:

  • A sensor drifts back to exactly the last published value after changing
  • Network outage causes missed change events
  • Cloud-side data expires or is purged

A well-designed data pipeline resets its "last read" state on an hourly boundary, forcing a full publish of all tags regardless of whether they've changed.
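Putting deadband RBE and the forced hourly refresh together (hypothetical sketch; one tag per filter instance, `now` passed in for testability):

```python
class RbeFilter:
    """Report-by-exception for one tag: deadband plus forced refresh."""
    def __init__(self, deadband=0.0, refresh_s=3600):
        self.deadband = deadband
        self.refresh_s = refresh_s
        self.last_value = None
        self.last_forced = 0.0

    def should_publish(self, value, now):
        if self.last_value is None or now - self.last_forced >= self.refresh_s:
            self.last_forced = now           # full-snapshot boundary: always publish
        elif abs(value - self.last_value) <= self.deadband:
            return False                     # inside deadband: suppress
        self.last_value = value
        return True

f = RbeFilter(deadband=0.5)
print(f.should_publish(23.4, now=0))     # True  (first reading)
print(f.should_publish(23.6, now=5))     # False (within deadband)
print(f.should_publish(24.1, now=10))    # True  (moved 0.7)
print(f.should_publish(24.2, now=3700))  # True  (forced hourly refresh)
```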

Multi-Protocol Device Detection

In brownfield plants, you often encounter devices that support multiple protocols. The same PLC might respond to both EtherNet/IP (Allen-Bradley AB-EIP) and Modbus TCP on port 502. Your edge gateway needs to determine which protocol the device actually speaks.

A practical detection sequence:

  1. Try EtherNet/IP first: Attempt to read a known tag (like a device type identifier) using the CIP protocol. If successful, you know the device speaks EtherNet/IP and can use tag-based addressing.

  2. Fall back to Modbus TCP: If EtherNet/IP fails (connection refused or timeout), try a Modbus TCP connection on port 502. Read a known device-type register to identify the equipment.

  3. Device-specific addressing: Once the device type is identified, load the correct register map, byte ordering, and scaling configuration for that specific model.

This multi-protocol detection pattern is how platforms like machineCDN handle heterogeneous plant environments — where one production line might have Allen-Bradley Micro800 controllers communicating via EtherNet/IP, while an adjacent chiller system uses Modbus TCP, and both need to feed into the same telemetry pipeline.
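A simplified sketch of that detection sequence using plain TCP port probes. A production gateway would follow a successful connect with an actual CIP identity read or Modbus device-type register read, but the fallback logic looks like this (44818 and 502 are the standard EtherNet/IP explicit-messaging and Modbus TCP ports; the function names are illustrative):

```python
import socket

ENIP_PORT = 44818    # EtherNet/IP explicit messaging (CIP over TCP)
MODBUS_PORT = 502    # Modbus TCP

def _port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable
        return False

def detect_protocol(host, probe=_port_open):
    """Try EtherNet/IP first, then fall back to Modbus TCP."""
    if probe(host, ENIP_PORT):
        return "ethernet-ip"   # use tag-based CIP addressing
    if probe(host, MODBUS_PORT):
        return "modbus-tcp"    # load the register map for this model
    return "unknown"
```

Injecting the probe as a parameter keeps the sequencing logic testable without live hardware on the bench.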

Batch Delivery and Wire Efficiency

Once data is normalized, it needs to be efficiently packaged for upstream delivery (typically via MQTT or HTTPS). Sending one MQTT message per data point is wasteful — the MQTT overhead (fixed header, topic, QoS) can exceed the payload size for simple values.

Batching pattern:

  1. Start a collection window (e.g., 60 seconds or until batch size limit is reached)
  2. Group normalized values by timestamp into "groups"
  3. Each group contains all tag values read at that timestamp
  4. When the batch timeout expires or the size limit is reached, serialize and publish the entire batch
```json
{
  "device": "chiller-01",
  "batch": [
    {
      "timestamp": 1709292000,
      "values": [
        {"id": 1, "type": "int16", "value": 234},
        {"id": 2, "type": "float", "value": 50.125},
        {"id": 6, "type": "bool", "value": true}
      ]
    },
    {
      "timestamp": 1709292060,
      "values": [
        {"id": 1, "type": "int16", "value": 237},
        {"id": 2, "type": "float", "value": 50.250}
      ]
    }
  ]
}
```
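The four steps above can be sketched as a small collector. The `Batcher` class, its `publish` callback, and the limits here are illustrative, not a machineCDN API:

```python
import json
import time

class Batcher:
    """Collects normalized readings and publishes them as one batch
    when either the time window or the point limit is reached."""

    def __init__(self, device, publish, window_s=60, max_points=500):
        self.device = device
        self.publish = publish        # callable taking the serialized batch
        self.window_s = window_s
        self.max_points = max_points
        self.groups = {}              # timestamp -> list of value dicts
        self.count = 0
        self.started = time.monotonic()

    def add(self, timestamp, tag_id, dtype, value):
        self.groups.setdefault(timestamp, []).append(
            {"id": tag_id, "type": dtype, "value": value})
        self.count += 1
        if (self.count >= self.max_points
                or time.monotonic() - self.started >= self.window_s):
            self.flush()

    def flush(self):
        if not self.groups:
            return
        batch = {"device": self.device,
                 "batch": [{"timestamp": ts, "values": vals}
                           for ts, vals in sorted(self.groups.items())]}
        self.publish(json.dumps(batch))
        self.groups, self.count = {}, 0
        self.started = time.monotonic()
```

Grouping by timestamp means the shared poll-cycle timestamp is stored once per group instead of once per value, which is where most of the savings comes from.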

For bandwidth-constrained connections (cellular, satellite), consider binary serialization instead of JSON. A binary batch format can reduce payload size by 3–5x compared to JSON, which matters when you're paying per megabyte on a cellular link.
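To see why binary wins, here is a rough size comparison for a single reading. The 11-byte layout (2-byte tag id, 1-byte type code, 4-byte float, 4-byte timestamp) is illustrative, not a standard format:

```python
import json
import struct

reading = {"id": 2, "type": "float", "value": 50.125, "ts": 1709292000}

# JSON encoding: human-readable, but every reading repeats the key names
json_bytes = json.dumps(reading).encode()

# Binary encoding: big-endian 2-byte id, 1-byte type code, 4-byte float,
# 4-byte timestamp -- 11 bytes total (layout is illustrative)
TYPE_FLOAT = 2
bin_bytes = struct.pack(">HBfI", reading["id"], TYPE_FLOAT,
                        reading["value"], reading["ts"])

print(len(json_bytes), len(bin_bytes))  # JSON is several times larger
```

In a real batch format the timestamp would be stored once per group rather than per reading, widening the gap further.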

Error Handling and Resilience

Data normalization isn't just about converting values — it's about handling failures gracefully:

Communication Errors

  • Timeout (ETIMEDOUT): Device not responding; could be a network issue or a device power failure. Set the link state to DOWN and trigger reconnection logic.
  • Connection reset (ECONNRESET): TCP connection dropped. Close and re-establish.
  • Connection refused (ECONNREFUSED): Device not accepting connections. May be in commissioning mode or at connection limit.

Data Quality

  • Read succeeds but value is implausible: A temperature sensor reading -273°C (below absolute zero) or 999.9°C (sensor wiring fault). The normalization layer should flag these with data quality indicators, not silently forward them.
  • Sensor stuck at the same value: If a process value hasn't changed for an unusually long time (hours for a temperature, minutes for a vibration sensor), it may indicate a sensor failure rather than a stable process.
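A sketch of how a normalization layer might attach quality flags instead of silently forwarding implausible readings; the function name, flag strings, and thresholds are all illustrative:

```python
def assess_quality(value, lo, hi, history, stuck_limit=20):
    """Attach a quality flag to a reading.

    lo/hi are the physically plausible bounds for the sensor;
    stuck_limit is how many identical consecutive readings
    suggest a frozen sensor rather than a stable process.
    """
    if value < lo or value > hi:
        return "BAD_RANGE"      # e.g. -273 C, or a 999.9 wiring-fault code
    if len(history) >= stuck_limit and all(
            v == value for v in history[-stuck_limit:]):
        return "SUSPECT_STUCK"  # forward the value, but flag it
    return "GOOD"
```

Downstream analytics can then exclude `BAD_RANGE` points and alert on `SUSPECT_STUCK` tags, instead of training models on sensor faults.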

Reconnection Strategy

When communication with a device is lost:

  1. Close the connection cleanly (flush buffers, release resources)
  2. Wait before reconnecting (backoff to avoid hammering a failed device)
  3. On reconnection, force-read all tags (the device state may have changed while disconnected)
  4. Re-deliver the link state change event so downstream systems know the device was briefly offline
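Steps 1 and 2 can be sketched as a jittered exponential backoff loop, assuming a hypothetical `connect` callable that raises `OSError` on failure; the jitter keeps a fleet of gateways from hammering a recovered device in lockstep:

```python
import random
import time

def reconnect(connect, max_delay=60.0, sleep=time.sleep):
    """Retry connect() with jittered exponential backoff until it succeeds."""
    delay = 1.0
    while True:
        try:
            conn = connect()
            # Caller should now force-read all tags and re-deliver the
            # link-state event (steps 3 and 4), since the device state
            # may have changed while disconnected.
            return conn
        except OSError:
            # Wait before retrying; jitter spreads out simultaneous retries
            sleep(delay + random.uniform(0, delay / 2))
            delay = min(delay * 2, max_delay)
```

Passing `sleep` as a parameter makes the backoff schedule testable without real delays.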

Practical Normalization Checklist

For every new device you integrate:

  • Identify the protocol (Modbus TCP, Modbus RTU, EtherNet/IP) and connection parameters
  • Obtain the complete register map from the manufacturer
  • Verify addressing convention (0-based vs. 1-based registers)
  • For each tag: determine data type, register count, and byte ordering
  • Test float decoding with a known value (read a sensor showing a known temperature)
  • Determine scaling factors (divide by 10? linear transform?)
  • Identify packed status words and document bit assignments
  • Map contiguous registers for coalesced reads
  • Configure change detection (RBE) with appropriate deadbands
  • Set polling intervals per tag group (fast-changing values vs. slow-changing configuration)
  • Test error scenarios (unplug the device, observe recovery behavior)
  • Validate end-to-end: compare the value on the device's local display to what appears in your cloud dashboard

The Bigger Picture

Data normalization is where the theoretical elegance of IIoT architectures meets the messy reality of installed industrial equipment. Every plant is a museum of different vendors, different decades of technology, and different engineering conventions.

The platforms that succeed in production — like machineCDN — are the ones that invest heavily in this normalization layer. Because once raw register 0x4248 reliably becomes 50.0°C with the correct timestamp, units, and quality metadata, everything downstream — analytics, alarming, machine learning, digital twins — actually works.

It's not glamorous work. But it's the difference between an IIoT proof-of-concept that demos well and a production system that a plant manager trusts.

Best Downtime Tracking Software for Manufacturing in 2026: Stop Losing $260K Per Hour

· 9 min read
MachineCDN Team
Industrial IoT Experts

The average manufacturer loses $260,000 per hour of unplanned downtime. That number comes from Aberdeen Group research, and it hasn't gotten better — if anything, the cost per hour has increased as production lines become more automated and interdependent. Yet most plants still track downtime with clipboards, Excel spreadsheets, and the occasional SCADA alarm log.

Best Energy Monitoring Software for Manufacturing in 2026: Track Consumption, Cut Costs, Hit ESG Targets

· 10 min read
MachineCDN Team
Industrial IoT Experts

Energy costs are the second-largest operating expense for most manufacturers — right behind labor. In 2026, with industrial electricity rates rising 4–8% annually across most markets and ESG reporting requirements tightening, the ability to monitor energy consumption at the machine level has shifted from "nice-to-have" to "operationally critical."

EtherNet/IP and CIP: How Industrial Controllers Actually Communicate [2026 Guide]

· 12 min read

If you've spent time on a plant floor wiring up Allen-Bradley PLCs, you've used EtherNet/IP — whether you realized you were speaking CIP or not. But most engineers treat the protocol like a black box: plug in the cable, configure the scanner, pray the I/O updates arrive on time.

This guide breaks open how EtherNet/IP actually works at the protocol level — the CIP object model, the difference between implicit and explicit messaging, how tag-based addressing resolves data paths, and the real-world timing constraints that catch teams off guard during commissioning.

IIoT for Chemical Manufacturing: How to Monitor Reactors, Distillation Columns, and Process Equipment in Real Time

· 9 min read
MachineCDN Team
Industrial IoT Experts

Chemical manufacturing is one of the most complex — and highest-stakes — environments for industrial IoT deployment. A pharmaceutical plant or specialty chemical facility runs continuous processes where temperature deviations of 2°C, pressure spikes of 5 PSI, or flow rate fluctuations of 0.5 GPM can mean the difference between a quality product and a batch rejection worth $100,000 or more.