
187 posts tagged with "Industrial IoT"

Industrial Internet of Things insights and best practices


Event-Driven Tag Delivery in IIoT: Why Polling Everything at Fixed Intervals Is Wasting Your Bandwidth [2026]

· 11 min read


Most IIoT deployments start the same way: poll every PLC register every second, serialize all values to JSON, and push everything to the cloud over MQTT. It works — until your cellular data bill arrives, or your broker starts choking on 500,000 messages per day from a single gateway, or you realize that 95% of those messages contain values that haven't changed since the last read.

The reality of industrial data is that most values don't change most of the time. A chiller's tank temperature drifts by a fraction of a degree per minute. A blender's motor state is "running" for 8 hours straight. A conveyor's alarm register reads zero all day — until the instant it doesn't, and that instant matters more than the previous 86,400 identical readings.

This guide covers a smarter approach: event-driven tag delivery, where the edge gateway reads at regular intervals but only transmits when something actually changes — and when something does change, it can trigger reads of related tags for complete context.

The Problem with Fixed-Interval Everything

Let's quantify the waste. Consider a typical industrial chiller with 10 compressor circuits, each exposing 16 process tags (temperatures, pressures, flow rates) and 3 alarm registers:

Tags per circuit:  16 process + 3 alarm = 19 tags
Total tags: 10 circuits × 19 = 190 tags
Poll interval: All at 1 second

In JSON format, with a timestamp, tag ID, and value, each data point is roughly 50 bytes. Per second, that's:

190 tags × 50 bytes = 9,500 bytes/second
= 570 KB/minute
= 34.2 MB/hour
= 821 MB/day

Over a cellular connection at $5/GB, that's $4.10/day per chiller — just for data that's overwhelmingly identical to what was sent one second ago.
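The arithmetic above is easy to sanity-check in a few lines. This is a back-of-envelope sketch; the 50-byte-per-point figure and $5/GB rate are the rough estimates from the text, not measured values:

```python
# Back-of-envelope bandwidth estimate for the chiller example above.
TAGS = 190
BYTES_PER_POINT = 50           # rough JSON size: timestamp + tag ID + value
SECONDS_PER_DAY = 86_400
COST_PER_GB = 5.00             # assumed cellular rate, USD

bytes_per_day = TAGS * BYTES_PER_POINT * SECONDS_PER_DAY
mb_per_day = bytes_per_day / 1_000_000
cost_per_day = (bytes_per_day / 1_000_000_000) * COST_PER_GB

print(f"{mb_per_day:.0f} MB/day, ${cost_per_day:.2f}/day")  # → 821 MB/day, $4.10/day
```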

Now let's separate the tags by their actual change frequency:

Tag Type               Count   Actual Change Frequency   % of Total Data
Process temperatures   100     Every 30-60 seconds       52.6%
Process pressures      50      Every 10-30 seconds       26.3%
Flow rates             10      Every 5-15 seconds        5.3%
Alarm bits             30      ~1-5 times per day        15.8%

Those 30 alarm registers — 15.8% of your data volume — change roughly 5 times per day. You're transmitting them 86,400 times. That's a 17,280x overhead on alarm data.

The Three Pillars of Event-Driven Delivery

A well-designed edge gateway implements three complementary strategies:

1. Compare-on-Read (Change Detection)

The simplest optimization: after reading a tag value from the PLC, compare it against the last transmitted value. If it hasn't changed, don't send it.

The implementation is straightforward:

# Pseudocode — NOT from any specific codebase
def should_deliver(tag, new_value, new_status):
    # Always deliver the first reading
    if not tag.has_been_read:
        return True

    # Always deliver on status change (device went offline/online)
    if tag.last_status != new_status:
        return True

    # Compare values if the compare flag is enabled
    if tag.compare_enabled:
        if tag.last_value != new_value:
            return True
        return False  # Value unchanged, skip

    # If compare is disabled, always deliver
    return True

Which tags should use change detection?

  • Alarm/status registers: Always. These are event-driven by nature — you need the transitions, not the steady state.
  • Digital I/O: Always. Binary values either changed or they didn't.
  • Configuration registers: Always. Software version numbers, setpoints, and device parameters change rarely.
  • Temperatures and pressures: Situational. If the process is stable, most readings are identical. But if you need trending data for analytics, you may want periodic delivery regardless.
  • Counter registers: Never. Counters increment continuously — every reading is "different" — and you need the raw values for accurate rate calculations.

The gotcha with floating-point comparison: Comparing IEEE 754 floats for exact equality is unreliable due to rounding. For float-typed tags, use a deadband:

# Apply a deadband for float comparison
def float_changed(old_val, new_val, deadband=0.1):
    return abs(new_val - old_val) > deadband

A temperature deadband of 0.1°F means you'll transmit when the temperature moves meaningfully, but ignore sensor noise.
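To see the deadband in action, here's a quick sketch that runs a noisy but stable temperature series through the helper above (the readings are made up for illustration):

```python
def float_changed(old_val, new_val, deadband=0.1):
    # Deliver only when the move exceeds the deadband
    return abs(new_val - old_val) > deadband

# Noisy but stable readings around 72.3°F: only one crosses the deadband
last_sent = 72.3
readings = [72.31, 72.28, 72.33, 72.47, 72.45]
sent = []
for r in readings:
    if float_changed(last_sent, r):
        sent.append(r)
        last_sent = r

print(sent)  # → [72.47]
```

Four of the five readings are suppressed as sensor noise; only the meaningful 0.17° move is transmitted.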

2. Dependent Tags (Contextual Reads)

Here's where event-driven delivery gets powerful. Consider this scenario:

A chiller's compressor status word is a 16-bit register where each bit represents a different state: running, loaded, alarm, lockout, etc. You poll this register every second with change detection enabled. When bit 7 flips from 0 to 1 (alarm condition), you need more than just the status word — you need the discharge pressure, suction temperature, refrigerant level, and superheat at that exact moment to diagnose the alarm.

The solution: dependent tag chains. When a parent tag's value changes, the gateway immediately triggers a forced read of all dependent tags, delivering the complete snapshot:

Parent Tag:    Compressor Status Word (polled every 1s, compare=true)
Dependent Tags:
├── Discharge Pressure (read only when status changes)
├── Suction Temperature (read only when status changes)
├── Refrigerant Liquid Temp (read only when status changes)
├── Superheat (read only when status changes)
└── Subcool (read only when status changes)

In normal operation, the gateway reads only the status word — one register per second per compressor. When the status word changes, it reads 6 registers total and delivers them as a single timestamped group. The result:

  • Steady state: the gateway reads 1 register/second, but with change detection enabled it transmits nothing while the value holds
  • Event triggered: 6 registers at once → ~300 bytes, delivered once at the moment of change
  • vs. polling everything: 6 registers/second → 300 bytes/second, continuously

Bandwidth savings: well over 99% during steady state, with zero data loss at the moment that matters.
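The dependent-read pattern can be sketched in a few lines. Everything here is illustrative: read_register stands in for a real PLC driver call, and the register addresses are invented for the example:

```python
# Simulated PLC register map: 100 = compressor status word,
# 201-205 = pressures/temps read only when the status changes.
REGISTERS = {100: 0, 201: 1872, 202: 418, 203: 655, 204: 121, 205: 98}

def read_register(addr):
    return REGISTERS[addr]   # stand-in for a real driver call

PARENT = 100
DEPENDENTS = [201, 202, 203, 204, 205]
last_status = read_register(PARENT)

def poll_once():
    """One polling cycle: read the parent; on change, snapshot dependents."""
    global last_status
    status = read_register(PARENT)
    if status == last_status:
        return None          # steady state: nothing to transmit
    last_status = status
    # Forced reads of all dependents, delivered as one timestamped group
    return {"status": status, **{a: read_register(a) for a in DEPENDENTS}}

print(poll_once())           # → None (steady state)
REGISTERS[100] |= 0b1000_0000   # alarm bit 7 flips
snapshot = poll_once()
print(snapshot)              # full 6-register snapshot at the moment of change
```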

3. Calculated Tags (Bit-Level Decomposition)

Industrial PLCs often pack multiple boolean signals into a single 16-bit or 32-bit "status word" or "alarm word." Each bit has a specific meaning defined in the PLC program documentation:

Alarm Word (uint16):
Bit 0: High Temperature Alarm
Bit 1: Low Pressure Alarm
Bit 2: Flow Switch Fault
Bit 3: Motor Overload
Bit 4: Sensor Open Circuit
Bit 5: Communication Fault
Bits 6-15: Reserved

A naive approach reads the entire word and sends it to the cloud, leaving the bit-level parsing to the backend. A better approach: the edge gateway decomposes the word into individual boolean tags at read time.

The gateway reads the parent tag (the alarm word), and for each calculated tag, it applies a shift and mask operation to extract the individual bit:

Individual Alarm = (alarm_word >> bit_position) & mask

Each calculated tag gets its own change detection. So when Bit 2 (Flow Switch Fault) transitions from 0 to 1, the gateway transmits only that specific alarm — not the entire word, and not any unchanged bits.

Why this matters at scale: A 10-circuit chiller has 30 alarm registers (3 per circuit), each 16 bits wide. That's 480 individual alarm conditions. Without bit decomposition, a single bit flip in one register transmits all 30 registers (because the polling cycle doesn't know which register changed). With calculated tags, only the one changed boolean is transmitted.
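The shift-and-mask decomposition with per-bit change detection can be sketched as follows. The bit names follow the alarm-word layout above; the function and state handling are illustrative, not from any real gateway:

```python
# Bit meanings from the alarm-word layout above (bits 6-15 reserved)
ALARM_BITS = {0: "High Temperature", 1: "Low Pressure", 2: "Flow Switch Fault",
              3: "Motor Overload", 4: "Sensor Open Circuit", 5: "Communication Fault"}

last_bits = {bit: 0 for bit in ALARM_BITS}

def decompose_and_detect(alarm_word):
    """Extract each defined bit; return only the bits that changed."""
    changed = {}
    for bit, name in ALARM_BITS.items():
        value = (alarm_word >> bit) & 1    # shift and mask
        if value != last_bits[bit]:
            changed[name] = value
            last_bits[bit] = value
    return changed

print(decompose_and_detect(0b000000))  # → {}
print(decompose_and_detect(0b000100))  # → {'Flow Switch Fault': 1}
```

Only the one changed boolean is reported, even though the gateway read the entire 16-bit word.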

Batching: Grouping Efficiency

Even with change detection, transmitting each changed tag as an individual MQTT message creates excessive overhead. MQTT headers, TLS framing, and TCP acknowledgments add 80-100 bytes of overhead per message. A 50-byte tag value in a 130-byte envelope is 62% overhead.

The solution: time-bounded batching. The gateway accumulates changed tag values into a batch, then transmits the batch when either:

  1. The batch reaches a size threshold (e.g., 4KB of accumulated data)
  2. A time limit expires (e.g., 10-30 seconds since the batch started collecting)

The batch structure groups values by timestamp:

{
  "groups": [
    {
      "ts": 1709335200,
      "device_type": 1018,
      "serial_number": 2411001,
      "values": [
        {"id": 1, "values": [245]},
        {"id": 6, "values": [187]},
        {"id": 7, "values": [42]}
      ]
    }
  ]
}

Critical exception: alarm tags bypass batching. When a status register changes, you don't want the alarm notification sitting in a batch buffer for 30 seconds. Alarm tags should be marked as do_not_batch — they're serialized and transmitted immediately as individual messages with QoS 1 delivery confirmation.

This creates a two-tier delivery system:

Data Type          Delivery                     Latency         Batching
Process values     Change-detected, batched     10-30 seconds   Yes
Alarm/status bits  Change-detected, immediate   <1 second       No
Periodic values    Time-based, batched          10-60 seconds   Yes

Binary vs. JSON: The Encoding Decision

The batch payload format has a surprisingly large impact on bandwidth. Consider a batch with 50 tag values:

JSON format:

{"groups":[{"ts":1709335200,"device_type":1018,"serial_number":2411001,"values":[{"id":1,"values":[245]},{"id":2,"values":[187]},...]}]}

Typical size: 2,500-3,000 bytes for 50 values

Binary format:

Header:        1 byte   (magic byte 0xF7)
Group count:   4 bytes
Per group:
  Timestamp:     4 bytes
  Device type:   2 bytes
  Serial number: 4 bytes
  Value count:   4 bytes
  Per value:
    Tag ID:      2 bytes
    Status:      1 byte
    Value count: 1 byte
    Value size:  1 byte  (1=bool/int8, 2=int16, 4=int32/float)
    Values:      1-4 bytes each

Typical size: 400-600 bytes for 50 values

That's a 5-7x reduction — from 3KB to ~500 bytes per batch. Over cellular, this is transformative. A device that transmits 34 MB/day in JSON drops to 5-7 MB/day in binary, before even accounting for change detection.

The trade-off: binary payloads require a schema-aware decoder on the cloud side. Both the gateway and the backend must agree on the encoding format. In practice, most production IIoT platforms use binary encoding for device-to-cloud telemetry and JSON for cloud-to-device commands (where human readability matters and message volume is low).
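An encoder for the binary layout above is only a few struct.pack calls. This is a sketch under assumptions: little-endian byte order, a default value size of 2 bytes, and dict keys matching the JSON example earlier — a real format would pin all of these down in a shared schema:

```python
import struct

MAGIC = 0xF7
SIZE_FMT = {1: "<b", 2: "<h", 4: "<i"}   # value size → struct format code

def encode_batch(groups):
    """Pack batches per the layout above (illustrative, little-endian)."""
    out = struct.pack("<BI", MAGIC, len(groups))       # header + group count
    for g in groups:
        out += struct.pack("<IHII", g["ts"], g["device_type"],
                           g["serial_number"], len(g["values"]))
        for v in g["values"]:
            size = v.get("size", 2)                    # default: int16
            out += struct.pack("<HBBB", v["id"], v.get("status", 0),
                               len(v["values"]), size)
            for val in v["values"]:
                out += struct.pack(SIZE_FMT[size], val)
    return out

batch = {"ts": 1709335200, "device_type": 1018, "serial_number": 2411001,
         "values": [{"id": 1, "values": [245]},
                    {"id": 6, "values": [187]},
                    {"id": 7, "values": [42]}]}
payload = encode_batch([batch])
print(len(payload))   # header 5 + group header 14 + 3 × (5 + 2) = 40 bytes
```

The same three values serialized as JSON (see the batch example earlier) run well over 100 bytes; here they fit in 40.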

The Hourly Reset: Catching Drift

One subtle problem with pure change detection: if a value drifts by tiny increments — each below the comparison threshold — the cloud's cached value can slowly diverge from reality. After hours of accumulated micro-drift, the dashboard shows 72.3°F while the actual temperature is 74.1°F.

The solution: periodic forced reads. Every hour (or at another configurable interval), the gateway resets all "read once" flags and forces a complete read of every tag, delivering all current values regardless of change. This acts as a synchronization pulse that corrects any accumulated drift and confirms that all devices are still online.

The hourly reset typically generates one large batch — a snapshot of all 190 tags — adding roughly 10-15KB once per hour. That's negligible compared to the savings from change detection during the other 3,599 seconds.
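The reset logic itself is trivial. Here's a minimal sketch of the synchronization pulse: when the interval elapses, every tag's "read once" flag is cleared so the next cycle force-reads and delivers everything (the tag dicts and next_cycle helper are invented for illustration):

```python
import time

RESYNC_INTERVAL_S = 3600   # hourly full-read sync pulse (configurable)

def next_cycle(tags, last_resync, now=None):
    """Return (tags to force-read, new resync timestamp).
    All tags when the interval has elapsed; otherwise none, and normal
    change-detected polling handles the cycle."""
    now = time.monotonic() if now is None else now
    if now - last_resync >= RESYNC_INTERVAL_S:
        for t in tags:
            t["has_been_read"] = False   # reset the "read once" flags
        return tags, now
    return [], last_resync

tags = [{"id": i, "has_been_read": True} for i in range(190)]
forced, ts = next_cycle(tags, last_resync=0, now=3600)
print(len(forced))   # → 190 (full snapshot of every tag)
```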

Quantifying the Savings

Let's revisit our 10-circuit chiller example with event-driven delivery:

Before (fixed interval, everything at 1s):

190 tags × 86,400 seconds × 50 bytes = 821 MB/day

After (event-driven with change detection):

Process values: 160 tags × avg 2 changes/min × 1440 min × 50 bytes = 23 MB/day
Alarm bits: 30 tags × avg 5 changes/day × 50 bytes = 7.5 KB/day
Hourly resets: 190 tags × 24 resets × 50 bytes = 228 KB/day
Overhead (headers, keepalives): ≈ 2 MB/day
──────────────────────────────────────────────────────
Total: ≈ 25.2 MB/day

With binary encoding instead of JSON:

≈ 25.2 MB/day ÷ 5.5 (binary compression) ≈ 4.6 MB/day

Net reduction: 821 MB → 4.6 MB = 99.4% bandwidth savings.

On a $5/GB cellular plan, that's $4.10/day → $0.02/day per chiller.

Implementation Checklist

If you're building or evaluating an edge gateway for event-driven tag delivery, here's what to look for:

  • Per-tag compare flag — Can you enable/disable change detection per tag?
  • Per-tag polling interval — Can fast-changing and slow-changing tags have different read rates?
  • Dependent tag chains — Can a parent tag's change trigger reads of related tags?
  • Bit-level calculated tags — Can alarm words be decomposed into individual booleans?
  • Bypass batching for alarms — Are alarm tags delivered immediately, bypassing the batch buffer?
  • Binary encoding option — Can the gateway serialize in binary instead of JSON?
  • Periodic forced sync — Does the gateway do hourly (or configurable) full reads?
  • Link state tracking — Is device online/offline status treated as a first-class event?

How machineCDN Handles Event-Driven Delivery

machineCDN's edge gateway implements all of these strategies natively. Every tag in the device configuration carries its own polling interval, change detection flag, and batch/immediate delivery preference. Alarm registers are automatically configured for 1-second polling with change detection and immediate delivery. Process values use configurable intervals with batched transmission. The gateway supports both JSON and compact binary encoding, with automatic store-and-forward buffering that retains data through connectivity outages.

The result: plants running machineCDN gateways over cellular connections typically see 95-99% lower data volumes compared to naive fixed-interval polling — without losing a single alarm event or meaningful process change.


Tired of paying for the same unchanged data point 86,400 times a day? machineCDN delivers only the data that matters — alarms instantly, process values on change, with full periodic sync. See how much bandwidth you can save.

Equipment Failure Analysis in Manufacturing: How IIoT Data Turns Root Cause Investigation from Art to Science

· 9 min read
MachineCDN Team
Industrial IoT Experts

A hydraulic press in your stamping plant fails on a Tuesday afternoon. Your most experienced maintenance technician opens the electrical cabinet, runs some tests, replaces a component, and the machine is back up in four hours. Problem solved? Not really. Without understanding why it failed, you're just waiting for it to happen again — maybe on second shift when that technician isn't there. Equipment failure analysis is the discipline of turning breakdown events into prevention strategies. And IIoT data is transforming it from tribal knowledge into repeatable science.

Generative AI in Manufacturing Operations: What's Real, What's Coming, and What's Just Marketing

· 12 min read
MachineCDN Team
Industrial IoT Experts

Every manufacturing software vendor in 2026 has slapped a "Powered by AI" badge on their product. Generative AI — the technology behind ChatGPT, Claude, and Gemini — has gone from Silicon Valley novelty to enterprise must-have in under three years. But what does generative AI actually do for a plant manager with 200 machines, 47 maintenance work orders, and a 6 AM standup in 20 minutes?

The answer is more nuanced than the marketing suggests but more substantial than skeptics admit. Generative AI isn't going to replace your maintenance engineers. But it might make the difference between your best engineer being effective for 4 hours a day (drowning in data) and 7 hours a day (supported by an AI that organizes, summarizes, and surfaces what matters).

Here's what's real, what's emerging, and what's still vaporware.

Best Hopper Monitoring Software for Manufacturing in 2026: Real-Time Level Tracking for Hoppers, Silos, and Bins

· 8 min read
MachineCDN Team
Industrial IoT Experts

A hopper running empty during production costs more than the material inside it. When a plastics injection molder stops because the hopper ran dry, you lose 15-45 minutes of production time to restart — plus the scrap from the transition. Multiply that across three shifts and 30 machines, and hopper monitoring stops being a nice-to-have. Here's how the best manufacturing IIoT platforms handle hopper, silo, and bin level monitoring in 2026.

How to Build a Predictive Maintenance Dashboard That Your Team Will Actually Use

· 11 min read
MachineCDN Team
Industrial IoT Experts

Most predictive maintenance dashboards fail — not because the underlying technology doesn't work, but because nobody uses them. They get built by data scientists who understand algorithms but don't understand the 6 AM maintenance standup. They display impressive ML model outputs that nobody on the floor knows how to act on.

A great predictive maintenance dashboard isn't a data science project. It's a communication tool. It translates machine data into maintenance decisions.

How to Reduce Scrap Rate in Manufacturing with IIoT: A Practical Guide to Catching Defects Before They Multiply

· 9 min read
MachineCDN Team
Industrial IoT Experts

Scrap is the most visible symptom of a manufacturing process running outside its sweet spot. Every defective part represents wasted material, wasted energy, wasted machine time, and wasted labor. In most manufacturing environments, scrap rates run 2-8% of total production — and in some processes like injection molding, die casting, or pharmaceutical tableting, rates can spike to 15-20% during startup or material changeovers.

The traditional approach to scrap reduction is reactive: inspect finished parts, find defects, trace back to root cause, adjust the process, and hope the fix holds. IIoT flips this model by monitoring process parameters in real time — catching drift toward out-of-spec conditions before the first defective part is produced.

This guide covers practical strategies for using IIoT to reduce scrap rates in discrete manufacturing, with specific techniques for common processes.

How to Set Up Remote PLC Diagnostics: A Practical Guide for Manufacturing Engineers

· 12 min read
MachineCDN Team
Industrial IoT Experts

Your plant's PLCs hold the truth about every machine on the floor — cycle counts, fault codes, temperature readings, pressure levels, motor currents. The problem? That data is trapped. Getting to it requires a truck roll, a laptop, and an engineer standing next to the panel.

Remote PLC diagnostics changes that equation entirely. Instead of dispatching someone every time a machine throws a fault, you can see what's happening from anywhere — your office, your home, or a different plant 500 miles away.

How to Standardize Machine Data Across Multiple Manufacturing Plants

· 11 min read
MachineCDN Team
Industrial IoT Experts

You acquire a second plant. The first plant runs Allen-Bradley PLCs with Ethernet/IP. The new plant has Siemens S7-1500s on PROFINET and a handful of legacy Mitsubishi FX units on Modbus RTU. Both plants make the same products on similar (but not identical) equipment.

Now the VP of Operations asks a simple question: "What's our OEE across both plants?"

And you realize you can't answer it. Not because the data doesn't exist, but because "Motor Temperature" in Plant A is tag N7:15 in degrees Fahrenheit, polled every 2 seconds, while the equivalent reading in Plant B is DB10.DBD4 in degrees Celsius, polled every 10 seconds. They're measuring the same thing, but the data is completely incompatible.

This is the machine data standardization problem, and it kills multi-plant visibility for manufacturers every day.

IIoT for Cement Manufacturing: How to Monitor Kilns, Mills, and Clinker Production in Real Time

· 9 min read
MachineCDN Team
Industrial IoT Experts

Cement manufacturing is one of the most energy-intensive industries on the planet. A single rotary kiln burns through 700-1,000 kcal of thermal energy per kilogram of clinker, raw mills draw 15-25 kWh per ton of raw meal, and finish mills consume another 30-45 kWh per ton of cement. When equipment runs below optimal parameters — even by small margins — the energy waste is staggering.

Yet most cement plants still rely on SCADA screens and shift reports to monitor equipment performance. Operators watch trends on local HMIs, maintenance teams respond to failures reactively, and plant managers get production reports 24-48 hours after the fact.

IIoT is changing this by giving cement manufacturers real-time visibility into kiln temperatures, mill vibrations, bearing conditions, and energy consumption — enabling predictive maintenance, process optimization, and multi-plant fleet management that SCADA alone can't deliver.

IIoT for Textile Manufacturing: How to Monitor Looms, Spinning Frames, and Dyeing Equipment in Real Time

· 8 min read
MachineCDN Team
Industrial IoT Experts

Textile manufacturing is one of the oldest industries on earth — and one of the slowest to digitize. While automotive and aerospace plants have embraced connected factories, many textile mills still rely on operator experience and end-of-roll quality checks to catch problems. But the economics are shifting. With raw material costs rising and labor markets tightening, textile manufacturers who can squeeze 5-10% more efficiency from existing equipment gain a decisive competitive edge. Here's how Industrial IoT is transforming weaving, spinning, dyeing, and finishing operations.