5G Private Networks for Manufacturing: What They Mean for Industrial IoT in 2026

· 9 min read
MachineCDN Team
Industrial IoT Experts

Every major IIoT conference in 2025 and 2026 has had at least one vendor breathlessly promoting 5G private networks as the future of manufacturing connectivity. "Ultra-reliable low-latency communication! Network slicing! Massive machine-type communication! One million devices per square kilometer!"

The hype is real. But so is the technology — when applied to the right use cases. The problem is that most manufacturers don't need a 5G private network. They need reliable, low-latency connectivity to their PLCs. And for the vast majority of factory IIoT deployments, existing cellular (4G LTE) and industrial Ethernet already deliver that.

Let's separate the genuine use cases from the marketing noise.

AVEVA Pricing in 2026: What Does AVEVA Actually Cost for Manufacturing?

· 9 min read
MachineCDN Team
Industrial IoT Experts

AVEVA (formerly Wonderware, now part of Schneider Electric) has been a dominant force in industrial software for over 30 years. Their products — InTouch HMI, System Platform, Historian, MES, and the newer AVEVA PI System (from the OSIsoft acquisition) — run in thousands of manufacturing plants worldwide. If you've worked in manufacturing for any length of time, you've probably touched an AVEVA product.

But AVEVA's pricing has always been opaque, and the landscape has shifted dramatically since Schneider Electric completed its full acquisition in 2023. Here's what AVEVA actually costs in 2026, what's changed, and whether it still makes sense for your manufacturing operation.

Best Condition Monitoring Software 2026: 10 Platforms for Protecting Manufacturing Equipment

· 10 min read
MachineCDN Team
Industrial IoT Experts

Condition monitoring is the backbone of any serious maintenance strategy. Instead of waiting for equipment to fail or replacing parts on a calendar schedule, condition monitoring tracks the actual health of your machines in real time — vibration, temperature, pressure, current draw, oil quality, and dozens of other parameters that tell you exactly when something needs attention.

The global condition monitoring market reached $3.4 billion in 2025 and is projected to hit $5.2 billion by 2028, according to MarketsandMarkets. Manufacturers are finally moving past reactive maintenance — not because they want to, but because they can't afford not to. With unplanned downtime costing an average of $260,000 per hour in automotive manufacturing and $180,000 per hour in food & beverage, the math is compelling.

C3 AI Pricing in 2026: What Does C3 AI Actually Cost?

· 8 min read
MachineCDN Team
Industrial IoT Experts

If you've tried to get a straight answer on C3 AI pricing, you already know: it's not easy. C3 AI doesn't publish pricing on their website, doesn't offer self-service trials, and requires you to go through a multi-week enterprise sales process before you see a number. For manufacturing engineers and plant managers who just want to know if C3 AI fits the budget, this is frustrating.

How to Build Custom Machine Reports for Manufacturing: A Guide to Data-Driven Production Analysis

· 8 min read
MachineCDN Team
Industrial IoT Experts

Standard canned reports answer the questions your vendor thought to ask. Custom reports answer the questions that actually keep you up at night. When a plant manager needs to know why Machine 14's cycle times drifted 8% last Tuesday between 2pm and 4pm, no pre-built dashboard can help. Here's how modern IIoT platforms enable manufacturing engineers to build custom machine reports — and why this capability separates serious platforms from expensive dashboards.

Dependent Tag Architectures: Building Event-Driven Data Hierarchies in Industrial IoT [2026]

· 10 min read

Most IIoT platforms treat every data point as equal. They poll each tag on a fixed schedule, blast everything to the cloud, and let someone else figure out what matters. That approach works fine when you have ten tags. It collapses when you have ten thousand.

Production-grade edge systems take a fundamentally different approach: they model relationships between tags — parent-child dependencies, calculated values derived from raw registers, and event-driven reads that fire only when upstream conditions change. The result is dramatically less bus traffic, lower latency on the signals that matter, and a data architecture that mirrors how the physical process actually works.

This article is a deep technical guide to building these hierarchical tag architectures from the ground up.

[Figure: Dependent tag architecture for IIoT]

The Problem with Flat Polling

In a traditional SCADA or IIoT setup, the edge gateway maintains a flat list of tags. Each tag has an address and a polling interval:

Tag: Barrel_Temperature    Address: 40001    Interval: 1s
Tag: Screw_Speed           Address: 40002    Interval: 1s
Tag: Mold_Pressure         Address: 40003    Interval: 1s
Tag: Machine_State         Address: 40010    Interval: 1s
Tag: Alarm_Word_1          Address: 40020    Interval: 1s
Tag: Alarm_Word_2          Address: 40021    Interval: 1s

Every second, the gateway reads every tag — regardless of whether anything changed. This creates three problems:

  1. Bus saturation on serial links. A Modbus RTU link at 9600 baud can handle roughly 10–15 register reads per second. With 200 tags at 1-second intervals, you're mathematically guaranteed to fall behind.

  2. Wasted bandwidth to the cloud. If barrel temperature hasn't changed in 30 seconds, you're uploading the same value 30 times. At $0.005 per MQTT message on most cloud IoT services, that adds up.

  3. Missing the events that matter. When everything polls at the same rate, a critical alarm state change gets the same priority as a temperature reading that hasn't moved in an hour.

Introducing Tag Hierarchies

A dependent tag architecture introduces three concepts:

1. Parent-Child Dependencies

A dependent tag is one that only gets read when its parent tag's value changes. Consider a machine status word. When the status word changes from "Running" to "Fault," you want to immediately read all the associated diagnostic registers. When the status word hasn't changed, those diagnostic registers are irrelevant.

# Conceptual configuration
parent_tag:
  name: machine_status_word
  address: 40010
  interval: 1s
  compare: true
  dependent_tags:
    - name: fault_code
      address: 40011
    - name: fault_timestamp
      address: 40012-40013
    - name: last_setpoint
      address: 40014

When machine_status_word changes, the edge daemon immediately performs a forced read of all three dependent tags and delivers them in the same telemetry group — with the same timestamp. This guarantees temporal coherence: the fault code, timestamp, and last setpoint all share the exact timestamp of the state change that triggered them.
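The trigger logic can be sketched in a few lines of Python. This is a minimal illustration, not any particular gateway's implementation; the `Tag` class, `read_register` stub, and function names are hypothetical.

```python
import time

class Tag:
    """Minimal tag model for illustration (names are hypothetical)."""
    def __init__(self, name, address, dependents=None):
        self.name = name
        self.address = address
        self.dependents = dependents or []
        self.last_value = None

def read_register(address):
    # Stand-in for a real Modbus read; returns a simulated value.
    return 0

def poll_parent(parent, read_fn=read_register):
    """Read a parent tag; on change, force-read every dependent tag
    and stamp all values with the same timestamp."""
    value = read_fn(parent.address)
    if value == parent.last_value:
        return None  # unchanged: suppress delivery
    parent.last_value = value
    ts = time.time()
    group = [(parent.name, value, ts)]
    for dep in parent.dependents:
        group.append((dep.name, read_fn(dep.address), ts))
    return group  # one temporally coherent telemetry group
```

Because the timestamp is captured once and shared across the group, the fault code and setpoint delivered with a status change can never be mistaken for readings from a later polling cycle.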

2. Calculated Tags

A calculated tag is a virtual data point derived from a parent tag's raw value through bitwise operations. The most common use case: decoding packed alarm words.

Industrial PLCs frequently pack 16 boolean alarms into a single 16-bit register. Rather than polling 16 separate coil addresses (which requires 16 Modbus transactions), you read one holding register and extract each bit:

Alarm_Word_1 (uint16 at 40020):
Bit 0 → High Temperature Alarm
Bit 1 → Low Pressure Alarm
Bit 2 → Motor Overload
Bit 3 → Emergency Stop Active
...
Bit 15 → Communication Fault

A well-designed edge gateway handles this decomposition at the edge:

parent_tag:
  name: alarm_word_1
  address: 40020
  type: uint16
  interval: 1s
  compare: true        # Only process when value changes
  do_not_batch: true   # Deliver immediately — don't wait for batch timeout
  calculated_tags:
    - name: high_temp_alarm
      type: bool
      shift: 0
      mask: 0x01
    - name: low_pressure_alarm
      type: bool
      shift: 1
      mask: 0x01
    - name: motor_overload
      type: bool
      shift: 2
      mask: 0x01
    - name: estop_active
      type: bool
      shift: 3
      mask: 0x01

The beauty of this approach:

  • One Modbus read instead of sixteen
  • Zero cloud processing — the edge already decomposed the alarm word into named boolean tags
  • Change-driven delivery — if the alarm word hasn't changed, nothing gets sent. When bit 2 flips from 0 to 1, only the changed calculated tags get delivered.
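The edge-side decomposition amounts to a shift-and-mask per configured bit. A minimal sketch, with `bit_names` as a hypothetical mapping from bit position to tag name:

```python
def decode_alarm_word(word, bit_names):
    """Decompose a packed 16-bit alarm word into named booleans.
    bit_names maps bit position -> calculated tag name."""
    return {name: bool((word >> bit) & 0x01)
            for bit, name in bit_names.items()}
```

Each returned boolean then goes through the same change-detection path as any other tag, so unchanged bits never leave the gateway.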

3. Comparison-Based Delivery

The compare flag on a tag definition tells the edge daemon to track the last-known value and suppress delivery when the new value matches. This is distinct from a polling interval — the tag still gets read on schedule, but the value only gets delivered when it changes.

This is particularly powerful for:

  • Status words and mode registers that change infrequently
  • Alarm bits where you care about transitions, not steady state
  • Setpoint registers that only change when an operator makes an adjustment

A well-implemented comparison handles type-aware equality. Comparing two float values with bitwise equality is fine for PLC registers (they're IEEE 754 representations read directly from memory — no floating-point arithmetic involved). Comparing two uint16 values is straightforward. The edge daemon should store the raw bytes, not a converted representation.
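A sketch of the raw-bytes approach (helper names are hypothetical, not any particular gateway's API): registers are kept as the bytes that came off the wire, so the equality check never touches floating-point conversion.

```python
import struct

def registers_to_raw(registers):
    """Pack 16-bit Modbus register values into their raw byte form."""
    return b"".join(struct.pack(">H", r) for r in registers)

def value_changed(last_raw, new_raw):
    """Bytewise equality: exact, type-agnostic, no float arithmetic."""
    return last_raw != new_raw
```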

Register Grouping: The Foundation

Before dependent tags can work efficiently, the underlying polling engine needs contiguous register grouping. This is the practice of combining multiple tags into a single Modbus read request when their addresses are adjacent.

Consider these five tags:

Tag A: addr 40001, type uint16  (1 register)
Tag B: addr 40002, type uint16  (1 register)
Tag C: addr 40003, type float   (2 registers)
Tag D: addr 40005, type uint16  (1 register)
Tag E: addr 40010, type uint16  (1 register)  ← gap

An intelligent polling engine groups A through D into a single Read Holding Registers call: start address 40001, quantity 5. Tag E starts a new group because of the four-register gap (40006–40009).

The grouping rules are:

  1. Same function code. You can't combine holding registers (FC03) with input registers (FC04) in one read.
  2. Contiguous addresses. Any gap breaks the group.
  3. Same polling interval. A tag polling at 1s and a tag polling at 60s shouldn't be in the same group.
  4. Maximum group size. The Modbus spec limits a single read to 125 registers (some devices impose lower limits — 50 is a safe practical maximum).

After the bulk read returns, the edge daemon dispatches individual register values to each tag definition, handling type conversion per tag (uint16, int16, float from two consecutive registers, etc.).
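The grouping rules above can be sketched as a sort-and-merge over the tag list. This assumes all tags share one function code (a real engine would also split groups on FC03 vs FC04); the tuple shape and the 50-register cap are illustrative.

```python
def group_tags(tags, max_group=50):
    """Merge contiguous, same-interval tags into bulk read requests.
    Each tag is (name, start_addr, reg_count, interval_s).
    Returns a list of (start_addr, quantity, member_names)."""
    groups = []
    for name, addr, count, interval in sorted(tags, key=lambda t: (t[3], t[1])):
        if groups:
            start, qty, members, g_int = groups[-1]
            # Extend the last group only if the interval matches, the
            # address is contiguous, and the size cap isn't exceeded.
            if (g_int == interval and addr == start + qty
                    and qty + count <= max_group):
                groups[-1] = (start, qty + count, members + [name], g_int)
                continue
        groups.append((addr, count, [name], interval))
    return [(s, q, m) for s, q, m, _ in groups]
```

Run against the five example tags, this yields one five-register read for A through D and a separate single-register read for E.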

The 32-Bit Float Problem

When a tag spans two Modbus registers (common for 32-bit integers and IEEE 754 floats), the edge daemon must handle word ordering. Some PLCs store the high word first (big-endian), others store the low word first (little-endian). A typical edge system stores the raw register pair and then calls the appropriate conversion:

  • Big-endian (AB CD): value = (register[0] << 16) | register[1]
  • Little-endian (CD AB): value = (register[1] << 16) | register[0]

For IEEE 754 floats, the 32-bit integer is reinterpreted as a floating-point value. Getting this wrong produces garbage data — a common source of "the numbers look random" support tickets.
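The two conversions can be expressed with Python's `struct` module. A minimal sketch; the function name is illustrative:

```python
import struct

def registers_to_float(hi_first, reg0, reg1):
    """Reinterpret two 16-bit Modbus registers as an IEEE 754 float.
    hi_first=True for big-endian word order (AB CD),
    hi_first=False for swapped word order (CD AB)."""
    raw = (reg0 << 16) | reg1 if hi_first else (reg1 << 16) | reg0
    # Reinterpret the 32-bit integer's bits as a float.
    return struct.unpack(">f", struct.pack(">I", raw))[0]
```

Registers 0x4248, 0x0000 decode to 50.0 in big-endian word order; feed the same pair through the wrong branch and you get the kind of garbage value that fills support queues.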

Architecture: Tying It Together

Here's how a production edge system processes a single polling cycle with dependent tags:

1. Start timestamp group (T = now)
2. For each tag in the poll list:
   a. Check if interval has elapsed since last read
   b. If not due, skip (but check if it's part of a contiguous group)
   c. Read tag (or group of tags) from PLC
   d. If compare=true and value unchanged: skip delivery
   e. If compare=true and value changed:
      i.   Deliver value (batched or immediate)
      ii.  If tag has calculated_tags: compute each one, deliver
      iii. If tag has dependent_tags:
           - Finalize current batch group
           - Force-read all dependent tags (recursive)
           - Start new batch group
   f. Update last-known value and last-read timestamp
3. Finalize timestamp group

The critical detail is step (e)(iii): when a parent tag triggers a dependent read, the current batch group gets finalized and the dependent tags are read in a forced mode (ignoring their individual interval timers). This ensures the dependent values reflect the state at the moment of the parent's change, not some future polling cycle.

Practical Considerations

On Modbus RTU, the 3.5-character silent interval between frames is mandatory. At 9600 baud with 8N1 encoding, one character takes ~1.04ms, so the minimum inter-frame gap is ~3.64ms. With a typical request frame of 8 bytes and a response frame of 5 + 2*N bytes (for N registers), a single read of 10 registers takes approximately:

Request:     8 bytes × 1.04 ms        =  8.3 ms
Turnaround:  ~3.5 ms (device processing)
Response:    (5 + 20) bytes × 1.04 ms = 26 ms
Gap:         3.64 ms
Total:       ~41.4 ms per read

This means you can fit roughly 24 read operations per second on a 9600-baud link. If you're polling 150 tags with 1-second intervals, grouping is not optional — it's survival.
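The timing arithmetic generalizes into a small estimator. The 8-byte request, 5 + 2·N response, and 3.5-character gap follow the framing described above; the fixed turnaround figure is an assumption, since real device processing time varies.

```python
def rtu_read_time_ms(n_registers, baud=9600, bits_per_char=10,
                     turnaround_ms=3.5):
    """Estimate one Modbus RTU Read Holding Registers transaction:
    8-byte request + device turnaround + (5 + 2*N)-byte response
    + the mandatory 3.5-character inter-frame gap."""
    char_ms = 1000.0 * bits_per_char / baud   # ~1.04 ms at 9600 8N1
    request = 8 * char_ms
    response = (5 + 2 * n_registers) * char_ms
    gap = 3.5 * char_ms
    return request + turnaround_ms + response + gap
```

At 9600 baud, a 10-register read comes out around 41 ms, which is where the roughly 24 reads per second ceiling comes from.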

Alarm Tag Design

For alarm words, always configure:

  • compare: true — only deliver when an alarm state changes
  • do_not_batch: true — bypass the batch timeout and deliver immediately
  • interval: 1 (1 second) — poll frequently to catch transient alarms

Process variables like temperatures and pressures can safely use longer intervals (30–60 seconds) with compare: false since trending data benefits from regular samples.

Avoiding Circular Dependencies

If Tag A is dependent on Tag B, and Tag B is dependent on Tag A, you'll create an infinite recursion in the read loop. Production systems guard against this by either:

  • Limiting dependency depth (typically 1–2 levels)
  • Tracking a "reading" flag to prevent re-entry
  • Flattening the graph at configuration parse time
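The parse-time approach can be sketched as a depth-first check that rejects cycles before the read loop ever runs. The dictionary config shape here is hypothetical:

```python
def validate_dependencies(deps):
    """Reject circular parent -> child references at config parse time.
    deps maps tag name -> list of dependent tag names.
    Raises ValueError if a cycle exists."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / in progress / done
    color = {t: WHITE for t in deps}

    def visit(tag, path):
        color[tag] = GRAY
        for child in deps.get(tag, []):
            if color.get(child, WHITE) == GRAY:
                raise ValueError(
                    "circular dependency: " + " -> ".join(path + [child]))
            if color.get(child, WHITE) == WHITE:
                visit(child, path + [child])
        color[tag] = BLACK

    for tag in list(deps):
        if color[tag] == WHITE:
            visit(tag, [tag])
```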

Hourly Full-Refresh

Even with change-driven delivery, it's good practice to force-read and deliver all tags at least once per hour. This catches any edge cases where a value changed but the change was missed (e.g., a brief network hiccup that caused a read failure during the exact moment of change). A simple approach: track the hour boundary and reset the "already read" flag on all tags when the hour rolls over.
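The hour-boundary trick can be as simple as tracking the integer hour and resetting a per-tag flag when it rolls over. A minimal sketch; the tag records here are hypothetical dicts:

```python
import time

class SyncTracker:
    """Force a full read-and-deliver of all tags once per hour,
    regardless of change detection."""
    def __init__(self, now=None):
        self.last_hour = int((now if now is not None else time.time()) // 3600)

    def check(self, tags, now=None):
        hour = int((now if now is not None else time.time()) // 3600)
        if hour != self.last_hour:
            self.last_hour = hour
            for tag in tags:
                tag["force_read"] = True   # reset the 'already read' flag
            return True
        return False
```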

How machineCDN Handles Tag Hierarchies

machineCDN's edge infrastructure supports all three relationship types natively. When you configure a device in the platform, you define parent-child dependencies, calculated alarm bits, and comparison-based delivery in the device configuration — no custom scripting required.

The platform's edge daemon handles contiguous register grouping automatically, supports both EtherNet/IP and Modbus (TCP and RTU) from the same configuration model, and provides dual-format batch delivery (JSON for debugging, binary for bandwidth efficiency). Alarm tags are delivered immediately outside the batch cycle, ensuring sub-second alert latency even when the batch timeout is set to 30 seconds.

For teams managing fleets of machines across multiple plants, this means the tag architecture you define once gets deployed consistently to every edge gateway — whether it's monitoring a chiller system with 160+ process variables or a simple TCU with 20 tags.

Key Takeaways

  1. Model relationships, not just addresses. Tags have dependencies that mirror the physical process. Your data architecture should reflect that.
  2. Use comparison to suppress noise. A status word that hasn't changed in 6 hours doesn't need 21,600 duplicate deliveries.
  3. Calculated tags eliminate cloud processing. Decompose packed alarm words at the edge — one Modbus read becomes 16 named boolean signals.
  4. Dependent reads guarantee temporal coherence. When a parent changes, all children are read with the same timestamp.
  5. Group contiguous registers ruthlessly. On serial links, the difference between grouped and ungrouped reads is the difference between working and not working.

The flat-list polling model was fine for SCADA systems monitoring 50 tags on a single HMI. For IIoT platforms handling thousands of data points across fleets of machines, hierarchical tag architectures aren't an optimization — they're the foundation.

The Digital Thread in Manufacturing: Connecting Design, Production, and Service Data for Complete Product Traceability

· 10 min read
MachineCDN Team
Industrial IoT Experts

The digital thread is one of those Industry 4.0 concepts that sounds brilliant in a conference keynote and impossibly abstract on the factory floor. The idea is simple: create an unbroken chain of data that connects every stage of a product's lifecycle — from initial design through manufacturing, testing, delivery, and field service. The execution is where things get complicated.

But here's why it matters: without a digital thread, your manufacturing data exists in silos. CAD files live in engineering. Process parameters live in the PLC. Quality records live in the QMS. Field failure data lives in the service CRM. When a customer reports a defect, tracing it back to the root cause means manually stitching together data from four or five different systems — a process that takes days or weeks.

EtherNet/IP and CIP: A Practical Guide for Plant Engineers [2026]

· 11 min read

If you've ever connected to an Allen-Bradley Micro800 or CompactLogix PLC, you've used EtherNet/IP — whether you knew it or not. It's one of the most widely deployed industrial Ethernet protocols in North America, and for good reason: it runs on standard Ethernet hardware, supports TCP/IP natively, and handles everything from high-speed I/O updates to configuration and diagnostics over a single cable.

But EtherNet/IP is more than just "Modbus over Ethernet." Its underlying protocol — the Common Industrial Protocol (CIP) — is a sophisticated object-oriented messaging framework that fundamentally changes how edge devices, gateways, and cloud platforms interact with PLCs.

This guide covers what plant engineers and IIoT architects actually need to know.

Event-Driven Tag Delivery in IIoT: Why Polling Everything at Fixed Intervals Is Wasting Your Bandwidth [2026]

· 11 min read

[Figure: Event-driven tag detection]

Most IIoT deployments start the same way: poll every PLC register every second, serialize all values to JSON, and push everything to the cloud over MQTT. It works — until your cellular data bill arrives, or your broker starts choking on 500,000 messages per day from a single gateway, or you realize that 95% of those messages contain values that haven't changed since the last read.

The reality of industrial data is that most values don't change most of the time. A chiller's tank temperature drifts by a fraction of a degree per minute. A blender's motor state is "running" for 8 hours straight. A conveyor's alarm register reads zero all day — until the instant it doesn't, and that instant matters more than the previous 86,400 identical readings.

This guide covers a smarter approach: event-driven tag delivery, where the edge gateway reads at regular intervals but only transmits when something actually changes — and when something does change, it can trigger reads of related tags for complete context.

The Problem with Fixed-Interval Everything

Let's quantify the waste. Consider a typical industrial chiller with 10 compressor circuits, each exposing 16 process tags (temperatures, pressures, flow rates) and 3 alarm registers:

Tags per circuit:  16 process + 3 alarm = 19 tags
Total tags:        10 circuits × 19 = 190 tags
Poll interval:     All at 1 second

In JSON format, with timestamp, tag ID, and value, each data point is roughly 50 bytes. Per second, that's:

190 tags × 50 bytes = 9,500 bytes/second
                    = 570 KB/minute
                    = 34.2 MB/hour
                    = 821 MB/day

Over a cellular connection at $5/GB, that's $4.10/day per chiller — just for data that's overwhelmingly identical to what was sent one second ago.

Now let's separate the tags by their actual change frequency:

Tag Type              Count   Actual Change Frequency   % of Total Data
Process temperatures    100   Every 30-60 seconds             52.6%
Process pressures        50   Every 10-30 seconds             26.3%
Flow rates               10   Every 5-15 seconds               5.3%
Alarm bits               30   ~1-5 times per day              15.8%

Those 30 alarm registers — 15.8% of your data volume — change roughly 5 times per day. You're transmitting them 86,400 times. That's a 17,280x overhead on alarm data.

The Three Pillars of Event-Driven Delivery

A well-designed edge gateway implements three complementary strategies:

1. Compare-on-Read (Change Detection)

The simplest optimization: after reading a tag value from the PLC, compare it against the last transmitted value. If it hasn't changed, don't send it.

The implementation is straightforward:

# Pseudocode — NOT from any specific codebase
def should_deliver(tag, new_value, new_status):
    # Always deliver the first reading
    if not tag.has_been_read:
        return True

    # Always deliver on status change (device went offline/online)
    if tag.last_status != new_status:
        return True

    # Compare values if compare flag is enabled
    if tag.compare_enabled:
        if tag.last_value != new_value:
            return True
        return False  # Value unchanged, skip

    # If compare disabled, always deliver
    return True

Which tags should use change detection?

  • Alarm/status registers: Always. These are event-driven by nature — you need the transitions, not the steady state.
  • Digital I/O: Always. Binary values either changed or they didn't.
  • Configuration registers: Always. Software version numbers, setpoints, and device parameters change rarely.
  • Temperatures and pressures: Situational. If the process is stable, most readings are identical. But if you need trending data for analytics, you may want periodic delivery regardless.
  • Counter registers: Never. Counters increment continuously — every reading is "different" — and you need the raw values for accurate rate calculations.

The gotcha with floating-point comparison: Comparing IEEE 754 floats for exact equality is unreliable due to rounding. For float-typed tags, use a deadband:

# Apply deadband for float comparison
def float_changed(old_val, new_val, deadband=0.1):
    return abs(new_val - old_val) > deadband

A temperature deadband of 0.1°F means you'll transmit when the temperature moves meaningfully, but ignore sensor noise.

2. Dependent Tags (Contextual Reads)

Here's where event-driven delivery gets powerful. Consider this scenario:

A chiller's compressor status word is a 16-bit register where each bit represents a different state: running, loaded, alarm, lockout, etc. You poll this register every second with change detection enabled. When bit 7 flips from 0 to 1 (alarm condition), you need more than just the status word — you need the discharge pressure, suction temperature, refrigerant level, and superheat at that exact moment to diagnose the alarm.

The solution: dependent tag chains. When a parent tag's value changes, the gateway immediately triggers a forced read of all dependent tags, delivering the complete snapshot:

Parent Tag:    Compressor Status Word (polled every 1s, compare=true)
Dependent Tags:
├── Discharge Pressure (read only when status changes)
├── Suction Temperature (read only when status changes)
├── Refrigerant Liquid Temp (read only when status changes)
├── Superheat (read only when status changes)
└── Subcool (read only when status changes)

In normal operation, the gateway reads only the status word — one register per second per compressor. Because compare is enabled, nothing is transmitted while the value holds steady. When the status word changes, it reads 6 registers total and delivers them as a single timestamped group. The result:

  • Steady state: 1 register read per second, ~0 bytes transmitted
  • Event triggered: 6 registers at once → ~300 bytes, once, at the moment of change
  • vs. polling everything: 6 registers/second → 300 bytes/second, continuously

Bandwidth savings: well over 99% during steady state, with zero data loss at the moment that matters.

3. Calculated Tags (Bit-Level Decomposition)

Industrial PLCs often pack multiple boolean signals into a single 16-bit or 32-bit "status word" or "alarm word." Each bit has a specific meaning defined in the PLC program documentation:

Alarm Word (uint16):
Bit 0: High Temperature Alarm
Bit 1: Low Pressure Alarm
Bit 2: Flow Switch Fault
Bit 3: Motor Overload
Bit 4: Sensor Open Circuit
Bit 5: Communication Fault
Bits 6-15: Reserved

A naive approach reads the entire word and sends it to the cloud, leaving the bit-level parsing to the backend. A better approach: the edge gateway decomposes the word into individual boolean tags at read time.

The gateway reads the parent tag (the alarm word), and for each calculated tag, it applies a shift and mask operation to extract the individual bit:

Individual Alarm = (alarm_word >> bit_position) & mask

Each calculated tag gets its own change detection. So when Bit 2 (Flow Switch Fault) transitions from 0 to 1, the gateway transmits only that specific alarm — not the entire word, and not any unchanged bits.

Why this matters at scale: A 10-circuit chiller has 30 alarm registers (3 per circuit), each 16 bits wide. That's 480 individual alarm conditions. Without bit decomposition, a single bit flip transmits the whole 16-bit word, and the backend must diff it against the previous reading to discover which alarm fired. With calculated tags, only the one changed boolean is transmitted, already named and typed.
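The per-bit change detection can be sketched with an XOR against the previous reading, which isolates exactly the bits that flipped (`bit_names` is a hypothetical mapping from bit position to tag name):

```python
def changed_alarm_bits(prev_word, new_word, bit_names):
    """Return only the calculated tags whose bit flipped between two
    readings of a packed alarm word, with their new boolean value."""
    diff = prev_word ^ new_word   # 1-bits mark positions that changed
    return {name: bool((new_word >> bit) & 0x01)
            for bit, name in bit_names.items()
            if (diff >> bit) & 0x01}
```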

Batching: Grouping Efficiency

Even with change detection, transmitting each changed tag as an individual MQTT message creates excessive overhead. MQTT headers, TLS framing, and TCP acknowledgments add 80-100 bytes of overhead per message. A 50-byte tag value in a 130-byte envelope is 62% overhead.

The solution: time-bounded batching. The gateway accumulates changed tag values into a batch, then transmits the batch when either:

  1. The batch reaches a size threshold (e.g., 4KB of accumulated data)
  2. A time limit expires (e.g., 10-30 seconds since the batch started collecting)

The batch structure groups values by timestamp:

{
  "groups": [
    {
      "ts": 1709335200,
      "device_type": 1018,
      "serial_number": 2411001,
      "values": [
        {"id": 1, "values": [245]},
        {"id": 6, "values": [187]},
        {"id": 7, "values": [42]}
      ]
    }
  ]
}

Critical exception: alarm tags bypass batching. When a status register changes, you don't want the alarm notification sitting in a batch buffer for 30 seconds. Alarm tags should be marked as do_not_batch — they're serialized and transmitted immediately as individual messages with QoS 1 delivery confirmation.
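Both paths can be sketched in one small class. The thresholds and the JSON envelope are illustrative, and the transport is a caller-supplied function rather than a real MQTT client:

```python
import json
import time

class TagBatcher:
    """Time- and size-bounded batching with an immediate bypass
    for alarm tags marked do_not_batch."""
    def __init__(self, publish, max_bytes=4096, max_age_s=30):
        self.publish = publish        # e.g. an MQTT publish callable
        self.max_bytes = max_bytes
        self.max_age_s = max_age_s
        self.pending = []
        self.started = None

    def add(self, tag_id, value, do_not_batch=False, now=None):
        now = now if now is not None else time.time()
        if do_not_batch:
            # Alarms skip the buffer entirely: publish immediately.
            self.publish(json.dumps({"ts": int(now), "id": tag_id, "v": value}))
            return
        if not self.pending:
            self.started = now
        self.pending.append({"id": tag_id, "v": value})
        self._flush_if_due(now)

    def _flush_if_due(self, now):
        size = len(json.dumps(self.pending))
        if size >= self.max_bytes or (now - self.started) >= self.max_age_s:
            self.publish(json.dumps({"ts": int(now), "values": self.pending}))
            self.pending, self.started = [], None
```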

This creates a two-tier delivery system:

Data Type          Delivery                      Latency        Batching
Process values     Change-detected, batched      10-30 seconds  Yes
Alarm/status bits  Change-detected, immediate    <1 second      No
Periodic values    Time-based, batched           10-60 seconds  Yes

Binary vs. JSON: The Encoding Decision

The batch payload format has a surprisingly large impact on bandwidth. Consider a batch with 50 tag values:

JSON format:

{"groups":[{"ts":1709335200,"device_type":1018,"serial_number":2411001,"values":[{"id":1,"values":[245]},{"id":2,"values":[187]},...]}]}

Typical size: 2,500-3,000 bytes for 50 values

Binary format:

Header:       1 byte   (magic byte 0xF7)
Group count:  4 bytes
Per group:
  Timestamp:      4 bytes
  Device type:    2 bytes
  Serial number:  4 bytes
  Value count:    4 bytes
  Per value:
    Tag ID:       2 bytes
    Status:       1 byte
    Value count:  1 byte
    Value size:   1 byte  (1=bool/int8, 2=int16, 4=int32/float)
    Values:       1-4 bytes each

Typical size: 400-600 bytes for 50 values

That's a 5-7x reduction — from 3KB to ~500 bytes per batch. Over cellular, this is transformative. A device that transmits 34 MB/day in JSON drops to 5-7 MB/day in binary, before even accounting for change detection.

The trade-off: binary payloads require a schema-aware decoder on the cloud side. Both the gateway and the backend must agree on the encoding format. In practice, most production IIoT platforms use binary encoding for device-to-cloud telemetry and JSON for cloud-to-device commands (where human readability matters and message volume is low).
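An encoder for the layout above can be sketched with `struct`. The big-endian byte order is an assumption, since the layout doesn't specify endianness, and float values are omitted for brevity (this sketch packs integers only):

```python
import struct

def encode_batch(groups):
    """Serialize batches per the binary layout: magic byte 0xF7,
    group count, then per-group header and per-value records."""
    out = bytearray([0xF7])                     # magic byte
    out += struct.pack(">I", len(groups))        # group count
    for g in groups:
        out += struct.pack(">IHII", g["ts"], g["device_type"],
                           g["serial_number"], len(g["values"]))
        for v in g["values"]:
            vals = v["values"]
            size = v.get("size", 2)              # bytes per value: 1/2/4
            out += struct.pack(">HBBB", v["id"], v.get("status", 0),
                               len(vals), size)
            fmt = {1: ">b", 2: ">h", 4: ">i"}[size]
            for x in vals:
                out += struct.pack(fmt, x)
    return bytes(out)
```

A single-value group encodes to 26 bytes here, versus roughly 130 bytes for the equivalent JSON object, which is where the 5-7x figure comes from once batches grow.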

The Hourly Reset: Catching Drift

One subtle problem with pure change detection: if a value drifts by tiny increments — each below the comparison threshold — the cloud's cached value can slowly diverge from reality. After hours of accumulated micro-drift, the dashboard shows 72.3°F while the actual temperature is 74.1°F.

The solution: periodic forced reads. Every hour (or at another configurable interval), the gateway resets all "read once" flags and forces a complete read of every tag, delivering all current values regardless of change. This acts as a synchronization pulse that corrects any accumulated drift and confirms that all devices are still online.

The hourly reset typically generates one large batch — a snapshot of all 190 tags — adding roughly 10-15KB once per hour. That's negligible compared to the savings from change detection during the other 3,599 seconds.

Quantifying the Savings

Let's revisit our 10-circuit chiller example with event-driven delivery:

Before (fixed interval, everything at 1s):

190 tags × 86,400 seconds × 50 bytes = 821 MB/day

After (event-driven with change detection):

Process values:  160 tags × avg 2 changes/min × 1440 min × 50 bytes = 23 MB/day
Alarm bits:       30 tags × avg 5 changes/day × 50 bytes            = 7.5 KB/day
Hourly resets:   190 tags × 24 resets × 50 bytes                    = 228 KB/day
Overhead (headers, keepalives):                                     ≈ 2 MB/day
──────────────────────────────────────────────────────
Total:                                                              ≈ 25.2 MB/day

With binary encoding instead of JSON:

≈ 25.2 MB/day ÷ 5.5 (binary encoding gain) ≈ 4.6 MB/day

Net reduction: 821 MB → 4.6 MB = 99.4% bandwidth savings.

On a $5/GB cellular plan, that's $4.10/day → $0.02/day per chiller.

Implementation Checklist

If you're building or evaluating an edge gateway for event-driven tag delivery, here's what to look for:

  • Per-tag compare flag — Can you enable/disable change detection per tag?
  • Per-tag polling interval — Can fast-changing and slow-changing tags have different read rates?
  • Dependent tag chains — Can a parent tag's change trigger reads of related tags?
  • Bit-level calculated tags — Can alarm words be decomposed into individual booleans?
  • Bypass batching for alarms — Are alarm tags delivered immediately, bypassing the batch buffer?
  • Binary encoding option — Can the gateway serialize in binary instead of JSON?
  • Periodic forced sync — Does the gateway do hourly (or configurable) full reads?
  • Link state tracking — Is device online/offline status treated as a first-class event?

How machineCDN Handles Event-Driven Delivery

machineCDN's edge gateway implements all of these strategies natively. Every tag in the device configuration carries its own polling interval, change detection flag, and batch/immediate delivery preference. Alarm registers are automatically configured for 1-second polling with change detection and immediate delivery. Process values use configurable intervals with batched transmission. The gateway supports both JSON and compact binary encoding, with automatic store-and-forward buffering that retains data through connectivity outages.

The result: plants running machineCDN gateways over cellular connections typically see 95-99% lower data volumes compared to naive fixed-interval polling — without losing a single alarm event or meaningful process change.


Tired of paying for the same unchanged data point 86,400 times a day? machineCDN delivers only the data that matters — alarms instantly, process values on change, with full periodic sync. See how much bandwidth you can save.

Equipment Failure Analysis in Manufacturing: How IIoT Data Turns Root Cause Investigation from Art to Science

· 9 min read
MachineCDN Team
Industrial IoT Experts

A hydraulic press in your stamping plant fails on a Tuesday afternoon. Your most experienced maintenance technician opens the electrical cabinet, runs some tests, replaces a component, and the machine is back up in four hours. Problem solved? Not really. Without understanding why it failed, you're just waiting for it to happen again — maybe on second shift when that technician isn't there. Equipment failure analysis is the discipline of turning breakdown events into prevention strategies. And IIoT data is transforming it from tribal knowledge into repeatable science.