
EtherNet/IP Implicit vs Explicit Messaging: What Plant Engineers Actually Need to Know [2026]

· 11 min read

[Figure: EtherNet/IP CIP Protocol Architecture]

If you've ever tried to pull real-time data from an Allen-Bradley PLC over EtherNet/IP and found yourself staring at timeouts, missed packets, or inexplicable latency spikes — you've probably run into the implicit vs. explicit messaging divide without realizing it.

EtherNet/IP is one of the most widely deployed industrial Ethernet protocols, yet the nuances of its messaging model trip up even experienced automation engineers. This guide breaks down what actually matters when you're connecting PLCs to edge gateways, SCADA systems, or IIoT platforms like machineCDN.

CIP: The Protocol Inside the Protocol

EtherNet/IP is really just a transport wrapper around the Common Industrial Protocol (CIP). CIP is the application layer that defines how devices discover each other, exchange data, and manage connections. Understanding CIP is understanding EtherNet/IP — everything else is TCP/UDP plumbing.

CIP organizes everything into objects. Every device has a set of objects, each with attributes you can read or write. The key objects you'll encounter:

| Object | Class ID | Purpose |
|---|---|---|
| Identity | 0x01 | Device name, serial number, vendor ID |
| Message Router | 0x02 | Routes CIP requests to the right object |
| Connection Manager | 0x06 | Manages I/O and explicit connections |
| Assembly | 0x04 | Groups data points into input/output assemblies |
| TCP/IP Interface | 0xF5 | Network configuration |
| Ethernet Link | 0xF6 | Link-layer statistics |

When your edge gateway reads a tag like capacity_utilization from a Micro800 or CompactLogix PLC, it's ultimately reading an attribute from a CIP object — the protocol just hides this behind a friendlier tag-name interface.

Explicit Messaging: The Request-Response Model

Explicit messaging is CIP's "ask and receive" mode. Your client sends a request over TCP port 44818, the device processes it, and sends a response. It's conceptually identical to an HTTP GET — connected, reliable, and sequential.

How It Actually Works

  1. TCP handshake with the PLC on port 44818
  2. RegisterSession — establishes a CIP session, returns a session handle
  3. SendRRData (Send Request/Reply Data) — wraps your CIP service request
  4. Device processes the request and returns a response in the same TCP connection
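Step 2 is worth seeing at the byte level. The sketch below builds a RegisterSession request — a 24-byte little-endian encapsulation header followed by 4 bytes of command-specific data. It's a minimal illustration of the wire format, not a working client.

```python
import struct

ENIP_REGISTER_SESSION = 0x0065  # encapsulation command code

def build_register_session() -> bytes:
    """Build a RegisterSession request: 24-byte encapsulation
    header plus 4 bytes of command-specific data."""
    # Header fields, all little-endian:
    #   command (2), length (2), session handle (4), status (4),
    #   sender context (8), options (4)
    header = struct.pack(
        "<HHII8sI",
        ENIP_REGISTER_SESSION,
        4,            # length of the data that follows
        0,            # session handle: 0 until the PLC assigns one
        0,            # status
        b"\x00" * 8,  # sender context (echoed back by the device)
        0,            # options
    )
    # Command-specific data: protocol version 1, options flags 0
    data = struct.pack("<HH", 1, 0)
    return header + data

packet = build_register_session()
assert len(packet) == 28  # 24-byte header + 4 bytes of data
```

The PLC's reply carries a nonzero session handle in the same header position; every subsequent SendRRData request must echo that handle.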

For tag reads on Logix-family controllers, the path typically encodes:

  • Protocol type (e.g., ab-eip for Allen-Bradley EtherNet/IP)
  • Gateway IP — the PLC's network address
  • CPU type — Micro800, CompactLogix, ControlLogix, etc.
  • Tag name — the symbolic name of the data point
  • Element size and count — how many bytes per element, how many elements to read

A typical read might look like:

protocol=ab-eip
gateway=192.168.1.50
cpu=compactlogix
name=Temperature_Zone1
elem_size=4
elem_count=1

This tells the stack: "Connect to the CompactLogix at 192.168.1.50, find the tag named Temperature_Zone1, read one 4-byte (32-bit float) element."
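These fields are easy to sanity-check before handing them to a protocol stack. A small hypothetical helper — the '&'-joined attribute-string form is one common convention, but some stacks take the fields as separate parameters:

```python
def parse_tag_config(attr_string: str) -> dict:
    """Parse a '&'-separated key=value attribute string into a dict
    and precompute the expected read size."""
    fields = dict(pair.split("=", 1) for pair in attr_string.split("&"))
    # Total bytes the read should return: elements * bytes per element
    fields["total_bytes"] = int(fields["elem_size"]) * int(fields["elem_count"])
    return fields

cfg = parse_tag_config(
    "protocol=ab-eip&gateway=192.168.1.50&cpu=compactlogix"
    "&name=Temperature_Zone1&elem_size=4&elem_count=1"
)
assert cfg["total_bytes"] == 4  # one 32-bit REAL
```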

Explicit Messaging Characteristics

  • Latency: 2-10ms per request on a quiet network, 20-50ms under load
  • Throughput: Sequential — you can't pipeline requests on a single connection
  • Best for: Configuration reads, diagnostics, infrequent data access
  • Max payload: 504 bytes per CIP service response (can be extended with Large Forward Open)
  • Reliability: TCP-based, guaranteed delivery

The Hidden Cost: Tag Creation Overhead

Here's something that catches people off guard. On Logix controllers, the first time you read a symbolic tag, the controller has to resolve the tag name to an internal address. This resolution can take 5-15ms. Subsequent reads on the same connection are faster because the tag handle is cached.

If your gateway creates and destroys connections frequently (say, on each poll cycle), you're paying this resolution cost every single time. A well-designed gateway keeps connections persistent and caches tag handles across read cycles. This alone can cut your effective read latency by 40-60%.
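One way to avoid paying the resolution cost on every poll is a handle cache keyed by tag name, invalidated only when the session resets. A sketch — `resolve_fn` is a stand-in for whatever your CIP stack uses to resolve a symbolic tag:

```python
class TagHandleCache:
    """Cache symbolic-tag handles across read cycles; drop them
    all when the CIP session is re-registered."""

    def __init__(self, resolve_fn):
        self._resolve = resolve_fn   # tag name -> handle (stack-specific)
        self._handles = {}

    def handle_for(self, tag_name: str):
        if tag_name not in self._handles:
            # First access: pay the 5-15 ms resolution cost once
            self._handles[tag_name] = self._resolve(tag_name)
        return self._handles[tag_name]

    def invalidate_all(self):
        """Call after a session reset; old handles may no longer be valid."""
        self._handles.clear()

calls = []
cache = TagHandleCache(lambda name: calls.append(name) or len(calls))
cache.handle_for("Temperature_Zone1")
cache.handle_for("Temperature_Zone1")  # served from cache
assert calls == ["Temperature_Zone1"]  # resolver ran only once
```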

Implicit Messaging: The Real-Time Streaming Model

Implicit messaging is where EtherNet/IP earns its keep in real-time control. Instead of request-response, data flows continuously via UDP multicast or unicast without the overhead of individual requests.

The Connection Setup

Implicit connections are established through an explicit messaging sequence:

  1. Forward Open request (via TCP) — negotiates the connection parameters
  2. Both sides agree on:
    • RPI (Requested Packet Interval) — how often data is produced, in microseconds
    • Connection path — which assembly objects to bind
    • Transport class — Class 0 (data only) or Class 1 (data with a sequence count)
    • Connection size — max bytes per packet
  3. Once established, data flows via UDP port 2222 at the agreed RPI
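One Forward Open parameter worth calling out is the connection timeout multiplier: a one-byte code where 0 means the connection drops after 4x the RPI with no data, 1 means 8x, doubling up to 7 for 512x. A quick helper for that encoding (my reading of the CIP spec — verify against your stack):

```python
def connection_timeout_us(rpi_us: int, multiplier_code: int) -> int:
    """Connection timeout per the Forward Open timeout multiplier:
    code n means the connection drops after (4 * 2**n) missed RPIs."""
    if not 0 <= multiplier_code <= 7:
        raise ValueError("timeout multiplier code must be 0-7")
    return rpi_us * (4 << multiplier_code)

# 10 ms RPI with the default multiplier (code 0 -> 4x) times out at 40 ms
assert connection_timeout_us(10_000, 0) == 40_000
# Code 7 -> 512x, the most forgiving setting
assert connection_timeout_us(10_000, 7) == 5_120_000
```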

RPI: The Most Misunderstood Parameter

The Requested Packet Interval is essentially your sampling rate. Set it too fast and you'll flood the network with redundant data. Set it too slow and you'll miss transient events.

| RPI Setting | Typical Use Case | Network Impact |
|---|---|---|
| 2ms | Motion control, servo drives | ~500 packets/sec per connection |
| 10ms | Fast discrete I/O | ~100 packets/sec per connection |
| 50ms | Analog process values | ~20 packets/sec per connection |
| 100-500ms | Monitoring, trending | Minimal |
| 1000ms+ | Configuration data | Negligible |

The golden rule: Your RPI should match your actual process dynamics, not your "just in case" anxiety. A temperature sensor that changes over minutes doesn't need a 10ms RPI — 500ms is plenty.

For IIoT monitoring scenarios, RPIs of 100ms to 1000ms are typically appropriate. You're tracking trends and detecting anomalies, not closing servo loops. Platforms like machineCDN are designed to ingest data at these intervals and apply server-side intelligence — the edge gateway doesn't need millisecond resolution to detect that a motor bearing temperature is trending upward.
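The packets-per-second figures in the table above fall out of simple arithmetic — each implicit connection produces one packet per RPI in each direction:

```python
def implicit_load(rpi_ms: float, connections: int = 1) -> float:
    """Packets per second per direction at a given RPI, across
    `connections` concurrent implicit connections."""
    return connections * (1000.0 / rpi_ms)

assert implicit_load(2) == 500.0    # motion-control rates
assert implicit_load(10) == 100.0   # fast discrete I/O
assert implicit_load(50) == 20.0    # analog process values
```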

Implicit Messaging Characteristics

  • Latency: Deterministic — data arrives every RPI interval (±jitter)
  • Throughput: Concurrent — hundreds of connections can stream simultaneously
  • Best for: Cyclic I/O data, real-time monitoring, control loops
  • Transport: UDP — no retransmission, but sequence numbers detect missed packets
  • Multicast: Multiple consumers can subscribe to the same producer

Scanner/Adapter Architecture

In EtherNet/IP, the device that initiates the implicit connection is the scanner (typically the PLC), and the device that responds is the adapter (typically an I/O module, drive, or remote rack).

Why This Matters for Edge Gateways

When you connect an IIoT edge gateway to a PLC, the gateway typically acts as an explicit messaging client — it reaches out and reads tags on demand. It is not acting as a scanner or adapter in the implicit sense.

This is an important architectural distinction:

  • Scanner mode would require the gateway to manage Forward Open connections and consume I/O assemblies — complex, but gives you real-time streaming data
  • Explicit client mode is simpler — poll tags at your desired interval, get responses, publish to the cloud

Most IIoT gateways (including those powering machineCDN deployments) use explicit messaging with intelligent polling. Why? Because:

  1. Simplicity — No need to configure assembly objects on the PLC
  2. Flexibility — You can read any tag by name, not just pre-configured assemblies
  3. Non-intrusion — No modifications to the PLC program required
  4. Sufficient performance — For monitoring (not control), 1-60 second poll intervals are fine

When to Use Implicit Messaging for IIoT

There are cases where implicit messaging makes sense even for monitoring:

  • High tag counts — If you're reading 500+ tags from a single PLC, implicit is more efficient
  • Sub-second requirements — Process alarms that need under 100ms detection
  • Multicast scenarios — Multiple systems need the same data simultaneously
  • Deterministic timing — You need guaranteed delivery intervals for SPC/SQC

Data Types and Byte Ordering

EtherNet/IP inherits CIP's data type system. When reading tags, you need to know the data width:

| CIP Type | Width | Notes |
|---|---|---|
| BOOL | 1 byte | Stored as a byte, 0 or 1 |
| SINT | 1 byte | Signed 8-bit |
| INT | 2 bytes | Signed 16-bit |
| DINT | 4 bytes | Signed 32-bit |
| REAL | 4 bytes | IEEE 754 float |
| LINT | 8 bytes | Signed 64-bit (ControlLogix only) |

Byte order is little-endian for CIP. This trips up engineers coming from Modbus (which is big-endian). If you're bridging between the two protocols, you'll need byte-swap logic at the translation layer.
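A sketch of that byte-swap logic, assuming the common convention of packing a 32-bit float into two big-endian 16-bit Modbus registers, high word first (register order varies by vendor, so treat this as one possible mapping):

```python
import struct

def cip_real_to_modbus_registers(value: float) -> list:
    """Pack a CIP REAL into two big-endian 16-bit Modbus registers,
    high word first (one common convention; check your device)."""
    raw = struct.pack(">f", value)       # big-endian byte image of the float
    hi, lo = struct.unpack(">HH", raw)   # split into two registers
    return [hi, lo]

def modbus_registers_to_cip_real(regs) -> float:
    raw = struct.pack(">HH", regs[0], regs[1])
    return struct.unpack(">f", raw)[0]

# Round-trip a value that is exactly representable as a float
regs = cip_real_to_modbus_registers(72.5)
assert modbus_registers_to_cip_real(regs) == 72.5
```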

For array reads, the element size matters for offset calculation. Reading element N of a 32-bit array means the data starts at byte offset N * 4. Getting this wrong produces garbage values that look plausible (they're the right data type, just from the wrong array position), which makes debugging painful.
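The offset arithmetic is worth making concrete. Assuming a raw little-endian buffer returned from a REAL array read, element N starts at byte N * 4:

```python
import struct

def read_real_element(raw: bytes, index: int) -> float:
    """Decode element `index` from a buffer of 32-bit REALs (little-endian)."""
    offset = index * 4  # 4 bytes per REAL element
    return struct.unpack_from("<f", raw, offset)[0]

# Buffer holding the array [1.5, 2.5, 3.5]
buf = struct.pack("<3f", 1.5, 2.5, 3.5)
assert read_real_element(buf, 1) == 2.5
# An offset error of one whole element would return 3.5 here --
# a plausible-looking value from the wrong array position
assert read_real_element(buf, 2) == 3.5
```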

Connection Timeouts and Keepalive

One of the most common production issues with EtherNet/IP is connection timeout cascades. Here's how they happen:

  1. Network blip causes one packet to be delayed
  2. PLC times out the connection (default: 4x the RPI)
  3. Gateway has to re-register the session and re-read tags
  4. During re-establishment, tag handles are lost — all tag names need re-resolution
  5. While reconnecting, data gaps appear in your historian

Mitigation Strategies

  • Set realistic timeout multipliers. The CIP standard allows timeout multipliers from 4x up to 512x the RPI. For monitoring, use generous timeouts (e.g., 10-30 seconds) rather than tight ones.
  • Implement exponential backoff on reconnection. Hammering a PLC with connection requests during a network event makes things worse.
  • Cache tag handles and attempt to reuse them after reconnection. Some PLCs allow this; others invalidate all handles on session reset.
  • Use a connection watchdog — if no data arrives for N intervals, proactively reconnect rather than waiting for the timeout to expire.
  • Monitor connection statistics at the Ethernet Link object (Class 0xF6) — rising error counters often predict connection failures before they happen.
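The backoff strategy from the list above reduces to a few lines. A sketch — base delay, cap, and jitter are all tunable assumptions, with jitter added so multiple gateways don't reconnect in lockstep after a shared network event:

```python
import random

def backoff_delays(base=1.0, cap=60.0, jitter=0.1):
    """Yield reconnection delays in seconds: exponential growth
    up to `cap`, with +/- `jitter` fractional randomization."""
    delay = base
    while True:
        yield delay * (1 + random.uniform(-jitter, jitter))
        delay = min(delay * 2, cap)

# With jitter disabled the sequence is deterministic: 1, 2, 4, 8, 8, ...
delays = backoff_delays(base=1.0, cap=8.0, jitter=0.0)
assert [next(delays) for _ in range(5)] == [1.0, 2.0, 4.0, 8.0, 8.0]
```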

Practical Performance Benchmarks

Based on real-world deployments across plastics manufacturing, HVAC, and process control:

| Scenario | Tags | Poll Interval | Avg Latency | CPU Load on PLC |
|---|---|---|---|---|
| Single gateway, 50 tags | 50 | 1 sec | 3-5ms/tag | Under 1% |
| Single gateway, 200 tags | 200 | 5 sec | 5-8ms/tag | 2-3% |
| Three gateways, 500 tags total | 500 | 10 sec | 8-15ms/tag | 5-8% |
| One gateway, 50 tags, aggressive | 50 | 100ms | 2-4ms/tag | 3-5% |

Key insight: PLC CPU impact scales with request frequency, not tag count. Reading 200 tags in one optimized request every 5 seconds has less impact than reading 10 tags every 100ms.

Tag Grouping Optimization

When reading multiple tags, group them by:

  1. Data type and element count — Same-type tags can sometimes be read more efficiently
  2. Program scope — Tags in the same program/task on the PLC share routing paths
  3. Read interval — Don't poll slow-changing configuration values at the same rate as process variables

A well-optimized gateway might use three polling groups:

  • Fast (1-5 sec): Machine state booleans, alarm bits, running status — values that trigger immediate action
  • Medium (30-60 sec): Process variables — temperatures, pressures, flow rates, RPMs
  • Slow (5-60 min): Configuration and identity — firmware version, serial number, device type

This tiered approach reduces network traffic by 60-80% compared to polling everything at the fastest interval.
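A tiered poller like this reduces to tracking a next-due time per group. A minimal sketch — the group names and intervals mirror the tiers above, and a real gateway would attach tag lists to each group:

```python
import heapq

def polling_schedule(groups: dict, horizon: float) -> list:
    """Return (time, group) poll events up to `horizon` seconds,
    given per-group poll intervals in seconds."""
    heap = [(0.0, name) for name in groups]
    heapq.heapify(heap)
    events = []
    while heap and heap[0][0] <= horizon:
        t, name = heapq.heappop(heap)
        events.append((t, name))
        heapq.heappush(heap, (t + groups[name], name))
    return events

events = polling_schedule({"fast": 5, "medium": 30, "slow": 300}, horizon=60)
fast_polls = sum(1 for _, g in events if g == "fast")
assert fast_polls == 13  # t = 0, 5, 10, ..., 60
```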

Common Pitfalls

1. Forgetting About CPU Type

The CIP service path differs by controller family. A request formatted for CompactLogix won't work on a Micro800, even though both speak EtherNet/IP. Always verify the CPU type during gateway configuration.

2. Array Index Confusion

Some PLCs use zero-based array indexing, others use one-based. If you request MyArray[0] and get an error, try [1]. Better yet, test with known values during commissioning.

3. String Tags

CIP string tags have a length prefix followed by character data. The total allocation might be 82 bytes (2-byte length + 80 characters), but only the first length characters are valid. Reading the raw bytes without parsing the length field gives you garbage padding at the end.
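Parsing has to honor the length prefix. A sketch assuming the 2-byte little-endian length layout described above (Logix STRING tags on some controllers use a 4-byte length instead, so check your target family):

```python
import struct

def parse_cip_string(raw: bytes) -> str:
    """Parse a length-prefixed CIP string: little-endian 16-bit
    length, then the character buffer (only `length` bytes valid)."""
    (length,) = struct.unpack_from("<H", raw, 0)
    return raw[2:2 + length].decode("ascii")

# An 82-byte allocation where only the first 5 characters are valid
raw = struct.pack("<H", 5) + b"PUMP1" + b"\x00" * 75
assert parse_cip_string(raw) == "PUMP1"  # padding correctly discarded
assert len(raw) == 82
```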

4. Assuming All Controllers Support Symbolic Access

Older SLC 500 and PLC-5 controllers use file-based addressing (e.g., N7:0, F8:3), not symbolic tag names. Your gateway needs to handle both addressing modes.

5. Ignoring Forward Open Limits

Every PLC has a maximum number of concurrent CIP connections (typically 32-128 for CompactLogix, more for ControlLogix). If your gateway, HMI, SCADA, historian, and three other systems all connect simultaneously, you can hit this limit — and the symptom is intermittent connection refusals.

Choosing Your Messaging Strategy

| Factor | Use Explicit | Use Implicit |
|---|---|---|
| Tag count | Under 200 per PLC | Over 200 per PLC |
| Update rate needed | Over 500ms | Under 500ms |
| PLC modification allowed | No | Yes (assembly config) |
| Multiple consumers | No | Yes (multicast) |
| Deterministic timing required | No | Yes |
| Gateway complexity budget | Low | High |
| IIoT monitoring use case | ✅ Almost always | Rarely needed |

For the vast majority of IIoT monitoring and predictive maintenance scenarios — the use cases machineCDN was built for — explicit messaging with smart polling is the right choice. It's simpler to deploy, doesn't require PLC program changes, and delivers the data fidelity you need for trend analysis and anomaly detection.

What's Next

EtherNet/IP continues to evolve. The Time-Sensitive Networking (TSN) extensions coming in the next revision will blur the line between implicit and explicit messaging by providing deterministic delivery guarantees at the Ethernet layer itself. This will make EtherNet/IP competitive with PROFINET IRT for hard real-time applications — but for monitoring and IIoT, the fundamentals covered here will remain relevant for years to come.


machineCDN connects to EtherNet/IP controllers natively, handling tag resolution, connection management, and data batching so your team can focus on process insights rather than protocol plumbing. Learn more →