Preemption standard cuts latency for high priority traffic
A new IEEE standard allows high priority frames to interrupt low priority frames in transmission, minimizing the latency of high priority traffic. For industrial control systems, it can also further the convergence of multiple networks that use differing technologies into a single Ethernet and IP infrastructure.
The traffic generator creates priority and best-effort traffic. Switch A sends preemptable traffic to Switch B. A traffic sniffer validates preemption correctness, and the traffic analyser measures overall traffic latency.
A NEW ADDITION TO THE ETHERNET STANDARD, Preemption (IEEE 802.1Qbu/802.3br), from the Institute of Electrical and Electronics Engineers (IEEE), allows a high priority frame to interrupt a low priority frame in transmission, minimizing latency for the high priority traffic. In Industrial Automation Control System (IACS) applications, preemption can further the convergence of multiple networks of differing technologies into a single Ethernet and IP infrastructure, enabling self-organizing plant operations and order-controlled production. By greatly reducing the impact of lower priority traffic on important traffic, both types of traffic can be mixed on the same link. The technology could also accelerate the spread of Ethernet for in-car networks and the replacement of earlier in-car networks used for critical control, bringing the autonomous car closer to mass market.
A switch supporting quality of service implements multiple egress queues on each port, placing incoming frames into one of these queues based on each frame's quality of service tag. When an egress port finishes transmitting a frame, it selects the next frame from the highest priority queue that has a frame waiting. Because all of these queues are serviced by a single Media Access Control (MAC) sublayer, a switch cannot abort or interrupt a transmission once started, even when a frame becomes available in a far higher priority egress queue.
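As a minimal sketch (not any vendor's implementation), strict-priority egress scheduling can be modelled like this, with queue 7 treated as the highest 802.1p priority:

```python
from collections import deque

NUM_QUEUES = 8  # one queue per 802.1p priority value

queues = [deque() for _ in range(NUM_QUEUES)]

def enqueue(frame_bytes, priority):
    """Place an incoming frame in the queue matching its QoS tag."""
    queues[priority].append(frame_bytes)

def select_next_frame():
    """Pick the frame from the highest-priority non-empty queue;
    once selected, transmission cannot be interrupted."""
    for q in reversed(queues):  # queue 7 is checked first
        if q:
            return q.popleft()
    return None

enqueue(1518, priority=0)   # best-effort, maximum size
enqueue(150, priority=6)    # time-critical control frame
print(select_next_frame())  # 150: the high priority frame goes first
```

Note that if the 1,518-byte frame had been selected an instant earlier, the 150-byte frame would have to wait out the whole transmission, which is exactly the problem preemption addresses.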
MAC sublayers share link
At its core, preemption allows two different MAC sublayers to share a single link. The MAC sublayer is responsible for enforcing frame transmission and reception rules for the Ethernet media. Preemption works by adding a MAC Merge sublayer below these two MAC sublayers, both to direct received traffic to the proper MAC and to coordinate the transmission of frames from both MAC sublayers onto the shared link. This allows one MAC, the Express MAC, to carry higher priority traffic with a lower maximum latency, while the other, the Preemptable MAC, is used for frames where latency and delay are less of a concern. Frames from the Express MAC are always given priority to the media over other traffic.
If a frame arrives from the higher layers at the Express MAC for transmission while a frame is being sent from the Preemptable MAC, the MAC Merge sublayer decides whether to interrupt the frame in progress. If interrupting would still yield valid minimum-size fragments, both for the portion already transmitted and for the remaining frame data, it interrupts the frame in progress, closing the partial transmission with a 4-byte checksum that tells the link partner the frame is not complete.
After the minimum recovery period, 96 bit times, the station may then send the frame from the Express MAC. If nothing else is waiting to be transmitted after the minimum recovery period, the continuation of the interrupted frame may be sent. In this way, the effective maximum latency of a link is reduced for Express traffic, since it is no longer necessary to wait out longer frames already in progress. Frames of 124 bytes or larger can be preempted, depending on the supported minimum of the receiving station.
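The size check made by the MAC Merge sublayer can be sketched as follows. This is a hypothetical illustration, not the standard's state machine; the 64-byte value used here is assumed to be the smallest fragment 802.3br allows, and the 4-byte checksum overhead is ignored:

```python
MIN_FRAGMENT = 64  # assumed minimum fragment size in bytes

def can_preempt(bytes_sent, frame_len, min_fragment=MIN_FRAGMENT):
    """True if interrupting now leaves valid minimum-size fragments
    on both sides of the cut."""
    remaining = frame_len - bytes_sent
    return bytes_sent >= min_fragment and remaining >= min_fragment

print(can_preempt(bytes_sent=200, frame_len=1518))  # True
print(can_preempt(bytes_sent=20, frame_len=1518))   # False: fragment sent so far too short
print(can_preempt(bytes_sent=80, frame_len=100))    # False: remainder would be too short
```

When the check fails, the Preemptable MAC simply finishes (or continues) the frame and the Express frame waits, so preemption never produces undersized fragments on the wire.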
This technology can also be used to inhibit the Preemptable MAC from beginning transmission, even if no frame is currently ready from the Express MAC. This can be useful if the system is aware that it will soon have a high priority frame to transmit and wants to have it transmitted as soon as it is ready. This can yield an even lower maximum latency in controlled environments, for instance when frames are ready at predictable intervals.
In an IACS, best effort networks are often designed around the modelled worst case delay from message transmission to receipt through the network. The biggest variable in this equation is the presence of PC-centric traffic on the network. Quality of service mitigates much of this risk through high priority queues, but there remains a risk of a time-critical packet becoming available in an egress queue just after the switch starts to service a lower priority packet. At 100 Mbps, the maximum Ethernet frame size is 1,518 bytes, with a transmission time, including preamble, SFD and inter-packet gap but excluding VLAN tagging, of 123.04µs.
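The 123.04µs figure can be checked directly; on the wire a maximum frame carries a 7-byte preamble, a 1-byte SFD and a 12-byte inter-packet gap in addition to the 1,518-byte frame:

```python
LINK_MBPS = 100                  # link speed in Mbps
wire_bytes = 7 + 1 + 1518 + 12   # preamble + SFD + max frame + IPG
tx_time_us = wire_bytes * 8 / LINK_MBPS
print(tx_time_us)  # 123.04
```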
802.3 Ethernet with Preemption disabled (left). 802.3br Ethernet with Preemption enabled (right).
Let's work through an example: a high speed packaging machine with 9 axes, where application-demanded communication rates are largely a factor of the mechanical bandwidth of gearboxes, transmission belts and the like. Every 1ms, a high priority frame of 150 bytes (typical in IACS applications) is transmitted. It has a transmission time of 12.64µs (at 100 Mbps) and must be delivered from server to client within a maximum time of 250µs. Its worst case transmission time through a two-layer star network (excluding switch latencies) must be taken as 135.68µs, because a low priority maximum-size packet may begin service just before the higher priority packet becomes available.
Not a problem, except that IACS applications are characterized by a very large number of servers connecting to a single client. In this example, how many servers can transmit 'simultaneously' and still meet their delivery deadline? The answer is the maximum delivery time minus the worst case interruption, divided by the server transmission time: (250 - 123.04)/12.64 = 10.04. Since the number of devices must be an integer, a maximum of 10 devices can be serviced. In this worst case example, the next 750µs carries no network traffic.
Now apply preemption, where the maximum-size packet can be preempted every 124 bytes, i.e. every 11.44µs. The IACS designer now only needs to consider a worst case interruption of 11.44µs. The calculation is the same, but the answer is very different: (250 - 11.44)/12.64 = 18.87, so 18 devices. For the IACS, the effective bandwidth of the network has been increased by 80%. In a linear network, the cumulative benefit grows with the number of switches the packet must traverse.
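Both device counts follow from the same formula; a short sketch of the arithmetic, using the article's 100 Mbps figures:

```python
import math

MAX_DELIVERY_US = 250.0
SERVER_TX_US = 12.64      # 150-byte high priority frame
NO_PREEMPT_US = 123.04    # maximum-size frame already in progress
PREEMPT_US = 11.44        # worst case residue with preemption enabled

def max_devices(worst_interruption_us):
    """Devices that still meet the deadline, rounded down to an integer."""
    return math.floor((MAX_DELIVERY_US - worst_interruption_us) / SERVER_TX_US)

print(max_devices(NO_PREEMPT_US))  # 10
print(max_devices(PREEMPT_US))     # 18
```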
In this example the further benefit of scheduling to the IACS is only incremental, because eliminating the last 11.44µs adds just one whole device to the network. Looking at the same equation from the perspective of the lower priority packet, however, the worst case delay must assume that all 18 IACS devices transmit simultaneously and all interrupt its progress. The additional delay of waiting behind eighteen 150 byte frames would be 244.8µs. If this cannot be accepted, then additional techniques, like scheduling, must be applied to ensure that transmission start points are appropriately sequenced. For data streams, like video streaming, a delay of this size will not be visible to the user. Similarly, if multiple high priority streams from multiple disciplines traverse the network, preemption alone may not allow the designer to guarantee all maximum latencies are met, and it may be necessary to implement further enhancements like scheduling.
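The 244.8µs figure implies each 150-byte express frame occupies the wire for 13.6µs, i.e. including preamble, SFD and inter-packet gap at 100 Mbps; an assumption consistent with the article's numbers:

```python
express_frames = 18
wire_bytes = 150 + 8 + 12            # frame + preamble/SFD + inter-packet gap
per_frame_us = wire_bytes * 8 / 100  # at 100 Mbps
print(round(express_frames * per_frame_us, 1))  # 244.8
```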
Evaluate and validate the preempted traffic format. Compare express and preempted traffic latency and jitter.
In this sample Industrial Automation Control System application, moving to Gbps offers a greater reward than preemption alone. It reduces all of the transmission times by a factor of 10, but it does not change the fundamental dynamics of the mechanical system, so the application-driven packet rates do not change. Applying the same mathematics, (maximum delivery time - worst case interruption)/(server transmission time), the maximum number of devices that can be on a network without preemption is 188, and with preemption 196; only a 4% improvement.
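The same arithmetic at 1 Gbps, with every wire time divided by 10:

```python
import math

server_tx_us = 1.264     # 150-byte frame at 1 Gbps
no_preempt_us = 12.304   # maximum-size frame already in progress
preempt_us = 1.144       # worst case residue with preemption

without = math.floor((250 - no_preempt_us) / server_tx_us)
with_pre = math.floor((250 - preempt_us) / server_tx_us)
print(without, with_pre)  # 188 196
```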
This migration to Gbps is not practical for many systems; in brownfield, retrofit and high electromagnetic noise environments, preemption (and scheduling) may be far more easily deployable.
A public demonstration showing interoperability and benefits of using preemption was shown in the Avnu Alliance booth at the 2016 IEEE-SA Ethernet & IP @ Automotive Technology Day by three member companies that play roles in the automotive and industrial ecosystem: test tool supplier (Ixia), silicon supplier (Renesas) and conformance test provider (University of New Hampshire Interoperability Lab).
The Avnu Alliance is a community building an ecosystem for diverse applications where precise timing is critical to moving data across today's crowded networks. The Alliance, in conjunction with other complementary standards bodies and alliances, drives ecosystems built on open standards in the professional AV, automotive, industrial control and consumer industries.
Paul Brooks, Business Development at Rockwell Automation; Peter Scruton, Manager, Embedded Systems Technologies at The University of New Hampshire InterOperability Laboratory (UNH-IOL); and Bogdan Tenea, Product Specialist at Ixia.