Industrial Ethernet Book Issue 7 / 27

Determinism: the full story

Using an overview of Ethernet itself, the author explains how a properly planned and installed Ethernet network can indeed be deterministic. And lots more besides!
By Paul Taylor

Ethernet, as we know it, was invented by Bob Metcalfe of Xerox's Palo Alto Research Center (PARC) almost 30 years ago. However, it had its roots a few years before that, in pioneering work by Norman Abramson at the University of Hawaii in the late 1960s.

Abramson had the task of getting the university mainframe talking to outlying terminals on other islands. A physical cable was out of the question and so Abramson looked at radio. However, he did not have enough frequencies - so some terminals had to share.

With shared radio frequencies, interference occurs and so Abramson had to find a way of regulating transmissions. He decided to use just two frequencies, with rules to dictate when and what a terminal could send. For outbound transmission one frequency would be used and for inbound transmission another. He also developed a system of addresses and replies.

The mainframe would send out a message carrying the address of one of the terminals. Although all terminals would receive it, only the one it was addressed to would actually accept it. Upon receiving the message, the terminal would check that no other terminal was using the frequency, then transmit a receipt, which would normally be picked up only by the mainframe and other nearby terminals. This worked in both directions. But what if two or more terminals transmitted at the same time? Then a collision would occur: the mainframe would not receive the message and would not reply. The terminals had a timeout, typically 200 to 1500 milliseconds, and if they had not received the reply within that period, they would resend the message. A maximum limit was placed on the number of retries before reporting an error. This scheme was the forerunner of CSMA/CD (Carrier Sense, Multiple Access, Collision Detect).
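The send-and-wait-for-receipt rule described above can be sketched in a few lines of Python. This is a minimal illustration only: the function names, the retry limit and the callback interface are assumptions for the sketch, not Abramson's actual design.

```python
MAX_RETRIES = 8  # illustrative limit before reporting an error

def send_with_ack(transmit, wait_for_ack, timeout_ms=1500):
    """Send a message, wait for a receipt, and resend after a
    timeout if no receipt arrives (an undelivered message is
    assumed to have collided with another transmission)."""
    for attempt in range(1, MAX_RETRIES + 1):
        transmit()                       # put the message on the air
        if wait_for_ack(timeout_ms):     # receipt arrived in time?
            return attempt               # delivered on this attempt
        # no receipt: assume a collision and try again
    raise RuntimeError("no acknowledgement after %d attempts" % MAX_RETRIES)
```

Note that the sender never detects the collision directly; it only infers one from the missing receipt, which is exactly the inefficiency Metcalfe later engineered away.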

When Bob Metcalfe was tasked with connecting Xerox's latest invention - a laser printer - to another of its inventions - a PC - he came across Abramson's work and, with some re-engineering, was able to transfer it to coaxial cable and make it faster. (The Aloha network had a bandwidth of 4800bps; Metcalfe got PARC up to 2.94Mbps.) To improve efficiency, he changed the model so that it did not use replies as a way of detecting collisions. Instead, Metcalfe looked at the voltage on the cable: when the voltage jumped by a predetermined offset, a collision had occurred. This voltage jump was easily detected, and the sending terminal could then retry after a short delay, known as the backoff time. Ethernet was born!
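Metcalfe's retry-after-a-short-delay idea grew into the truncated binary exponential backoff later standardised in IEEE 802.3. A sketch of that standard rule, assuming the 10Mbps slot time of 512 bit times (51.2 microseconds); the function name is illustrative:

```python
import random

SLOT_TIME_US = 51.2  # one slot = 512 bit times at 10 Mbps

def backoff_delay(collisions):
    """Truncated binary exponential backoff: after the n-th
    successive collision, wait a random whole number of slot
    times chosen uniformly from [0, 2^min(n, 10) - 1]."""
    k = min(collisions, 10)           # exponent is capped at 10
    slots = random.randrange(2 ** k)  # uniform choice of slot count
    return slots * SLOT_TIME_US       # delay in microseconds
```

Randomising the delay is what lets two colliding stations avoid colliding again; doubling the range on each retry spreads the load when the cable is busy.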

It is at this point that most arguments about the determinism of Ethernet start and finish. The system described above is obviously not deterministic. You could almost never know how long a message would take, because you had no way of knowing what other traffic was on the network. However, Ethernet development has not stood still since 1972; rather, it has increased in pace in recent years.

The speed of Ethernet

One of the main advantages of Ethernet over almost every other network type is its speed. A common phrase in the networking industry at the moment is "fat pipes". This refers to the bandwidth of the connection between two devices. When Microsoft introduced Windows for Workgroups 3.11 in 1993, the coax cable and Network Interface Cards (or NICs) supplied in the box ran at 10Mbps. These days, most office networks will run at 100Mbps, or Fast Ethernet. However, the real speed (or the really fat pipes) is between Ethernet switches. These run at 1Gbps. Later this year the IEEE will announce standards for 10Gbps Ethernet. After that, who knows? However, the main point is that Ethernet is a lot faster than any other network. Ask your PLC manufacturer how fast their preferred network is, but be prepared for a low number. What this means for determinism is that, as everything is going so quickly, any time delay caused by waiting for another device to finish is almost negligible. We haven't finished yet, though!
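To see why the waiting time becomes almost negligible, it helps to work out how long a single frame actually occupies the wire at each speed. A quick Python sketch (the function name is illustrative; preamble and inter-frame gap are ignored):

```python
def frame_time_us(frame_bytes, rate_mbps):
    """Time on the wire for one frame, in microseconds:
    bits to send divided by bits per microsecond."""
    return frame_bytes * 8 / rate_mbps

# A maximum-length 1518-byte frame occupies the wire for roughly:
#   1214 us at 10 Mbps, 121 us at 100 Mbps, 12 us at 1 Gbps.
```

So even the worst case - waiting behind a full-sized frame - costs tens of microseconds on a modern link, not the unbounded delays of the shared-cable days.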

Figure 1, Ethernet has not slowed down

Improving on bus topologies

Ethernet at PARC ran on very thick, yellow, coax cable and used a bus topology. That is, each device was connected to a long cable run. There were rules about how often devices could be connected to the cable and how long the total cable run could be. With Thick Ethernet (10BASE5) a device could be connected every 2.5m, with a maximum cable run of 500m. For most users this was limiting; better ways had to be found.

The first of these was thin coax cable. Thin Ethernet or Cheapernet (10BASE2) was invented in 1982. The minimum distance between nodes was now down to 0.5m but the total distance was reduced to 185m. Speed was increased to 10Mbps, but if a user accidentally (or deliberately) disconnected or severed the cable the network came to a halt.

The next breakthrough was the use of concentrators or hubs to form a more structured approach called a star topology. StarLAN was invented in 1984, but to achieve the required EMI regulations the speed was throttled back to 1Mbps. As most people saw this as a step backwards, StarLAN was never a big success. StarLAN, however, set the stage for Ethernet as we know it today.

In the Autumn of 1990, the IEEE announced a new standard. IEEE 802.3i defined a star topology network that ran at 10Mbps over category 5 unshielded twisted pair. This was 10BASE-T. All the advantages of using a star configuration were retained along with the speed of Thin Ethernet. No longer would one connection disrupt the entire network. A reliable network had become a reality.

Switches - half, full and dual duplex

The development of a star topology opened the door for better traffic management. Until then, the network would still only allow one device at a time to talk. With the introduction of switches in 1990, this changed. Switches have an architecture which allows multiple simultaneous transmission paths - a bit like a telephone exchange. This meant that devices were no longer sharing bandwidth, so throughput was improved significantly. This was done by inspecting the header of each frame for its destination address. The switch would then forward that frame to the port on the switch that the destination device was connected to.

Figure 2, With full duplex switching there is no network contention

How did the switch find this information? Well, it just had to look at the source address of each frame, and from that it could construct a table relating ports to devices, called the Learned Address Table. This differed significantly from hubs, which just sent the frame out of every port. By building memory buffers into the switch, any port that was in use at the time a frame came in could have the incoming frame stored until that port was free. From this point it's a small step from the original half duplex of early Ethernet to the full duplex of today.
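The learn-and-forward behaviour described above can be sketched in a few lines of Python. The class and method names here are illustrative, not any real switch's API:

```python
class LearningSwitch:
    """Sketch of a switch's Learned Address Table: record which
    port each source MAC arrived on, then forward by looking up
    the destination MAC in that table."""

    def __init__(self, num_ports):
        self.table = {}              # MAC address -> port number
        self.num_ports = num_ports

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port        # learn the sender's port
        if dst_mac in self.table:
            return [self.table[dst_mac]]     # forward to the known port
        # unknown destination: flood all other ports, as a hub would
        return [p for p in range(self.num_ports) if p != in_port]
```

Notice that the very first frame from any device is enough to teach the switch where that device lives; after that, traffic to it goes out of one port only.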

Suddenly, instead of just being able to send or receive, a device could do both simultaneously. Since then, improvements in digital signal processing and reductions in the cost of DSPs have meant that we can now do full duplex communication over just one pair of copper wires. At the moment, this is only done on Gigabit Ethernet, and only on copper. The standards for full duplex and flow control were finally set in 1997, with standard 802.3x.

Accident prevention

There are now improved ways of maintaining the availability of the network too. The first of these is called Spanning Tree (802.1D). This allows the connection of Ethernet switches into a tree structure. Any duplicate paths are then deactivated, and only reactivated when the active one fails. The only problem is the time it takes to reconfigure - approximately 30 seconds. A new version, Rapid Spanning Tree (802.1w), is due out soon and is expected to get this time down to 5 seconds.

From an industrial point of view even this is far too slow. Fortunately two other methods exist. The first is called Link Redundancy and involves double wiring each connection - a lot of work. However, when one path fails, the other is activated almost immediately. This has not been standardised and remains manufacturer specific.

The other method is again proprietary, and belongs to Hirschmann. We use a ring structure and have one of the switches or hubs monitor the integrity of the ring by sending short test frames the entire way around it. One of the links connected to this Redundancy Monitor switch/hub is deactivated for normal traffic, but still passes these special integrity-check frames; it is only activated for data when the ring is broken elsewhere. By this method, we can get reconfiguration times as low as 50ms, which is far more acceptable in an industrial environment. Hirschmann pioneered this at the University of Stuttgart in 1983.


Figure 3, Old and New Frames, showing the insertion of the new TAG field (original Ethernet frame: maximum length 1518 bytes; new Ethernet frame: maximum length 1522 bytes)

Quality of Service and Virtual Local Area Networks

The most recent change to Ethernet is due to the demand for transmitting video and audio signals. Video over IP and Voice over IP both require a high quality of service - signals must reach their destination within a set time. Of the two, voice (or audio) is far more demanding. (A few still frames in a movie won't harm your enjoyment so much, but a jittering soundtrack soon will!) Cisco, the office switch and router manufacturer, introduced the concept of 'tagging' to Ethernet frames. This has resulted in the maximum length of an Ethernet frame growing from 1518 bytes to 1522 bytes, not including the preamble and start of frame delimiter (802.1Q and 802.1p, 1998). These extra 4 bytes hold the priority of the frame, as well as other information such as the VLAN identity.

The first two of these 4 bytes identify the frame as tagged; of the remaining 16 bits, 3 are reserved for User Priority, one for the Canonical Format Indicator, and 12 for the VLAN (or Virtual Local Area Network) ID. As the user priority is 3 bits, it has values ranging from 0 to 7. Priority 7 is reserved for network information; 6 is given to audio or voice. Video gets a 5, whilst something like email gets a 0. This means that data waiting in a switch can queue jump!
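The tag layout above can be decoded in a few lines of Python. This is a sketch: `parse_vlan_tag` is an illustrative name, and the function assumes the 4 tag bytes (beginning with the 0x8100 tag type) have already been cut out of the frame:

```python
def parse_vlan_tag(tag):
    """Decode a 4-byte 802.1Q tag: a 2-byte 0x8100 tag type,
    then 3 bits of user priority, 1 CFI bit, and a 12-bit VLAN ID."""
    assert len(tag) == 4 and tag[0:2] == b"\x81\x00", "not an 802.1Q tag"
    tci = (tag[2] << 8) | tag[3]   # 16-bit Tag Control Information
    priority = tci >> 13           # user priority, 0..7
    vlan_id = tci & 0x0FFF         # VLAN ID, 0..4095
    return priority, vlan_id
```

A switch only needs these few shifts and masks per frame to decide which queue the frame joins and which VLAN it belongs to, which is why tagging adds essentially no forwarding delay.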


VLANs are related to the security of the network. By tagging a device as belonging to a particular VLAN, traffic from other VLANs will not be sent to it. This has enormous consequences for industrial applications, where broadcast traffic from an office network can be stopped from reaching the factory network. It allows one physical network to be split into two virtual networks. The next step is to allow 'leaky' VLANs where a device can belong to two or more VLANs. In other words, the factory manager can pick up his email (from the office network) and monitor his factory network from his desktop PC. At present, there are not many NICs or applications taking advantage of this - but there soon will be!

Conclusion

The phenomenal increase in bandwidth, new redundant topologies, intelligent traffic control, formal Quality of Service and other developments have let Ethernet prove itself to be evolutionary as well as revolutionary. In particular, with the move from a shared network to a switched one and the introduction of true QoS, the time has come to consider Ethernet deterministic. The final question to ask, as Ethernet migrates down to the factory floor, through the information, automation and control layers, is: "How many different networks do you need?"

Ethernet Addressing
Ethernet uses what is known as the MAC (Media Access Control) address to deliver frames. Each device has a unique MAC address which is set by the manufacturer. The MAC address is composed of 6 bytes and is usually quoted in hexadecimal, e.g. 00:80:63:a3:f4:2d. The first three bytes identify the manufacturer (Hirschmann equipment always starts 00:80:63) and are assigned by the IEEE; the last three are set by the manufacturer. An end device is usually able to relate an IP address to a MAC address by using its ARP (Address Resolution Protocol) table. You can view the one on your PC by typing arp -a at the DOS prompt. A frame will always have in its header the destination MAC address and the source MAC address, although the destination MAC address may be a broadcast address (i.e. to be received by everybody).
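Splitting a MAC address into its IEEE-assigned and manufacturer-assigned halves is a one-liner; a small Python sketch (the function name is illustrative):

```python
def split_mac(mac):
    """Split a colon-separated MAC address into its IEEE-assigned
    OUI (first three bytes, identifying the manufacturer) and the
    manufacturer-assigned device part (last three bytes)."""
    octets = mac.lower().split(":")
    assert len(octets) == 6, "expected six colon-separated bytes"
    return ":".join(octets[:3]), ":".join(octets[3:])
```

Applied to the example above, split_mac("00:80:63:a3:f4:2d") yields the Hirschmann OUI 00:80:63 and the device part a3:f4:2d.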

Paul Taylor is Technical Services Manager for Hirschmann Electronics.


