Industrial Ethernet Book Issue 103 / 15

Industrial web-based computing: is data intelligence finally here?

Fog computing and cloud computing are no longer strictly partitioned in their processing, which leaves more power in the control center at minimal cost. Another key advantage is that very powerful computers are no longer needed, nor are high levels of human intervention or even monitoring.

DATA, INCLUDING TEXT, PICTURES, and the raw bits and bytes that tell humans things, has been around for a long time: from early grunts and cave drawings to the printed word.

Making sense of it all and putting data to use, on the other hand, takes understanding and processing. Processing needs a brain or a computer to give us information. Why? The amount of data can be huge, and reducing it to a manageable size may lose points of interest, so encapsulating the data in a wider sense, with an understanding of where it comes from and what it is about, helps enormously in forging solutions.

The UK Meteorological Office takes 10 million weather observations every day, yet a weather forecast rarely takes more than five minutes to present. How is this done? By targeting the audience and encapsulating the raw data in information packets: packets which have been processed, analysed and organised to provide succinct details.

An interpretation of this is much like a single phrase being able to suggest many ideas. The term "congested road" conjures the picture of a traffic jam, or perhaps simply a road containing so many cars that it is difficult to pass; might it also suggest frustration and anger? If we extend this to "congested urban road", that one extra piece of data puts the visualisation into context and makes the intended picture clear.

This leads to the proposition that digital data can provide intelligence. In software, the idea of concatenating bits into binary arrays, where a 16-bit word contains digital flags indicating many things, can be carried further by defining an object structure of, say, "Door Operation": a door being opened or closed, and a time for how long it has been in that state. Add to the structure who opened the door, and an extrapolation may be made as to why it was opened. In such situations data provides intelligence, rather than being intelligent in its own right. What if we pass to a centralised core not real-time data but extrapolated data on the subject?
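As a minimal sketch of the idea, the flags packed into a 16-bit word can be decoded into a structured "Door Operation" record. The bit layout, field names and operator encoding here are hypothetical, chosen only to illustrate the technique.

```python
from dataclasses import dataclass

# Hypothetical bit layout for a 16-bit door status word:
#   bit 0      door open flag
#   bits 1-7   operator ID (7 bits)
#   bits 8-15  seconds since last state change (8 bits)
DOOR_OPEN = 0x0001

@dataclass
class DoorOperation:
    is_open: bool
    operator_id: int
    seconds_in_state: int

def decode_door_word(word: int) -> DoorOperation:
    """Unpack a raw 16-bit status word into a structured record."""
    return DoorOperation(
        is_open=bool(word & DOOR_OPEN),
        operator_id=(word >> 1) & 0x7F,
        seconds_in_state=(word >> 8) & 0xFF,
    )

# Example: door open, operator 5, in that state for 42 seconds
word = (42 << 8) | (5 << 1) | 1
op = decode_door_word(word)
```

The structured record, rather than the raw word, is what would be passed onward, which is precisely the shift from data to information discussed above.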

First, let us provide some definitions with which we can more easily describe the segmented parts of the subject under discussion.

The cloud

An ethereal place of residence on networks circling the globe and, today, in near orbit, where servers and storage can be found and used for data derived in a more physical world. Provided and patrolled by an invisible though powerful entity, the community at large, it is largely indefinable though effectively easy to use.

Cloud computing

Having the processing carried out in the cloud by powerful multi-user computers alleviates the cost of having a powerful computer in the local environment; machine cost is optimised and, to some extent, so are development costs, thanks to the application frameworks, or Applications as a Service, more commonly referred to today as Software as a Service (SaaS), provided by cloud computers.

Regular backups and validated recovery mechanisms are benefits of this method of computing, but there are also ongoing costs to take into account, such as bandwidth usage and the amount of data storage.


In the Operational Technology (OT) world, the fog is so termed because, like the climatic condition, it is close to the ground or, more precisely, close to the real-world interface layer at the edge of the industrial topology used to create effective plant networks.

Fog computing

As the name implies, this is processing performed on the data, but with the computing engine close to the edge, at or near the point of data collection. Unlike cloud computing, this is very much under the control of the application developer, who is usually left not only to implement it in the system but also to ensure that ancillary services such as connectivity and backups are maintained.


Akin to language, we now look at how data can be visualised more succinctly. The very definition of grammar is placing meaningful words in a correct contextual sense, whereby accepted words of the language are used within an accepted boundary: the context. This also implicitly allows the same word to carry several different meanings, and it raises the question of how we can use similar techniques with processed data, both to aid understanding and to ease the processing necessary to achieve optimum usage and efficient deployment in the network environment.

Shrink boundary & extend the area

It would seem incongruous to attempt to take a circle and shrink its circumference while increasing the area it covers. Here we move between the physical world and the metaphysical or, perhaps more understandably, the modelled world.

Take as an example a CNC machine fitted with detectors monitoring such data points as the size of the material to be worked, the speed of the cutters, the temperature of the cutters, the temperature of the material being worked, the position of the cutting head and the power used in moving the head. Those data points are our circle, the world within which the CNC operates; but what else can we achieve from them?

Obviously, this close to the edge, the processing is required to act in real time, but only small applications are needed, with the objectives handled in small chunks of processing. This can be achieved with small RISC computers such as Moxa's IA260 or the UC-8100 series which, in addition, can communicate wirelessly as required.

The fog computers themselves can receive real-world data in standardised formats such as digital I/O or serial packets but, having network connectivity, can also receive data in digital formats such as Modbus/fieldbus from serial-to-Ethernet converters and ioLogik I/O modules, each of which also has variants for native wired Ethernet or wireless connectivity.

Can we optimise anything here to lower the total cost of manufacture? Take the cutter temperature: if we statistically analyse the profile of the material being worked and the temperature of the cutting tool, we can make an assumption about the temperature of the material. From there we can control the speed of the machine and the cut depth, optimise the wear of the cutting tool for the process being performed, and ensure the material's tempering is not affected by the cutting process itself. With this method we achieve a level of condition-based monitoring of the tool as well as the material.
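A minimal fog-side sketch of that inference might look as follows. The linear coupling model, coefficients and function names are illustrative assumptions, not a real machine interface; a production system would fit its model from measured data.

```python
# Illustrative fog-side sketch: infer material temperature from the
# cutter temperature, then throttle the feed rate as the material
# approaches a thermal limit. All coefficients are assumptions.

def estimate_material_temp(cutter_temp_c: float,
                           ambient_temp_c: float = 20.0,
                           coupling: float = 0.6) -> float:
    """Assume the material sits between ambient and cutter temperature."""
    return ambient_temp_c + coupling * (cutter_temp_c - ambient_temp_c)

def feed_rate(material_temp_c: float,
              nominal_mm_per_min: float = 300.0,
              limit_c: float = 120.0) -> float:
    """Reduce the feed rate linearly as the material nears its limit."""
    headroom = max(0.0, 1.0 - material_temp_c / limit_c)
    return nominal_mm_per_min * headroom

temp = estimate_material_temp(cutter_temp_c=150.0)  # about 98 degC
rate = feed_rate(temp)                              # slowed below nominal
```

Only the derived values, estimated material temperature and chosen feed rate, would need to travel upward, rather than every raw temperature sample.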

In this example we have overcome the seemingly impossible paradox of both removing the need for more sensors (shrinking the boundary) and monitoring the whole process more fully (extending the area). To do this we need some processing power to apply the necessary algorithms, but the question of the moment is: does this apply intelligence to the data as the data points are transformed from explicit data to implicit information?

Clarify the picture

Taking a system as a whole entity, we come to the question of how to define the details of the processing needed and make efficient use of the system's parts. For the simplest of systems the partitioning is straightforward, whether we exploit every attribute's abilities to the full or pay them no heed at all. For more complex systems, however, we must understand in full how each system attribute could be utilised most efficiently. So what are these attributes? A typical list could include the following.

At the fog or edge: the real-world monitoring and control data that will be passing through the transport. For this, the cyclic parameters must be determined; the cycle time must obviously be sufficient for system accuracy, but also for automatic failure detection and whatever is needed to overcome such failures.

Any processing applied in the fog has to be sufficient to achieve the desired results above: file storage and retrieval times, computer bus speeds and the ability to switch automatically into burst modes, processor speed alongside the number of pipelines, and the kernel's handling of apparent parallelism.

Latency and jitter introduced into the system by all transport parts, brought about by the transport protocols in use.

Towards the cloud: the sufficiency of buffering and local storage to ride out connectivity outages of short duration. At the cloud: the level of information to be utilised to achieve the desired needs of the first point.
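The buffering point above amounts to store-and-forward: queue readings locally while the uplink is down, then flush when it returns. A minimal sketch, assuming a bounded buffer that sacrifices the oldest samples in a long outage (the class and its capacity are illustrative, not a product API):

```python
from collections import deque

# Minimal store-and-forward sketch for a fog node: readings are queued
# locally while the uplink is down and drained once it returns. The
# buffer is bounded, so a long outage drops the oldest samples first.

class UplinkBuffer:
    def __init__(self, capacity: int = 1000):
        self.queue = deque(maxlen=capacity)  # oldest entries discarded when full

    def record(self, reading):
        self.queue.append(reading)

    def flush(self, send):
        """Drain the buffer through the supplied send callable."""
        sent = 0
        while self.queue:
            send(self.queue.popleft())
            sent += 1
        return sent

buf = UplinkBuffer(capacity=3)
for sample in (1, 2, 3, 4):   # one more than capacity: sample 1 is dropped
    buf.record(sample)
delivered = []
buf.flush(delivered.append)   # delivered == [2, 3, 4]
```

Choosing the capacity is exactly the "sufficiency of buffering" question: it must cover the longest outage the cycle-time and accuracy requirements can tolerate.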

We can now see the possibility of a distributed management process overseeing the system, as we would want much of the data transformation to be carried out in the fog. Information rather than raw data is passed up towards the cloud, lessening the required throughput and leaving less to be completed in the cloud which, although possibly hosting powerful computers, may be shared by many users in a virtual machine environment (to minimise costs) and is more than likely not under our tight control.

Let us take another example, in this case one most people have encountered to some degree: aircraft operations. Today this is a good example of distributed control and monitoring, as many countries have migrated, or are migrating, their airspace management to a centralised system. Two aspects are important here: weather and aircraft position, for aircraft only make a profit when they are airborne. As an aside, we often think of the hierarchical view of command and control as the real world at the bottom and the overview at the top. In this case, the aircraft are physically above the control center, so in essence the physical and the metaphysical have been reversed.

Weather is reviewed on a long cycle; it is the weather at altitude that matters, not that at ground level, so satellite data is taken in as well as that from weather monitoring stations. The aircraft themselves can monitor their own weather (by radar and pressure sensors) and pass this more local and accurate view to the control centers. The centers can then hold weather information not only as a 2D object but as a 3D model.

Ground-based radars throughout the country monitor the airspace, and their data can now be joined to provide 3D positional information, which can be used to check the position fed from sensors on the aircraft itself as well as from other external aids. By distributing the monitoring between the aircraft themselves, airfields and localised positions, the information provided to central air traffic controllers can be made very accurate and 'clean'.

Notice again, as in the previous examples, that the fog is providing information forward, not just raw data. Information, a joining and contextualisation of several data points, is being used in preference to raw data.

Using a ring topology can automatically correct standard errors and faults in the transport layers.

But what can go wrong?

The mechanism of providing information has one big disadvantage: the closer to an overview picture we move the information, the more diluted the data becomes. Here we are looking at the distinction, already described, between data and information.


Failure in a good way

Driving a motor continuously at the correct speed needs accuracy as, in the real world, many controllable and uncontrollable factors can affect the speed, not least power surges, ambient temperature and moisture changes, friction build-up and so on. However, monitoring the speed over time can indicate the serviceability status of the motor: answering the question of the current wear level will show when the motor may fail.

We have already discussed that at the edge, or in the fog layer, the processing needs call for accurate, fast computers, and we introduced RISC computers; but Moxa also has Intel processor-based computers that come in several form factors, some variants capable of operating in hazardous environments. The V2000A series is targeted at rail transport, and the DA series is made specifically for the energy generation markets.

Alongside these are panel PC models as well as straight processing platforms for the marine market. Such points can then be used to help define the function partitioning: whether to use the fog to process the data, or the cloud. The time needed to get information to the cloud and decide whether the speed is correct may generally be satisfactory, but is it efficient enough actually to implement speed control? The fog would be more efficient for applying the speed control, with the cloud monitoring the serviceability state. There is a further aspect to this: maintaining control even when errors or failures are affecting the system. The reaction of the fog is far faster than that of the cloud, so even with a catastrophic failure the motor can be stopped, possibly reducing secondary damage and hence allowing maintenance to be more cost-effective. On the other hand, maintenance can also be made more cost-effective if the cloud, monitoring the motor's information, can calculate when it is expected to fail and so schedule maintenance at the most appropriate time.
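A hedged sketch of that cloud-side serviceability calculation: fit a straight line to periodic speed-error samples and project when the trend will cross a failure threshold, so maintenance can be scheduled ahead of it. The threshold and sample data are illustrative assumptions, not real motor figures.

```python
# Sketch of trend-based failure projection from periodic speed-error
# samples. Thresholds and the sample series are illustrative only.

def fit_slope(xs, ys):
    """Least-squares slope and intercept for a simple linear trend."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def hours_until(limit, xs, ys):
    """Project the operating hour at which the trend reaches the limit."""
    slope, intercept = fit_slope(xs, ys)
    if slope <= 0:
        return None  # no degradation trend detected
    return (limit - intercept) / slope

hours = [0, 100, 200, 300]           # operating hours at each sample
speed_error = [0.5, 1.0, 1.5, 2.0]   # rpm deviation growing with wear
eta = hours_until(5.0, hours, speed_error)  # about 900 hours
```

The fog stops the motor in milliseconds when something breaks; this slower projection is what the cloud contributes, since it needs only the summarised error history, not the raw samples.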

When designing the system, we can also implement mechanisms such as ring topologies and use standard protocols to automatically correct standard errors and faults in the transport layers. Defined and standardised to make the system rugged, such methods can also allow different devices to interoperate and ensure that the rugged platform, and with it data and information integrity, is maintained.

All layers of the communication and machine control network can use the same basic network technologies.

Failure in a bad way

As is always good engineering practice, the designer will cater for most, if not every, failure condition that could be met within the system. Devices can fail and wiring can fail, but such events can be catered for within devices and their reporting facilities. Today, however, the thing that should be at the forefront of everybody's mind is cyber-security.

A forced, intentional failure could be caused anywhere in the system if it is open to abuse, and the designer should cater for this in the design. The level of security applied will of course increase the TCO but, offset against such a potentially harmful failure, which could go undetected for considerable time, it is better to implement safety and security features than not.

Effective monitoring systems on wide area networks (WAN) can be created by integrating a combination of field devices, wireless data logging and monitoring software.

The story begins

Everyone has recently started discussing cloud and fog computing, but in reality they have, relatively speaking, always been there. It is only now that the terms have been given meaning in system function partitioning that clarity comes to the uninitiated and helps target the thought processes behind system architecture design decisions.

System design can be seen to be based on a simple derivation: the data obtained from the real world, the information that data forms, and the use to which the information is put. Detailing the transformation to information at the edge is where the fog lies. The transformation actually clears the fog from the system, easing sight of the overall picture or control needed and allowing optimum use of resources: lessening the bandwidth needs of information moving towards the cloud, as well as aiding peer-to-peer use of the derived information at the edge. From such operations we have now formed several aspects of the system layer definition, and the time and effort saved in doing so shortens time to market.

Take, for example, a water tank fed by streams and used to irrigate farmland. It is desired to keep the tank at a specific level to ensure good pressure to the irrigation system. The function partitioning keeps items that can be controlled locally in the fog, while items that cannot be controlled well locally are pushed to the cloud: water purity and temperature are fed to the cloud, but tank level is monitored and controlled in the fog.

It would be pointless to have an on/off control for letting water into the irrigation system; far better to have a variable opening that maintains the pressure but controls the efflux over time as the influx changes with the level of water in the streams. In such a system, the amount of data passed to the cloud is far less than if all the pressure, level and efflux data were passed upwards; cloud processing and storage are far less, and so are cloud running costs.
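The tank example can be sketched as a proportional valve computed in the fog, with only a compact summary forwarded to the cloud. The setpoint, gain and summary fields are illustrative assumptions, not a specification of any real irrigation controller.

```python
# Fog-side sketch of the tank example: the valve opening is computed
# locally from the level error, while only a periodic summary (not
# every raw reading) is forwarded to the cloud. Values are illustrative.

def valve_opening(level_m: float, setpoint_m: float = 4.0,
                  gain: float = 0.5) -> float:
    """Proportional control: open the inlet valve more as the level drops."""
    error = setpoint_m - level_m
    return min(1.0, max(0.0, gain * error))  # clamp to 0..1 (fully open)

def cloud_summary(levels):
    """What travels upward: a compact summary, not the raw samples."""
    return {"min": min(levels), "max": max(levels),
            "mean": sum(levels) / len(levels)}

readings = [3.8, 3.6, 4.1, 3.9]          # recent level samples, metres
opening = valve_opening(readings[-1])    # small opening: near setpoint
summary = cloud_summary(readings)        # three numbers replace the series
```

Three summary numbers replace the whole sample series on the uplink, which is where the reduction in cloud processing, storage and running costs comes from.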

Microsoft Azure IoT and OPC UA can work together to provide effective links between the private and public cloud.

Can data be intelligent?

We started this journey asking whether data can be intelligent. In most ways the answer has to be no, as to exhibit intelligence processing has to be involved. Intelligence, in all guises understood today, would seem to require an understanding of end-to-end needs, but Artificial Intelligence is based on many conjoined disciplines, not least of which is system operation utilising operations that behave akin to a neuron: a data point becomes the data itself, some self-imposed limits, feedback of the amortised data point, and its output. Effectively, we now appear to be on the cusp of data becoming intelligent in its own right with little processing.
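The neuron analogy above can be sketched in a few lines: a data point carries its own weight, a feedback term from its previous output, and a squashing limit. The weights are arbitrary illustrative values, not a trained model.

```python
import math

# Illustrative sketch of a data point behaving like a single neuron:
# each raw sample is weighted, combined with feedback from the point's
# own last output, and squashed through a sigmoid. Weights are arbitrary.

class NeuronPoint:
    def __init__(self, weight=0.8, feedback=0.2, bias=-0.5):
        self.weight, self.feedback, self.bias = weight, feedback, bias
        self.output = 0.0

    def update(self, sample: float) -> float:
        activation = (self.weight * sample
                      + self.feedback * self.output
                      + self.bias)
        self.output = 1.0 / (1.0 + math.exp(-activation))  # sigmoid limit
        return self.output

point = NeuronPoint()
# Sustained input drives the output upward through the feedback term.
outputs = [point.update(x) for x in (0.0, 1.0, 1.0)]
```

The feedback term is what makes the point more than a pass-through: its history shapes its next output, which is the sense in which the data point itself begins to "behave".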

Add to this a data point becoming an information point, where information is passed through a similar 'neuron'; as discussed, the raw data is diluted, but the information now aids a better overview and gives wiser system usage and control. One of the better points of all this metaphysical understanding is that, with the power of quite small devices today, fog computing and cloud computing are no longer strictly partitioned in their processing. Rather, the control center is left with newly acquired power at quite minimal cost. No longer are very powerful computers needed, nor is there a need for high levels of human intervention or even monitoring.

Look at the vehicle industry today. Cars order their own spare parts to be replaced at the next managed servicing period, as well as driving themselves. Intelligent data? Yes. Possibly in a premature state today, but it is definitely present.

Alan Harris, Field Application Engineer, Moxa.




© 2010-2018 Published by IEB Media GbR · Last Update: 22.10.2018