
Reference architecture defines fog computing communications

The OpenFog Reference Architecture release 1.0 defines how computing, storage, control and networking are distributed between the cloud and the things in IoT, 5G and artificial intelligence deployments. It is a critical architecture for today's connected world, as it enables low latency and reliable operation and removes the requirement for persistent cloud connectivity.

FOG COMPUTING IS THE MISSING LINK in the cloud-to-thing continuum. It is a critical architecture for today's connected world: it enables low latency and reliable operation, and it removes the requirement for persistent cloud connectivity, addressing emerging use cases in Internet of Things (IoT), 5G, Artificial Intelligence (AI), Virtual Reality and Tactile Internet applications.


The fog computing reference architecture solves performance challenges in advanced digital deployments in IoT, 5G and artificial intelligence, including the control of performance, latency and network efficiency. Cloud and fog computing sit on a mutually beneficial, interdependent continuum.

Fog computing overview

Fog architectures selectively move compute, storage, communication, control, and decision making closer to the network edge, where data is being generated, in order to overcome the limitations of current infrastructure and enable mission-critical, data-dense use cases.

The OpenFog Consortium's definition of fog computing is "a horizontal, system-level architecture that distributes computing, storage, control and networking functions closer to the users along a cloud-to-thing continuum."

Fog computing is an extension of the traditional cloud-based computing model, in which implementations of the architecture can reside in multiple layers of a network's topology. These fog extensions retain all the benefits of cloud computing, such as containerization, virtualization, orchestration, manageability, and efficiency.

The fog computing model moves computation from the cloud closer to the edge, and potentially right up to the IoT sensors and actuators. The computational, networking, storage and acceleration elements of this new model are known as fog nodes. They comprise a fluid system of connectivity and are not completely fixed to the physical edge.
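To make the fog node concept more concrete, the brief sketch below models a node's compute, storage, networking and acceleration resources and its place in a cloud-to-thing hierarchy. It is a minimal illustrative sketch only; the class and field names are hypothetical and are not defined by the OpenFog RA.

```python
# Illustrative sketch (not from the OpenFog RA): a fog node's resources and
# its position in a cloud-to-thing hierarchy. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FogNode:
    name: str
    tier: int                      # 0 = physical edge; higher tiers sit closer to the cloud
    compute_cores: int             # computational element
    storage_gb: int                # storage element
    network_mbps: int              # networking element
    accelerators: List[str] = field(default_factory=list)  # e.g. GPUs, FPGAs
    parent: Optional["FogNode"] = None                      # next tier up, ending at the cloud

# A small hierarchy: an edge node attached to sensors, reporting to a regional node.
regional = FogNode("regional-gateway", tier=1, compute_cores=32,
                   storage_gb=2000, network_mbps=10000)
edge = FogNode("pipeline-segment-7", tier=0, compute_cores=4, storage_gb=128,
               network_mbps=100, accelerators=["vpu"], parent=regional)
```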

OpenFog reference architecture

The OpenFog Consortium was founded on the principle that an open fog computing architecture is necessary in today's increasingly connected world. Through an independently-run open membership ecosystem of industry, end users and universities, we can apply a broad coalition of knowledge to these technical and market challenges. We believe that proprietary or single vendor fog solutions can limit supplier diversity and ecosystems, resulting in a detrimental impact on market adoption, system cost, quality and innovation.

The OpenFog Reference Architecture (OpenFog RA) is a medium- to high-level view of system architectures for fog nodes and networks. It is the result of a broad collaborative effort of its independently-run open membership ecosystem of industry, technology and university/research leaders. It was created to help business leaders, software developers, silicon architects and system designers create and maintain the hardware, software and system elements necessary for fog computing. It enables fog-cloud and fog-fog interfaces.

The OpenFog Reference Architecture is a 162-page document that features an abstract architectural description, providing an in-depth look at the full OpenFog RA.

How fog computing works

Fog computing solves performance challenges in advanced digital deployments in IoT, 5G and artificial intelligence. These include the control of performance, latency and network efficiency. It's important to note that cloud and fog computing are on a mutually beneficial, interdependent continuum.

Fog does not replace the cloud; it works with the cloud to enable the requirements of selected use cases. Certain functions are naturally more advantageous to carry out in fog nodes, while others are better suited to the cloud. The traditional backend cloud will continue to remain an important part of computing systems as fog computing emerges.

To illustrate how fog computing works, consider an oil pipeline with pressure and flow sensors and control valves. One could transport its sensor readings to the cloud (e.g., over expensive satellite links), analyze the readings in cloud servers to detect abnormal conditions, and send commands back to adjust the position of the valves.

However, the bandwidth to transport the sensor and actuator data to and from the cloud could cost thousands of dollars per month; those connections could be susceptible to hackers; it may take several hundred milliseconds to react to an abnormal sensor reading, during which time a major leak could spill a significant amount of oil; and if the connection to the cloud is down or the cloud is overloaded, control is lost.

In that same scenario, if a hierarchy of local fog nodes is placed near the pipeline, those nodes can connect to sensors and actuators over inexpensive local networking facilities. Fog nodes can add extra security controls, lessening the hacker threat. Fog nodes can react to abnormal conditions in milliseconds, quickly closing valves to greatly reduce the severity of spills.

This example illustrates how local control in the fog nodes produces a more robust control system. Moving most of the decision-making functions to the fog, and contacting the cloud only occasionally to report status or receive commands, creates a superior control system.
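As an informal illustration of this division of labour, the sketch below shows how a fog-node control loop for the pipeline might act locally on abnormal readings while reporting to the cloud only periodically. It is not taken from the OpenFog RA; the threshold, reporting period and the sensor, valve and cloud functions are hypothetical placeholders.

```python
# Illustrative fog-node control loop for the pipeline example above.
# read_pressure(), close_valve() and report_to_cloud() stand in for the
# plant's real sensor/actuator and cloud APIs (assumptions, not real APIs).
import time

PRESSURE_LIMIT_BAR = 80.0      # assumed abnormal-condition threshold
CLOUD_REPORT_PERIOD_S = 300    # status goes to the cloud only every few minutes

def control_loop(read_pressure, close_valve, report_to_cloud):
    last_report = 0.0
    while True:
        pressure = read_pressure()                 # local sensor read, millisecond latency
        if pressure > PRESSURE_LIMIT_BAR:
            close_valve()                          # act locally, no cloud round trip
        now = time.time()
        if now - last_report >= CLOUD_REPORT_PERIOD_S:
            report_to_cloud({"pressure": pressure, "ts": now})   # occasional status only
            last_report = now
        time.sleep(0.01)                           # ~10 ms control cycle
```

The key design point is that the decision to close a valve never waits on a cloud round trip; the cloud sees only periodic status reports.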

The OpenFog RA describes a generic fog platform that is designed to be applicable to any vertical market or application. This architecture is applicable across many different markets including, but not limited to, transportation, agriculture, smart cities, smart buildings, healthcare, hospitality, energy and financial services. It provides business value for IoT applications that require real-time decision making, low latency and improved security, and that are network-constrained.

Pillars of OpenFog architecture

The OpenFog RA is driven by a set of core principles called pillars. These pillars, depicted in the figure above, form the principles, approach and intention that guided the definition of the reference architecture. They represent the key attributes that a system needs in order to embody the OpenFog definition of a horizontal, system-level architecture that provides the distribution of computing, storage, control, and networking functions closer to the data source (users, things, etc.) along the cloud-to-thing continuum.

Architecture description

The OpenFog RA description is a composite of perspectives and multiple stakeholder views used to satisfy a given fog computing deployment or scenario. Before going into the lower-level details of each view, it is important to first look at the composite architecture description, depicted in the figure above.

The abstract architecture includes perspectives, shown in grey vertical bars on the sides of the architectural description. The perspectives include:

Performance: Low latency is one of the driving reasons to adopt fog architectures. There are multiple requirements and design considerations across multiple stakeholders to ensure this is satisfied, including time-critical computing, time-sensitive networking, network time protocols, etc. It is a cross-cutting concern because it has system and deployment-scenario impacts.

Security: End-to-end security is critical to the success of all fog computing deployment scenarios. If the underlying silicon is secure but the upper-layer software has security issues (or vice versa), the solution is not secure. Data integrity is a special aspect of security for devices that currently lack adequate security. This includes intentional and unintentional corruption.

Manageability: Managing all aspects of fog deployments, including RAS (reliability, availability and serviceability), DevOps, etc., is critical across all layers of a fog computing hierarchy.

Data Analytics and Control: The ability for fog nodes to be autonomous requires localized data analytics coupled with control. The actuation/control needs to occur at the correct tier or location in the hierarchy as dictated by the given scenario. It is not always at the physical edge, but may be at a higher tier.

IT Business and Cross Fog Applications: Multi-vendor ecosystem applications need the ability to migrate and properly operate at any level of a fog deployment's hierarchy. Applications should be able to span all levels of a deployment to maximize their value.

There are three identified viewpoints in the Architecture description diagram: Software, System, and Node.

Software view: represented by the top three layers shown in the architecture description: Application Services, Application Support, and Node Management (IB) and Software Backplane.

System view: represented by the middle layers shown in the architecture description, from Hardware Virtualization down through the Hardware Platform Infrastructure.

Node view: represented by the bottom two layers, which include the Protocol Abstraction Layer and Sensors, Actuators, and Control.
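The mapping of layers to viewpoints can be summarised informally as follows. The layer names come from the description above, while the grouping into a simple dictionary is purely illustrative and not part of the OpenFog RA.

```python
# Rough sketch of how the layers named above map onto the three viewpoints.
# Layer names follow the article; the dictionary structure itself is illustrative.
OPENFOG_VIEWS = {
    "software": [
        "Application Services",
        "Application Support",
        "Node Management (IB) and Software Backplane",
    ],
    "system": [
        "Hardware Virtualization",
        # ... intermediate layers down to ...
        "Hardware Platform Infrastructure",
    ],
    "node": [
        "Protocol Abstraction Layer",
        "Sensors, Actuators, and Control",
    ],
}
```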



End-to-end deployment use case

The following example describes an end-to-end use case for airport visual security, with outcomes for the cloud, the edge and the fog. Airport visual security, commonly called surveillance, illustrates the complex, data-intensive demands of real-time information collection, sharing, analysis, and action.

First, let's look at the passenger's journey:

  • Leaves from home and drives to the airport
  • Parks in the long-term parking garage
  • Takes bags to airport security checkpoint
  • Bags are scanned and checked in
  • Checks in through security and proceeds to boarding gate
  • Upon arrival, retrieves bags
  • Proceeds to rental car agency; leaves airport

This travel scenario is without incident. But when one or more threats are introduced into this scenario, the visual security requirements become infinitely more complicated. For example:

  • The vehicle entering the airport is stolen
  • The passenger's name is on a no-fly list
  • The passenger leaves his luggage unattended someplace in the airport
  • The passenger's luggage doesn't arrive with the flight
  • The luggage is scanned and loaded on the plane, but it is not picked up by the correct passenger
  • An imposter steals or switches a boarding pass with another passenger and gets on someone else's flight
  • The passenger takes someone else's luggage at the arrival terminal

Catching these possible threats requires an extensive network of surveillance cameras across the outbound and inbound airports, involving several thousand cameras. Approximately one terabyte of data per camera per day must be transmitted to security personnel or forwarded to local machines for scanning and analysis.
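A rough back-of-the-envelope calculation, assuming 3,000 cameras as a stand-in for "several thousand", shows the scale of the resulting data stream:

```python
# Back-of-the-envelope arithmetic for the surveillance numbers above.
# The camera count is an assumption; ~1 TB per camera per day is from the article.
cameras = 3000
tb_per_camera_per_day = 1.0
total_tb_per_day = cameras * tb_per_camera_per_day            # ~3,000 TB (3 PB) per day
avg_gbit_per_s = total_tb_per_day * 8e12 / (24 * 3600) / 1e9  # ~278 Gbit/s sustained
print(f"{total_tb_per_day:.0f} TB/day, roughly {avg_gbit_per_s:.0f} Gbit/s sustained uplink")
```

Sustaining hundreds of gigabits per second to a remote cloud is exactly the kind of network constraint that motivates processing video in fog nodes close to the cameras.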

In addition, law enforcement will need data originating from multiple systems about the suspect passenger's trip, from the point of origination to arrival. Finally, all of the video and data must be integrated with a real-time threat assessment and remediation system.

Cloud and Edge Approaches. In an edge-to-cloud design, every camera (edge device) in the airport transmits its video directly to the cloud for processing, along with the other relevant data collected from the passenger's travel records. While there are advantages to both approaches, the disadvantages can leave the systems susceptible to incidents.


The diagram above illustrates an end-to-end use case for airport visual security with implications for cloud computing, the edge and the fog. Airport visual security, commonly called surveillance, creates complex, data-intensive demands for real-time information collection, sharing, analysis, and action.

Adherence to OpenFog RA

The OpenFog Consortium intends to partner with standards development organizations and provide detailed requirements to facilitate a deeper level of interoperability. This will take time, as establishing new standards is a lengthy process. Prior to finalization of these detailed standards, the Consortium is laying the groundwork for component level interoperability and certification. Testbeds will prove the validity of the OpenFog RA through adherence to the architectural principles.

Next steps

The OpenFog RA is the first step in creating industry standards for fog computing. It represents an industry commitment toward cooperative, open and interoperable fog systems to accelerate advanced deployments in smart cities, smart energy, smart transportation, smart healthcare, smart manufacturing and more. Its eight pillars describe requirements for every part of the fog supply chain: component manufacturers, system vendors, software providers and application developers.

Looking forward, the OpenFog Consortium will publish additional details and guidance on this architecture, specify APIs for key interfaces, and work with standards organizations such as IEEE on recommended standards. The OpenFog technical community is working on a suite of follow-on specifications, testbeds that prove the architecture, and new use cases to enable component-level interoperability. Eventually, this work will lead to certification of industry elements and systems, based on compliance with the OpenFog RA.

Technology report by OpenFog Consortium, Architecture Workgroup.


Source: Industrial Ethernet Book Issue 99 / 13