ICONA: Inter Cluster ONOS Network Application
Matteo Gerola, CREATE-NET
Since the beginning of the Software Defined Networking (SDN) revolution, the reliability, scalability and availability of the control plane have been among the major concerns expressed by Service and Cloud Providers. Existing deployments show that standard IP/MPLS networks natively offer fast recovery in case of failures. Their main limitation lies in the complexity of the distributed control plane implemented in the forwarding devices: IP/MPLS networks fall short when it comes to designing and implementing new services that require changes to the distributed control protocols and service logic. The SDN architecture, which splits the data and control planes, simplifies the introduction of new services by moving the intelligence from the physical devices to a Network Operating System (NOS), also known as a controller, that is in charge of all forwarding decisions. The NOS is logically centralized, but in production environments it must be physically distributed so that it does not introduce a single point of failure. Several distributed NOS architectures have been proposed recently to guarantee the proper level of redundancy in the control plane: Onix [1], Kandoo [2] and HyperFlow [3], to name a few.
ONOS (Open Network Operating System) [4] provides one of the most promising solutions for control plane robustness in SDN. ONOS offers a stable implementation of a distributed NOS and was recently released under a liberal open-source license, supported by a growing community of vendors and operators. In the ONOS architecture, a cluster of controllers shares a logically centralized network view: network resources are partitioned and controlled by different ONOS instances in the cluster, and resilience to faults is guaranteed by design, with automatic traffic rerouting in case of node or link failure. Despite its distributed architecture, ONOS is designed to control the data plane from a single location, even in large Wide Area Network (WAN) scenarios. However, as envisioned in the work of Heller et al. [5], even though a single controller may suffice to guarantee round-trip latencies on the scale of typical mesh restoration target delays (200 msec), this may not hold for all possible network topologies. Furthermore, an adequate level of fault tolerance can be guaranteed only if controllers are placed in different locations of the network.
To this purpose, we designed a geographically distributed multi-cluster solution called ICONA (Inter Cluster ONOS Network Application), which runs on top of ONOS. The main objectives of ICONA are to increase robustness to network faults by distributing ONOS clusters across several locations and, further, to decrease event-to-response delays in large-scale networks. In the rest of this article we consider a large-scale WAN use case under a single administrative domain (e.g. a single service provider), managed by a single logical control plane. ICONA is a tool that can be deployed on top of ONOS; a first release is available under the Apache 2.0 open-source license [6].
Figure 1: ONOS distributed architecture
ONOS implements a distributed architecture in which multiple controller instances share multiple distributed data stores with eventual consistency. Figure 1 shows the ONOS internal architecture within a cluster of four instances. The entire data plane is managed simultaneously by the whole cluster; with these mechanisms in place, ONOS achieves scalability and resiliency. The Southbound modules manage the physical topology, react to network events and program/configure the devices, leveraging different protocols. The Distributed Core is responsible for maintaining the distributed data stores, electing the master controller for each network portion and sharing information with the adjacent layers. The Northbound modules offer an abstraction of the network and the interface through which applications interact with and program the NOS. Finally, the Application layer offers a container in which third-party applications can be deployed. ONOS leverages Apache Karaf [7], a Java OSGi-based runtime that provides a container onto which various components can be deployed. Under Karaf, an application such as ICONA can be installed, upgraded, started and stopped at runtime, without interfering with other components.
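The per-portion mastership election mentioned above can be illustrated with a small sketch. This is a schematic Python model, not ONOS's actual election protocol: each device gets exactly one master instance, and devices orphaned by an instance failure are re-elected among the survivors, so the data plane never loses its controller.

```python
# Schematic sketch (not ONOS code) of the Distributed Core's mastership
# election: one master instance per device, with re-election on failure.

class DistributedCore:
    def __init__(self, instances):
        self.instances = list(instances)   # e.g. ["onos-1", ..., "onos-4"]
        self.mastership = {}               # device id -> master instance

    def elect_master(self, device_id):
        # Toy deterministic spread of devices across the cluster; any
        # instance computing this locally agrees on the outcome.
        index = sum(device_id.encode()) % len(self.instances)
        self.mastership[device_id] = self.instances[index]
        return self.mastership[device_id]

    def on_instance_failure(self, failed):
        # Re-elect masters only for the devices the failed instance owned;
        # the rest of the cluster keeps its mastership unchanged.
        self.instances.remove(failed)
        for dev, master in list(self.mastership.items()):
            if master == failed:
                self.elect_master(dev)
```

In the real system the shared data stores keep the mastership map consistent across the cluster; the deterministic hash here merely stands in for that agreement.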
Figure 2: ICONA architecture: the data plane is divided into portions interconnected through so-called inter-cluster links (in red). Each portion is managed by multiple co-located ONOS and ICONA instances, while the different clusters are deployed in separate data centers.
The basic idea behind ICONA is to partition the Service Provider's network into several geographical regions, each one managed by a different cluster of ONOS instances. While the management of the network remains under a single administrative domain, a peer-to-peer approach allows a more flexible design: the network architect can select the number of clusters and their geographical extent depending on the requirements (e.g. leveraging some of the tools suggested in the aforementioned work [5]), without losing any feature offered by ONOS and without degrading system performance. In this scenario, each ONOS cluster provides both scalability and resiliency to a small geographical region, while the clusters use a publish-subscribe event manager to share topology information, monitored events and operator requests.
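One plausible way to carve the data plane into such regions, sketched below under assumptions of our own (ICONA does not prescribe this policy), is to assign each device to the cluster whose data center offers the lowest control-plane latency; the latency figures in the example are hypothetical.

```python
# Illustrative partitioning sketch: assign each device to the nearest
# cluster by measured control-plane RTT. Not ICONA's actual algorithm.

def partition(devices, latency_ms):
    """latency_ms maps (device, cluster) -> hypothetical RTT in ms."""
    regions = {}
    for dev in devices:
        candidates = {c for (d, c) in latency_ms if d == dev}
        # Pick the cluster with the lowest RTT to this device.
        regions[dev] = min(candidates, key=lambda c: latency_ms[(dev, c)])
    return regions
```

More sophisticated placements (e.g. balancing cluster sizes or bounding worst-case latency, as studied in the controller placement literature [5]) follow the same pattern with a different objective function.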
The ICONA control plane is geographically distributed among different data centers, each one controlling the adjacent portion of the physical network. Figure 2 shows this architecture, where each cluster is located in a different data center (e.g. ONOS 1a and ONOS 1b are co-located instances). To offer the same reliability already available in ONOS, the ICONA application runs on top of each instance. Referring to Figure 1, ICONA is deployed in the Application layer and interacts with ONOS using the Northbound API.
The communication between different ICONA instances is currently based on a multicast, event-based messaging system inspired by the Java Message Service (JMS). Applications (e.g. controllers) can publish a message, which is then distributed to all instances that have subscribed to the corresponding topic. The application is internally divided into three modules (see Figure 2):
- Topology Manager (TM): discovers the data plane elements, reacts to network events, and stores the links and devices, together with the relevant metrics, in a persistent local database.
- Service Manager (SM): handles all the interaction between the network operators and ICONA.
- Backup Manager (BM): handles failures on inter-cluster links. If a failure happens within a single cluster, ONOS takes care of rerouting all the paths affected by it; if the failure instead occurs between two different clusters, ICONA's BM module handles the recovery.
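The interplay between the JMS-style channel and the BM's dispatch rule can be sketched as follows. This is a toy model under assumptions of our own (class, topic and mapping names are illustrative, not ICONA's API): failure events are multicast to every subscribed instance, and only inter-cluster failures with an endpoint in the receiving cluster are acted upon, since intra-cluster failures are already recovered by ONOS.

```python
from collections import defaultdict

# Toy publish-subscribe channel plus Backup Manager dispatch rule.
# Names are illustrative; ICONA's real transport and API differ.

class EventChannel:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:     # multicast to all instances
            cb(message)

class BackupManager:
    def __init__(self, cluster, channel, cluster_of):
        self.cluster = cluster
        self.cluster_of = cluster_of           # device -> owning cluster
        self.handled = []
        channel.subscribe("link-failure", self.on_failure)

    def on_failure(self, link):
        src, dst = link
        if self.cluster_of[src] == self.cluster_of[dst]:
            return                             # intra-cluster: ONOS reroutes
        # Inter-cluster link: the BM of an endpoint cluster reacts.
        if self.cluster in (self.cluster_of[src], self.cluster_of[dst]):
            self.handled.append(link)
```

For example, with switches sw1 and sw2 in cluster c1 and sw3 in cluster c2, a failure of (sw1, sw2) is ignored by every BM, while a failure of the inter-cluster link (sw2, sw3) is picked up by the BMs of both c1 and c2.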
The ICONA full paper [8] contains more information on the application and an experimental comparison between ONOS and ICONA.
ICONA (Inter Cluster ONOS Network Application) is a tool built on top of the ONOS distributed controller that enables clustering of ONOS instances across different locations in a Wide Area Network. ICONA is based on a pragmatic approach that inherits the ONOS features while improving its responsiveness to network events, such as link/node failures or congested links, that impose the rerouting of a large set of traffic flows. We are collaborating with ON.Lab to evaluate the opportunity of including some of the ideas developed in ICONA as part of the official ONOS release.
[1] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama, and S. Shenker. Onix: A distributed control platform for large-scale production networks. 9th USENIX Conference on Operating Systems Design and Implementation (OSDI), 2010.
[2] S. Hassas Yeganeh and Y. Ganjali. Kandoo: a framework for efficient and scalable offloading of control applications. First Workshop on Hot Topics in Software Defined Networking (HotSDN), 2012.
[3] A. Tootoonchian and Y. Ganjali. HyperFlow: a distributed control plane for OpenFlow. 2010 Internet Network Management Conference on Research on Enterprise Networking (INM/WREN), 2010.
[4] P. Berde, M. Gerola, J. Hart, Y. Higuchi, M. Kobayashi, T. Koide, B. Lantz, B. O'Connor, P. Radoslavov, W. Snow, and G. Parulkar. ONOS: towards an open, distributed SDN OS. Third Workshop on Hot Topics in Software Defined Networking (HotSDN), 2014.
[5] B. Heller, R. Sherwood, and N. McKeown. The controller placement problem. First Workshop on Hot Topics in Software Defined Networking (HotSDN), 2012.
[6] ICONA software: https://github.com/smartinfrastructures/dreamer
[7] Apache Karaf: http://karaf.apache.org/
[8] ICONA full paper: http://arxiv.org/abs/1503.07798
Matteo Gerola is a software architect and senior research engineer in the Future Networks unit at the CREATE-NET research center. His main research interests focus on SDN, Network Virtualization and OpenFlow. Within CREATE-NET he has been involved in several European projects on SDN, optical technologies, and Future Internet test-beds. He has published more than 20 papers in international refereed journals and conferences. Over the years, he has participated in events, conferences and workshops as a TPC member, author and invited speaker. He is an active member of the Open Network Operating System (ONOS) community, and an ON.Lab alumnus.
Jose M. Verger is a networking industry veteran who has worked in new product development, engineering and product management for Cisco Systems, 3COM, Bell Communications Research (Bellcore), AT&T and multiple successful start-ups such as Sentient Networks, Point Red and Wavezero. Currently, Jose is at Verizon, focusing on mobile public network architecture and planning for enterprise services, including virtualization efforts.
IEEE Softwarization Editorial Board
Laurent Ciavaglia, Editor-in-Chief
Mohamed Faten Zhani, Managing Editor
TBD, Deputy Managing Editor
Syed Hassan Ahmed
Dr. J. Amudhavel
Atta ur Rehman Khan
Muhammad Maaz Rehan