SDN Architectural Limitations: Towards a Full Software Network Vision

Sven van der Meer, Ericsson NM Labs, Athlone, Ireland; and Eduard Grasa, Fundació i2CAT, Barcelona, Spain

 

Despite being only a few years old as a technology, Software Defined Networking (SDN) has jumped from research labs into the product portfolios of most of the big networking industry players and a myriad of innovative start-ups. SDN has changed our perception of the network, much as cloud computing did not long ago for computing and storage. It has decoupled control software from specialized forwarding hardware by virtualising the underlying physical resources and providing (semi-)standardized interfaces to them, allowing for increased service deployment agility. However, we argue that SDN in its current incarnation is just the beginning of the network softwarisation trend, since it still carries too much heritage from legacy telecommunication network architectures.

Figure 1 SDN Architecture Overview (ONF)

Figure 1 describes the architecture of Software Defined Networks as defined by the Open Networking Foundation (ONF) [1]. The SDN architecture is composed of four main entities:

  • Interconnected network elements in the data plane, which move data between a number of endpoints.
  • One or more SDN controllers in the “controller plane”, each one hosting the functions to control one or more network elements and exposing “network services” to applications via an API.
  • SDN applications that interact with the network services offered by the controller layer.
  • OSS systems that provide supporting management functions and integrate the SDN with the rest of the provider's infrastructure.
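To make the controller/network-element split concrete, the sketch below shows the match/action abstraction that an SDN controller installs on a forwarding element. The field names are OpenFlow-style but simplified for illustration; this is not the API of any particular controller.

```python
# Illustrative sketch (not a real controller API): an OpenFlow-style
# flow table as a controller would install it on a network element,
# plus a minimal lookup routine the data plane would perform.

flow_table = [
    {"priority": 200,
     "match": {"ipv4_dst": "10.0.0.2"},          # exact-match on destination
     "actions": [("OUTPUT", 2)]},                # forward out port 2
    {"priority": 0,                              # table-miss entry
     "match": {},                                # matches every packet
     "actions": [("OUTPUT", "CONTROLLER")]},     # punt unknowns to the controller
]

def apply_table(packet: dict) -> list:
    """Return the actions of the highest-priority rule matching the packet."""
    for rule in sorted(flow_table, key=lambda r: -r["priority"]):
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["actions"]
    return []

print(apply_table({"ipv4_dst": "10.0.0.2"}))   # [('OUTPUT', 2)]
print(apply_table({"ipv4_dst": "10.0.0.9"}))   # [('OUTPUT', 'CONTROLLER')]
```

Note what the abstraction expresses: where to forward packets, nothing more. This is exactly the "forwarding devices controlled via APIs" view discussed next.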

A brief analysis of this picture leads to the conclusion that SDN, as defined by the ONF, sees the network as a set of forwarding devices that can be controlled by logically centralized software controllers via well-defined APIs. This view is essentially the same as that of classical telecommunication networks dating back to telephony, in which a physical network moves data between the endpoints of physical devices. Yet we are in the 21st century, where computer networking is part of a global distributed computing platform, providing connectivity and performance to a diverse set of applications with varying requirements.

While the control aspects are now detached from the physical network and can be abstracted out and programmed, the network is still essentially presented as a flat collection of physical devices forwarding data between interfaces. In this paradigm, applications are not first-class network citizens: they are presented with a network endpoint (IP address plus transport port number) and a very limited service choice. Sockets, the standard application-network API, only allow applications to request a reliable or an unreliable service, with no way to express performance needs such as maximum delay or packet loss. The lack of application names in the network also exacerbates issues such as security, multi-homing and mobility [2].
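The narrowness of that service choice is easy to see in code. Using only Python's standard socket module, the snippet below shows the entire "service menu" an application can pick from:

```python
import socket

# The standard sockets API names the destination by IP address + port and
# offers exactly two service types: a reliable byte stream (SOCK_STREAM,
# i.e. TCP) or unreliable datagrams (SOCK_DGRAM, i.e. UDP).
reliable = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
unreliable = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Nothing in this API lets the application state performance requirements
# such as "one-way delay below 50 ms" or "packet loss below 0.1%"; these
# two constants are the whole service choice.
print(reliable.type, unreliable.type)

reliable.close()
unreliable.close()
```

Any finer-grained requirement (delay, jitter, loss) has to be handled outside the API, by over-provisioning or by operator-side traffic engineering the application cannot see.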

The structural organization of the "data plane" functions is another limitation of the current SDN approach. Network functions are usually organized in stacks of layers, each one traditionally performing a different function (physical access, medium access control, network addressing and routing, and end-to-end transport) implemented by different protocols. However, the theoretical simplicity of the functional layering model does not hold in practice. Over the years, multiple "enhancements" to the basic model have been introduced to address scalability and security problems, such as "tunnels", "layer 2.5" and "virtual layers", leading to an ever-growing protocol base [3] and ever-growing network complexity. SDN promises to simplify the management of such networks by "virtualizing" the network resources belonging to different layers and offering them to the control functions via simplified models, but it does nothing to attack the root of the problem: the structure of the "data plane" itself.

In fact, the concepts of "control" and "data" planes are borrowed from telecom networks; do they help in defining the software networks of the future? Or are they a barrier to completely transforming how we perceive, design, deploy and operate networks? What if we used different concepts to define a network architecture, without having to think in terms of hardware constraints? Some people have started doing exactly that under the RINA (Recursive InterNetwork Architecture) effort [4], and the resulting architecture is surprisingly simple and elegant (Figure 2). First, networking is a distributed application (if it uses computers, what else could it be?), specialized to provide communication services to other applications: networking is Inter-Process Communication, or IPC¹. Second, networking is recursive: layers provide communication services to each other, over a given scope and range, but do not have fundamentally different functions. Third, all the layers in a network have the same basic functions, customized to the operating environment for which they have been designed (e.g. a VPN layer vs. a network layer vs. a layer managing a wireless link). Last but not least, all the layers provide the same API, which allows applications on top to request communication services from each other by name, specifying the characteristics required by each service (delay, jitter, loss, etc.).

Figure 2 Example of the RINA structure
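To make the "same API at every layer" idea concrete, here is a toy sketch in Python. The class and method names (Layer, QoSSpec, allocate_flow) and the simplified behaviour are illustrative inventions for this article, not the actual RINA or IRATI API:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class QoSSpec:
    """Characteristics an application may request from any layer (illustrative)."""
    max_delay_ms: Optional[float] = None
    max_loss_rate: Optional[float] = None
    in_order: bool = True

@dataclass
class Layer:
    """A toy RINA-style layer: every layer, whatever its rank, offers the
    same allocate-flow API, keyed by application *name*, not by address."""
    name: str
    lower: Optional["Layer"] = None       # recursion: built on the layer below
    registry: Dict[str, bool] = field(default_factory=dict)
    next_handle: int = 1

    def register(self, app_name: str) -> None:
        # Destinations are application names registered with the layer.
        self.registry[app_name] = True

    def allocate_flow(self, dest_app: str, qos: QoSSpec) -> int:
        """Hypothetical simplification: a real layer would map `qos` onto its
        internal policies and, if needed, request a supporting flow from
        self.lower through this very same API."""
        if dest_app not in self.registry:
            raise LookupError(f"{dest_app} unknown in layer {self.name}")
        handle = self.next_handle
        self.next_handle += 1
        return handle

backbone = Layer("backbone-dif")
vpn = Layer("vpn-dif", lower=backbone)    # layers stack; they differ in scope, not in kind
vpn.register("video-server")
flow = vpn.allocate_flow("video-server",
                         QoSSpec(max_delay_ms=50, max_loss_rate=0.001))
print("flow handle:", flow)               # prints: flow handle: 1
```

Contrast this with the sockets example above: the application asks for "video-server" with an explicit delay and loss budget, and never sees an address, a port number, or the number of layers beneath it.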

Such a network architecture minimizes the number of protocols required to support all forms of computer networking, thus bounding complexity and facilitating network management. It natively supports virtualization, since the number of layers in each network is decided by the network designers, not by the architecture; it enables the programmability of all networking functions; and it completely unifies networking and distributed computing, since the network is just a set of specialized distributed applications that provide IPC. Notice that concepts such as "physical" or "virtual" network elements are irrelevant to the architecture; what matters is how the different functions are organized into layers and how they interact with each other. Which functions are deployed in specialized silicon, general-purpose hardware or operating systems becomes a matter of implementation requirements.

Summing up, SDN initiatives to date have opened the door to changing how we perceive networks, but have focused only on the "control aspects" of our current "old-telecom-style" computer networks. Although the benefits gained by this transformation alone are large, they are only a small piece of the potential pie. Realizing the full software network vision being explored by initiatives such as RINA will be the complete game-changer.

 

¹ The quote is from Robert Metcalfe, 1972.

 

References

[1] Open Networking Foundation, “SDN architecture, Issue 1”. ONF TR-502, June 2014.

[2] J. Day. "How in the heck do you lose a layer?". 2011 International Conference on the Network of the Future.

[3] IETF statistics. “RFCs per year”. Available online: http://arkko.com/tools/rfcstats/pubdistr.html

[4] J. Day, I. Matta, K. Mattar. "Networking is IPC: A guiding principle to a better Internet". ACM CoNEXT 2008.

 


 

Dr. Sven van der Meer received his PhD in 2002 from Technical University Berlin. He joined Ericsson in 2011, where he is currently a Master Engineer and team leader in the Ericsson Network Management Labs. Most of his current time is dedicated to designing and building advanced policy systems that can be used to direct the behaviour of complex event management systems. In the past, Sven has worked with Fraunhofer FOKUS, Technical University Berlin and the Telecommunication Software and Systems Group (TSSG), leading teams and projects, consulting partners and customers, and teaching at university level. He is actively involved in the IEEE CNOM community as a standing member of programme committees (IFIP/IEEE IM and IEEE/IFIP NOMS, amongst others) and in organising successful workshop series (IEEE MACE, IEEE MUCS and IEEE ManFed.Com, amongst others). He has also contributed to standardisation organisations, namely the OMG and the TM Forum. He has published more than 100 articles, conference papers, books and technical reports, and has supervised and evaluated 6 PhD and more than 30 M.Sc. students.

 

Dr. Eduard Grasa graduated in Telecommunication Engineering at the Technical University of Catalonia (UPC) in July 2004 and received his Ph.D. there in February 2009. In 2003 he joined the Optical Communications Group (GCO), where he did his thesis on software architectures for the management of virtual networks in collaboration with i2CAT, which he joined in 2008. He has participated in several national and international research projects. His current interests are focused on the Recursive InterNetwork Architecture (RINA), a clean-slate internetwork architecture proposed by John Day. He was the technical lead of the FP7 IRATI project, in which a RINA prototype for Linux over Ethernet was researched, and is currently the technical lead of the FP7 PRISTINE project, investigating the programmability and distributed management aspects of RINA.

 

Editor:

Neil Davies is an expert in resolving the practical and theoretical challenges of large-scale distributed and high-performance computing. He is a computer scientist, mathematician and hands-on software developer who builds both rigorously engineered working systems and scalable demonstrators of new computing and networking concepts. His interests center around scalability effects in large distributed systems, their operational quality, and how to manage their degradation gracefully under saturation and in adverse operational conditions. This has led to recent work with Ofcom on scalability and traffic management in national infrastructures.

Throughout his 20-year career at the University of Bristol he was involved with early developments in networking, its protocols and their implementations. During this time he collaborated with organizations such as NATS, Nuclear Electric, HSE, ST Microelectronics and CERN on issues relating to scalable performance and operational safety. He was also technical lead on several large EU Framework collaborations relating to high performance switching. Mentoring PhD candidates is a particular interest; Neil has worked with CERN students on the performance aspects of data acquisition for the ATLAS experiment, and has ongoing collaborative relationships with other institutions.

 


 
