Computer Networks: the new basic part 2

This is the part where we start to move a little away from the conventional, or better said "traditional", way of learning about computer networks. At this point, we would usually start learning about the equipment that composes a computer network, or about the transmission media we can use, like copper cables, optical fiber and the radio-frequency spectrum.

But let's take a look at those from a new perspective: Software-Defined Networking.

How did the traditional network work?

When we talk about traditional networks, we have some devices that compose this kind of network, each with a well-defined role in the system. Those are:

Network Hubs
  • Hubs: even though they are not used anymore, Hubs are essentially splicing devices for the transmission medium, which is basically copper cabling, like coaxial cable or Unshielded Twisted Pair (UTP) and its derivatives (Foiled Twisted Pair, FTP; Shielded Twisted Pair, STP). A Hub electronically repeats the same signal through all of its ports, so every connected device receives the same information. Note that Hubs work in the first layer of the OSI model: the Physical Layer.
  • Switches or Bridges: these are much better than Hubs, because they work in the second layer of the OSI model, the Link Layer. They are capable of remembering where each end host is connected (through its MAC Address, also known as the network interface's Physical Address) using a table called the Forwarding Information Base (FIB, sometimes called the MAC Address Table), and they forward received packets directly to their intended destinations, without having to send them through all ports. So, what's the basic difference between a Switch and a Bridge? They both work with the same information and perform the same function, but a Switch's decision making is hardware-based, meaning the FIB and the forwarding algorithms are stored in its Application-Specific Integrated Circuit (ASIC), while a Bridge needs a software implementation to control its hardware and forward content from one port to another, so it's software-based.
  • Routers: Routers are the next layer's equipment, working in the Network Layer. Routers are responsible for deciding how to reach other networks on the internet, so they need a more complex working logic than just remembering where an address came from. Routers use ranges of IP addresses to fill their Routing Information Base (RIB, sometimes called the Routing Table), so they know where to forward an IP packet when it arrives on one of their ports. Routers use routing protocols, like RIP, OSPF and BGP, to distribute their routing information across a network or across the whole internet.

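The Router lookup described above boils down to finding the longest prefix in the RIB that contains the destination address. Here's a minimal sketch in Python using the standard `ipaddress` module; all prefixes and next-hop addresses are made up for illustration:

```python
import ipaddress

# Hypothetical RIB: prefix -> next hop (all values illustrative).
rib = {
    ipaddress.ip_network("10.0.0.0/8"): "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
    ipaddress.ip_network("0.0.0.0/0"): "192.168.1.254",  # default route
}

def lookup(destination: str) -> str:
    """Return the next hop for the longest prefix matching the destination."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in rib if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return rib[best]
```

So `lookup("10.1.2.3")` picks the more specific /16 route, while an address outside `10.0.0.0/8` falls through to the default route. Real routers do this lookup in hardware, but the matching logic is the same.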
Ok, so what’s new?

Now we enter a new paradigm of networking: the Software-Defined Networking (SDN) paradigm. The main idea here is: your devices won't have specific roles in a network, with specific functions hardcoded in hardware. They will be generic network devices, usually called white-boxes, where you define their role in the architecture of the network you're building, and they will be as efficient as a purpose-built component would be.

This is possible due to a principle called Disaggregation. The main idea here is: equipment like Switches and Routers is built in two levels: the Data Plane (DP), where packets are processed and forwarded across the ports, and the Control Plane (CP), where decisions are made and the equipment is configured so that those decisions are reflected in the DP. Once we separate those two parts, we are disaggregating the equipment in a way that lets us improve both layers: our equipment can focus simply on forwarding, while the "intelligence" about how to forward can live somewhere else.
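One way to picture this separation is a tiny match-action sketch (hypothetical Python, not any real SDN API): the data plane is nothing more than a table lookup, and only control-plane code decides what goes into the table.

```python
# Hypothetical match-action table. The data plane only consults it;
# a separate control program is the only thing that fills it in.
flow_table = []  # list of (match_fn, action) pairs, in priority order

def install_rule(match_fn, action):
    """Control plane: push a forwarding decision down into the data plane."""
    flow_table.append((match_fn, action))

def forward(packet):
    """Data plane: apply the first matching rule -- no local intelligence."""
    for match_fn, action in flow_table:
        if match_fn(packet):
            return action(packet)
    return "send-to-controller"  # table miss: ask the control plane

# The control plane decides that one MAC address goes out port 2.
install_rule(lambda p: p["dst"] == "aa:bb:cc:00:00:02",
             lambda p: "out-port-2")
```

The point of the sketch is that `forward` contains no decision logic at all: everything it does was put there by `install_rule`, which could just as well run on another machine.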

Traditional x Software-defined Networks

The first attempt at implementing this idea was the OpenFlow protocol. The idea of this protocol is: the switch is able to receive commands on how to treat flows based on information in the packets' headers, and those commands are sent by a central controller in the network.

Let's say, for example, that I have two hosts connected to one switch and I want them to be able to communicate. In a traditional network, the switch would receive a packet from Host1, learn its MAC Address, save it to its FIB and then send this same packet out of all other ports. Host2 would receive this packet because it was sent to its port. It would then reply to Host1, using its own MAC Address as the source and Host1's MAC Address as the destination, and send the reply to the switch. The switch would then learn Host2's MAC Address from the packet and send it only to Host1, since it already has Host1's address in its FIB. This way, the communication works.
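This learn-and-flood behaviour can be sketched as a small Python class (port numbers and MAC strings are purely illustrative):

```python
class LearningSwitch:
    """Minimal sketch of a traditional switch's learn-and-flood logic."""

    def __init__(self, num_ports):
        self.fib = {}                     # MAC address -> port (the FIB)
        self.ports = set(range(num_ports))

    def receive(self, in_port, src_mac, dst_mac):
        """Return the set of ports the packet is sent out of."""
        self.fib[src_mac] = in_port        # learn where the sender lives
        if dst_mac in self.fib:            # known destination: single port
            return {self.fib[dst_mac]}
        return self.ports - {in_port}      # unknown destination: flood
```

Running the scenario above: Host1's first packet on port 0 floods ports 1-3, but Host2's reply on port 1 goes straight back out port 0, because the switch already learned Host1's address.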

Example Topology

Now, an OpenFlow switch won't have this ability to learn. Instead, it has the capability to communicate with a central controller and receive instructions on how to forward traffic. It can also forward traffic to the controller, so the controller can analyze what to do with the packet and then send it back together with the instructions. In this scenario, the switch receives Host1's packet and sends it to the controller. The controller software analyzes the packet, instructs the switch to learn that Host1 is at its first port, and tells it to send the packet through all other ports. Host2 receives the packet and replies to Host1, and the switch again forwards the packet to the controller. The controller analyzes the received packet and sends an instruction to the switch to learn that Host2 is at the second port; and since the controller already knows that Host1 is at the first port, the packet is also sent out to Host1. This way, the communication works.
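Here's a rough sketch of this interaction in Python: a controller that does the MAC learning on the switch's behalf and installs forwarding rules reactively. All class and method names are hypothetical, not the actual OpenFlow wire protocol:

```python
class Controller:
    """Sketch: the controller holds the intelligence, the switch does not."""

    def __init__(self):
        self.mac_to_port = {}  # controller-side view of where hosts are

    def packet_in(self, switch, in_port, src_mac, dst_mac):
        """Handle a packet the switch couldn't forward on its own."""
        self.mac_to_port[src_mac] = in_port       # learn the sender's port
        if dst_mac in self.mac_to_port:
            out = self.mac_to_port[dst_mac]
            switch.install(dst_mac, out)          # future packets skip us
            return {out}
        return switch.all_ports() - {in_port}     # unknown: flood

class Switch:
    """Sketch of a 'dumb' OpenFlow-style switch: table hit or ask controller."""

    def __init__(self, num_ports, controller):
        self.rules = {}        # dst MAC -> output port, set by the controller
        self.num_ports = num_ports
        self.controller = controller

    def all_ports(self):
        return set(range(self.num_ports))

    def install(self, dst_mac, port):
        self.rules[dst_mac] = port

    def receive(self, in_port, src_mac, dst_mac):
        if dst_mac in self.rules:                 # rule hit: no controller
            return {self.rules[dst_mac]}
        return self.controller.packet_in(self, in_port, src_mac, dst_mac)
```

Note that the switch itself never learns anything: every entry in `rules` was placed there by the controller, which is exactly the disaggregation described earlier.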

At first sight, it may seem that there's no advantage in having this controller, and that the switch could do all the work by itself. And in this scenario, where we're working with only one switch, it's indeed not as interesting as it would be with several interconnected switches between Host1 and Host2. Since the controller has an overview of the whole topology, it can send several instructions at once to all the switches in the topology, making the network quicker to stabilize and forward traffic.

That’s all for now, folks!

In the next part of this computer networks series we will talk in deeper detail about this paradigm for creating networks. It'll be different from what you're used to learning in traditional computer networks classes =)
