This is one of the easiest topics for me to talk about, since it was the basis of my undergraduate degree, my specialization and my MSc. Of course, since this is the first post on the subject, I won’t jump straight into advanced topics; instead, I’ll present the fundamental principles for anyone interested in learning about computer networks.
Even though it may not be very “didactic”, I won’t be covering the history of computer networks: I don’t believe it should be the cornerstone of this subject. Instead, we’ll go straight to what makes the most sense when we talk about the probable future of computer networks: software communication.
If you’re interested in the history of computer networks, there’s plenty of material around the web; just take a look at the ARPANET =)
Why do we need computer networks?
Let’s take an approach similar to the one in the “Information Security” introductory post: why is it important for us to learn about computer networks, and how do they affect our lives? First of all, if you’re reading this post, then you’re obviously using a computer network to reach the place where this information lives. I myself have no idea where the bits that compose this post are stored, and they’re probably not even in just one place (I do hope so), but somehow you managed to get here to read it, and I managed to get here to write it, right?
So, basically, several computers are connected to each other with copper or optical fiber, some with radio frequency and, why not, maybe someday with quantum entanglement? The main point is: there’s a path to get wherever you need to go, and if you know how to find this path, you can get there. This is a basic concept in computer networks: there are many networks connected to other networks, using a lot of different technologies that work together to get you anywhere.
Still, there’s something the new generation of IT professionals needs to know by heart and keep in mind when learning or working with networks: there’s software and firmware communicating at every layer of the standard that makes the network possible — the OSI Model, which we’ll cover soon.
Hosts and core network
A computer network is composed of two types of computational roles interconnected through a medium: the hosts and the core network. Hosts are the end devices in the communication context, the ones that provide information or request it: desktops, laptops, cellphones, TVs and many other devices. The core network is composed of the equipment responsible for transporting these requests and replies between the end hosts: switches, routers, gateways, modems and so on.
Those hosts can be organized in two architectures: Client-Server and Peer-to-Peer (also called ad hoc). In the first, computers have two well-defined roles: the Server, responsible for serving information through software called service daemons, and the Clients, the computers that request a provided service. In a Peer-to-Peer (P2P) or ad hoc architecture, there’s no clear separation of those roles, so every host can be a server and a client at the same time, exchanging information through requests and replies. These hosts are called Peers.
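To make the Client-Server idea concrete, here’s a minimal sketch in Python: a tiny “service daemon” that echoes back whatever a client sends it. Everything here (the loopback address, the echo behavior, the `reply:` prefix) is just an illustration, not any real protocol.

```python
import socket
import threading

def run_server(ready):
    """A minimal service daemon: accepts one client and echoes its request back."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    server.listen(1)
    ready["port"] = server.getsockname()[1]
    ready["event"].set()                   # tell the client the server is up
    conn, _addr = server.accept()
    data = conn.recv(1024)                 # the client's request
    conn.sendall(b"reply: " + data)        # the server's reply
    conn.close()
    server.close()

# Start the Server role in a background thread.
ready = {"event": threading.Event()}
threading.Thread(target=run_server, args=(ready,)).start()
ready["event"].wait()

# The Client role: connect, send a request, read the reply.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", ready["port"]))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
print(reply.decode())  # reply: hello
```

In a P2P architecture, each peer would run both halves of this code at once: listening for requests while also making its own.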
The OSI Model
If you’ve ever studied computer networks, you probably learned about a theoretical model that determines the whole workflow of how computers interact to make such a hard task possible. The idea is: divide everything needed to make communication work into layers with specific responsibilities, so that each layer can solve its intended problem without impacting the other layers in the model. That’s the basic idea.
The Physical Layer is responsible for transferring the binary representation of the information through the physical medium, be it, as said before, wireless, copper, fiber, or even quantum entanglement. It makes sure that the medium is adequate and that the signaling can reach its destination.
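Just to visualize what “the binary representation of the information” means, here’s a tiny sketch that turns a message into the stream of bits the physical layer would actually signal over the medium (the message itself is made up):

```python
# Encode a short text message to bytes, then render it as the bit string
# that the physical layer would transmit as electrical/optical/radio signals.
message = "hi"
bits = "".join(format(byte, "08b") for byte in message.encode("ascii"))
print(bits)  # 0110100001101001
```

Each group of eight bits is one character: `01101000` is “h” and `01101001` is “i”.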
The Data Link Layer is responsible for addressing the delivery of those bits in a local network, where the physically interconnected computers are near each other and identified by their physical addresses. This layer usually provides some error detection (and sometimes correction) mechanism to prevent data loss.
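A concrete example of those physical addresses: an Ethernet frame starts with the destination and source MAC addresses. The sketch below builds a hypothetical 14-byte Ethernet II header (the MAC addresses are made up) and unpacks it the way a data link implementation would:

```python
import struct

# A hypothetical Ethernet II header: destination MAC, source MAC, EtherType.
frame = bytes.fromhex(
    "ffffffffffff"   # destination: the broadcast MAC address
    "0203aabbccdd"   # source: a made-up MAC address
    "0800"           # EtherType 0x0800 means an IPv4 payload follows
)

# "!6s6sH" = network byte order, two 6-byte strings, one unsigned short.
dst, src, ethertype = struct.unpack("!6s6sH", frame)

def mac_str(mac):
    """Render 6 raw bytes in the usual colon-separated MAC notation."""
    return ":".join(f"{b:02x}" for b in mac)

print(mac_str(dst))    # ff:ff:ff:ff:ff:ff
print(mac_str(src))    # 02:03:aa:bb:cc:dd
print(hex(ethertype))  # 0x800
```

Note how the frame already hints at the layering: the EtherType field tells the receiver which Network Layer protocol the payload belongs to.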
The Network Layer is responsible for another round of addressing and delivery of data, but this time what matters isn’t the machine’s physical identification, but the logical address it can dynamically receive. This layer is also responsible for interconnecting networks with other networks, through the routing function.
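We’ll cover routing properly later, but here’s a minimal sketch of the core idea using Python’s `ipaddress` module: a router holds a table of destination networks and picks the most specific match for each packet. All the networks and next-hop addresses below are made up for illustration:

```python
import ipaddress

# A tiny routing table: (destination network, next hop). Addresses are made up.
routes = [
    (ipaddress.ip_network("10.0.0.0/8"),  "10.255.255.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.1"),
    (ipaddress.ip_network("0.0.0.0/0"),   "192.168.0.1"),  # default route
]

def next_hop(destination):
    """Pick the matching route with the longest prefix, as a real router does."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routes if addr in net]
    best = max(matches, key=lambda m: m[0].prefixlen)
    return best[1]

print(next_hop("10.1.2.3"))  # 10.1.0.1 (the /16 beats the /8)
print(next_hop("8.8.8.8"))   # 192.168.0.1 (falls through to the default route)
```

That “longest prefix wins” rule is what lets many networks with different technologies interconnect: each router only needs to know the next step of the path, not the whole path.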
The Transport Layer is responsible for the end-to-end communication between the software on each host. This is the layer where we identify the software involved and apply the control and reliability functions that make sure the information is delivered to the final process.
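That “identification of the software” is done with port numbers. A toy model of what the operating system does when a segment arrives — looking up which process owns the destination port — might look like this (the port-to-process table and process names are made up):

```python
# A toy model of transport-layer demultiplexing: the port -> process table
# the OS consults when a segment arrives. Entries here are made up.
listening = {
    80:  "web-server-daemon",
    22:  "ssh-daemon",
    443: "web-server-daemon",
}

def deliver(segment):
    """Hand the segment's payload to whichever process owns the destination port."""
    port, payload = segment
    process = listening.get(port)
    if process is None:
        return ("error", "port unreachable")  # nobody is listening there
    return (process, payload)

print(deliver((80, b"GET /")))  # ('web-server-daemon', b'GET /')
print(deliver((9999, b"???")))  # ('error', 'port unreachable')
```

So while the Network Layer gets the packet to the right machine, it’s the port number that gets the data to the right program on that machine.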
At last, the Session, Presentation and Application Layers, which are usually implemented as a single layer, are responsible for defining the functions and the encoding and decoding of the information the software will exchange. To make an analogy, this is where two people define the language they’ll speak, including the semantics and the syntax of the words used.
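Continuing the analogy, here’s a sketch of a made-up application-layer “language”: both sides agree that each message is a UTF-8 JSON object. The field names (`action`, `payload`) are an invented convention, not any real protocol — the point is that the agreement itself is what this layer provides:

```python
import json

def encode(action, payload):
    """The sender's side of the agreement: fixed syntax (JSON) and coding (UTF-8)."""
    return json.dumps({"action": action, "payload": payload}).encode("utf-8")

def decode(raw):
    """The receiver reverses the agreed-upon coding to recover the meaning."""
    message = json.loads(raw.decode("utf-8"))
    return message["action"], message["payload"]

wire = encode("greet", "hello")
action, payload = decode(wire)
print(action, payload)  # greet hello
```

Swap JSON for another agreed format and nothing below this layer needs to change — which is exactly the independence between layers the OSI Model is designed for.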
That’s all for now, folks!
Even though I want to take a different approach to teaching computer networks, there are some basic concepts I can’t avoid covering just like everyone else does. In the next parts of this series we’ll start seeing some points of view that differ from what we usually find in the most famous textbooks.
Thanks for reading this far, and I hope you enjoyed it =) if you have any questions, feel free to use the comments section.