That OSI model refuses to die

By George Michaelson on 12 Dec 2023

Category: Tech matters

If you work in networking in any capacity and haven’t seen this passionate article yet, I strongly recommend you read it. It’s by Robert Graham, published as a Google Doc titled ‘OSI Deprogrammer, Re-conceptualizing cyberspace’ in September 2023.

Graham’s thesis is that the OSI model has to die. (Note, that’s his surname. He’s one of those people who has two first names as his full name. I’ve never met Robert, so he’s Mr Graham to me.)

Traditionally, the OSI model can be thought of as a roadmap for how different parts of a computer network talk to each other. It’s got seven layers, each doing its own thing (a small encapsulation sketch follows the list):

  1. Physical Layer: Handles the hardware like cables and network cards.
  2. Data Link Layer: Makes sure there’s a solid link between connected devices and takes care of errors. Defines the ‘frame’, the largest unit the physical link can carry, and the identifiers used on the link.
  3. Network Layer: Sends data between different networks, using the frames and link-layer identifiers provided by the layer below.
  4. Transport Layer: Manages the flow of data between devices and fixes errors.
  5. Session Layer: Sets up and maintains connections between applications using one or more underlying transports.
  6. Presentation Layer: Makes sure data is readable and secure.
  7. Application Layer: Where software applications interact with the network.
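
To make the layering concrete, here is a minimal sketch of the nesting the model describes: each layer wraps the payload handed down from the layer above before passing it on. It is in Python, and the ‘headers’ are placeholder byte strings, not real protocol formats.

    # Hypothetical illustration of OSI-style encapsulation; nothing here is a
    # real header format, only the wrapping order matters.

    def application_layer(message: str) -> bytes:
        return message.encode("utf-8")          # layer 7: application data

    def transport_layer(payload: bytes) -> bytes:
        return b"TCP-HDR|" + payload            # layer 4: ports, sequencing

    def network_layer(segment: bytes) -> bytes:
        return b"IP-HDR|" + segment             # layer 3: addressing, TTL

    def link_layer(packet: bytes) -> bytes:
        return b"ETH-HDR|" + packet + b"|FCS"   # layer 2: framing plus checksum

    frame = link_layer(network_layer(transport_layer(application_layer("hello"))))
    print(frame)  # b'ETH-HDR|IP-HDR|TCP-HDR|hello|FCS'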

Graham describes the OSI model as ‘wrong’ on many levels, including:

  • It’s based on archaic, mainframe-era concepts of strict functional separation across the procedure call stack, which he feels no longer apply.
  • It lacks subtlety: functions in a network can cross contexts, and calls exposed at the transport layer (TCP, UDP) can affect how the network and link layers (IPv4 or IPv6, and Ethernet or optical networks, respectively) behave for a given session (see the sketch after this list).
  • It imposes a straitjacket on thinking about which parts of a network exchange take place locally, which take place end-to-end, and the respective roles of the client and server throughout this process.
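
As a small illustration of that cross-layer point, here is a sketch using only Python’s standard socket module: a transport-layer (TCP) socket reaching down to adjust a network-layer setting, alongside a transport-layer one. The option values are arbitrary and nothing here is drawn from Graham’s paper.

    import socket

    # A transport-layer (TCP) socket...
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # ...adjusting a network-layer knob: cap the IP time-to-live.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 32)

    # ...and a transport-layer knob: disable Nagle's algorithm for this session.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))  # 32
    sock.close()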

I don’t entirely agree with his analysis. I have to admit I am biased, having worked in this space myself from 1982 (on early implementations of the OSI Transport classes 1, 2, and 3, written as a finite state machine in PL/1 on a VAX VMS system in the UK), again in 1985/6 (at UCL-CS in London, on what became the ISODE system, designed and implemented under the aegis of Marshall Rose), and latterly in the applications space on the X.400 and X.500 systems (email and directory services respectively). So, I have far too much skin both ‘in the game’ and ‘left on the roadside’ from working in this space repeatedly.

The QUIC protocol, which preserves connection state while operating above UDP, is well described as a session layer. The state it preserves is the session-specific information that keeps an application agile as the network underneath it changes. These are precisely some of the functions the OSI model identifies for the session layer. Likewise, TLS performs functions that model session-layer behaviours.
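
Here is a minimal, hypothetical sketch of that idea: session state keyed to a connection identifier (as QUIC does) rather than to the underlying address and port, so the session survives a change of network path. The class and field names are illustrative only, not QUIC’s actual API or wire format.

    from dataclasses import dataclass

    @dataclass
    class Session:
        connection_id: bytes           # identifies the session end to end
        tls_secrets: bytes             # crypto state negotiated once, reused after a move
        local_addr: tuple[str, int]    # the only thing that changes on migration
        next_seq: int = 0              # application-visible ordering state

        def migrate(self, new_local_addr: tuple[str, int]) -> None:
            # Only the network binding changes; the connection ID, keys, and
            # sequencing survive, so the application never notices the move.
            self.local_addr = new_local_addr

    session = Session(b"\x1d\xab\x07\x42", b"...", ("192.168.1.20", 51000))
    session.migrate(("10.64.0.7", 49000))  # for example, a Wi-Fi to cellular handover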

Despite Graham’s assertions, I do not believe the physical-link-network abstractions can be rejected, because the concepts map so strongly onto how networks are understood. It’s important to remember that a model is not just a book of rules; it’s a basis for mutual understanding and further discussion.

However, Graham’s article is well-reasoned, passionately argued, explosive, and fascinating. I recommend that anyone interested in the idea of a network, and in the way networks are represented as concepts (which he explicitly addresses at length), read and reflect on the ideas it contains.

Graham states in the abstract:

[The OSI model] is not just a lie, but unhelpful. It needs to be removed everywhere, except as a historic footnote about 1970s mainframes.

OSI Deprogrammer, Re-conceptualizing cyberspace

What do you think?

The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.

3 Comments

  1. Peter Dawe

    There is SO MUCH WRONG WITH THIS PAPER that it would take days to list it all.

    The biggest is the author’s failure to understand the word “MODEL”. The seven layers are a MODEL for networks. We use models to aid the understanding of a complex system, mainly by simplifying aspects to allow people to understand.

    His apparent “alternative”, in my view, OBFUSCATES functions.

    Perhaps the paper’s most dangerous problem is that it fails to highlight where any layered model fails, where there is a need for one layer to communicate with a non-adjacent layer.

    Abstraction layers have served computing well for decades. The use of the seven-layer model to define abstraction layers has been a boon. Indeed, many of the modern problems of networks have been caused by the failure of engineers to maintain clear abstraction levels (IPv6???).

    If you read this paper, read it with a highly critical eye!

  2. George Michaelson (post author)

    I agree that the paper must be read with a critical eye.

    But it’s an interesting take on things that historically have been viewed differently by OSI and Internet protocol proponents.

    As somebody who started in one camp in the 1980s and moved sideways into the other, I’ve always felt the best role of the model is as a tool for discourse, more than an implementation framework. I say this having worked on a project that implemented the 7-layer model as strict procedure call boundaries, with all the costs that implies.

    It’s a polemic. He argues passionately for a position. I don’t have to agree with it to think it is worth reading!
