On the drive down Middlefield Road into Redwood City, the complete lack of trees is astounding. I have a vague recollection that the town had a history of logging, which would account for both the name and the absence of Sequoioideae.
Palo Alto was named after a tree too. Spanish for “the tall tree,” El Palo Alto is a 1,000-year-old coast redwood that stood almost 50 meters high beside San Francisquito Creek. Like many of us, it’s getting shorter with age: in 1955, 60 feet of the top had to be lopped off. The lowering water table would’ve killed it otherwise.
It was to Palo Alto that I’d smuggled one of Adaptive’s Sun workstations to finish my port. In the spare bedroom of our apartment, I got back into the swing of the porting session using an MVME147 bucket and two tape drives.
It was time to turn to my next problem: networking.
I needed a TCP/IP stack to sit atop the AMD LANCE Ethernet port. Besides talking to the management system, TCP would also run between STM/18s to set up DS1 and DS3 paths through the user’s network of STMs.
My first thought was Berkeley Unix, so I grabbed a copy of the Net/1 release of the BSD network stack. I’d first worked with BSD on a VAX750 running 4.2 back in the early 80s. This was well-known territory, even though the socket interface didn’t feel quite right from a Unix point of view (sockets didn’t live in the filesystem namespace), and it was mysterious around the edges (what would happen if I set more of those parameter bits?).
The whole world is glued together with TCP today, but in 1990 it was still new to most of the public. In fact, the protocol wasn’t completely dry yet. A year earlier, TCP’s original specification, its Request For Comments (RFC), had needed a clarifying update. Mosaic, the first widely popular web browser, wouldn’t ship until the end of ‘93.
The network technology we use today started as the ARPAnet back in the ’60s. The US Defense Department built a nationwide network out of 50,000 bits per second dedicated lines that linked network nodes called Interface Message Processors, or IMPs. The nodes connected to the different hosts at the various universities and research facilities across the country so scientists could transfer files. FTP and Telnet date back to those pre-TCP days.
The ARPAnet was a packet-switching network, but it had an issue with scaling. The trouble started in 1976 when researchers began to interconnect networks. The central technology of the ARPAnet, the NCP protocol, couldn’t solidly glue together networks that it didn’t have intimate knowledge of. A single mesh of IMPs worked great. Plug different meshes together and things fell apart.
One of the inventors of ARPAnet technology, Vinton Cerf, joined Robert Kahn to figure out a solution. NCP was out. On January 1, 1983, the entire ARPAnet switched to TCP/IP and the Internet was born.
All this work predated the OSI protocol model. In that framework, the jobs required for transferring data are divided into layers. At the bottom of the stack is getting bytes from one machine to another; each layer builds on the one below it, all the way up to the top of the stack--to where you’re reading this blog.
Our friends Cerf and Kahn were working on this together in the mid-’70s, almost a decade before the OSI model emerged as a spec in 1984. I haven’t dug into it, but I’ve always assumed their experience is what helped folks realize that we needed to divide the problem into layers.
They tried, unsuccessfully, to send data across various networks using a single layer. Then they realized they had another issue: because there was no way of knowing whether the next network might drop packets, they had to assume that it could. Therefore they needed some sort of acknowledgment that the data had made it to the other end of the conversation. Tracking accepted data was the important part, not accepted packets, and it needed to be done at a higher level.
There can be a mesh of many different networks between two computers, all passing data that goes in one side and out the other to yet another network. Doing this doesn’t require knowing anything about the conversations themselves, just where the packets need to go. The passing of the packets is done by best effort; they can be dropped if memory gets tight, if there’s a bit error on the line connecting the interior nodes in a particular network, or if a lot of conversations are sending to a single host. This is the work of internetworking, not accounting for the transmission of data.
The brilliant insight was to break the two functions into two different protocols: TCP and IP.
IP’s job was to get the packets closer to where they wanted to end up. It used the 32-bit addresses we know and love today, written as four decimal numbers separated by three dots.
The IP part also had to account for networks that couldn’t handle large messages. If a 1,500-byte packet showed up at the old ARPAnet, there was a problem, since the ARPAnet could only move about 1,000 bytes. IP solved this by chopping the packet into fragments, giving each fragment its own header, and sending the pieces along.
Keeping track of the data in transit was the job of the TCP protocol just above IP. It assigned a serial number to each byte that moved across the connection and defined a way to establish a relationship with the destination TCP service so the remote would send acknowledgments back.
TCP used IP to send its messages. Systems like the IMPs of ARPAnet would forward each IP message closer to the destination. Today, IMPs have been replaced by what we know as routers. When the IP message arrives at the destination host, it’s passed up to the TCP layer.
There were other layers that Cerf and Kahn identified in addition to TCP and IP. Below IP there had to be some way to get the IP message between machines. Today it’s known as the data link layer, or layer 2. That layer really has two parts--one governing the format and behavior of messages, and one for the electrical and mechanical details (things like the actual plugs and wires themselves)--so the model puts a physical layer under the data link layer. In most technologies, these two layers come as a matched set. That’s certainly true for Ethernet, which I was using for my Unix port.
For some jobs, there had to be other stuff on top of TCP. Telnet and FTP, for example, were revamped slightly for use over TCP/IP. It was years before HTTP, the transport for the web, was invented. In the OSI model, there are three layers above the TCP layer. From TCP up there are session, presentation, and application. In reality, there is often only the application layer, such as the Simple Mail Transfer Protocol still used today.
For my project, the management software running on another Sun workstation would need to use TCP to talk to me. Given that, I decided that I may as well use TCP/IP to talk between the STM/18s. I’d already written a driver for the AMD LANCE chip on my VME board. The question was how to get TCP/IP. The first step was Net/1.
I began by compiling parts of it to get a feel for its size. There were a few files that made up the routines that dealt with the user end of the stack, something called sockets. Then there was some glue code to deal with the buffers that held messages, mbuf in BSD. One group of files made up the TCP code, and another group was the IP layer. BSD called the layer below IP the interface layer, its files prefixed with if. There was common code for that too.
I compiled them into dot-oh (.o) object files, ran the size(1) command to find out how many bytes of instructions and data there were, and used a tiny awk(1) program to total it all up.
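The tally would have looked something like the pipeline below. This is my reconstruction, not the original script; real use would be `size *.o | awk '...'`, but here sample size(1)-style output is piped in so the sketch runs on its own. (Column layout varies between size implementations; this assumes the classic `text data bss dec hex filename` format.)

```shell
# Total the text (instructions) and data segment sizes that size(1)
# reports for a pile of object files.  The file names are made up.
printf 'text\tdata\tbss\tdec\thex\tfilename
1000\t200\t0\t1200\t4b0\ttcp_input.o
3000\t400\t0\t3400\td48\ttcp_output.o
' |
awk 'NR > 1 { text += $1; data += $2 }
     END { printf "total text %d bytes, data %d bytes\n", text, data }'
```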
To my surprise, the size of the Net/1 stack was larger than the entire rest of the kernel put together. There was a great deal of code there to learn. Knowing its ins and outs would be a requirement to debug problems, and, if needed, tune the performance.
I’ve never just shoehorned a bunch of opaque code into a project. That would be kind of like writing a book by cutting and pasting snippets from different web pages. It would be the writer’s definition of the word “hack.” We call it a kludge.
For what we were doing, Net/1 seemed to be too much. The thought of the network code outweighing the rest of the kernel didn’t feel right.
While pondering what to do next, I started reading the specifications for the protocols. IP was documented in RFC 791. Since the government funded all this development, and they have a love for both acronyms and requesting things, the series of Requests for Proposal turned into Requests for Comments, or RFCs. After comments, an STD (which stood for standard) document would declare some RFCs to be the standard specification. RFC 791 seemed easy enough.
I then read the longer RFC 793: the Transmission Control Protocol spec. It shared some boilerplate at the front, then went into the spec-ish language required to be unambiguous, outlining how connections would be made, how the flow of packets would be controlled, and how the system of acknowledgments would resend lost packets. It all seemed pretty clear, including the diagram of a state machine for opening and closing connections.
Then I reached the section that described event processing. It was a clear description of how to actually implement the protocol. It looked easy enough to translate the words into code. I could use Dennis Ritchie’s streams concept to glue the protocol together and get something that was smaller, easier, and more Unix-like than the BSD stuff.
I decided I’d write the TCP/IP from scratch.
Guess what my management had to say about my decision?