The gift of language: that sophisticated sequence of syllables, or groups of letters--a shout-out to those anonymous yet famous Phoenicians who started substituting pictograms for the syllables they were hearing, giving us our flexible alphabet--groups of letters that form syllables, which combine to form words.
The words are arranged into sequences to create one or more ideas, ideas listed, joined, or terminated by that latecomer to language, punctuation. (Another shout-out to Saint Jerome, who started using indentation to give readers of his translation of the Latin Bible hints as to how to read the text. These hints were turned into punctuation by the non-Latin-speaking Anglo-Saxons as marks to aid the reader.)
Like sentences, network frames carry meaning in sequences of things that are themselves sequences of other things.
Ethernet frames are network sentences, each carrying a parcel of data, just as a sentence carries a parcel of ideas. The bytes are the words, the bits the letters. And like sentences, they are almost pure thought stuff, the pressure the electrons exert a mere accident of implementation.
We can see this in our mind’s eye when we think about the original Ethernet technology. Seen from afar, the Ethernet frame is a long rectangle, showing some vague details at first, then more as we slowly zoom in.
At the front, there are some alternating ones and zeros, the preamble, seven bytes worth. Fifty-six bits that give the receiver a chance to sync up, to zero in on the sender’s timing. In our 10MbE example, the preamble looks like a 5 MHz tone.
Then comes the magic value, 11010101 (0xD5). That hop-skip in the rhythm marks the bit that starts the bytes in the stream of ones and zeros. It’s also the signpost for the receiver that the content of the frame is about to begin.
The next 48 bits are the destination address, the “Mail To” of the frame. Since, at least in principle, every receiver on the segment can hear the frame, the destination address is read by each receiver to see if it should bother the computer by bringing the frame on board.
Following the destination address is the source address, the 48-bit station identifier of the frame originator, the “From” address on this digital envelope.
The rest of the bits are really data for the host, at least from the network access controller’s point of view. The next sixteen bits, which the Ethernet standard calls the “type,” tell the software running on the host which protocol software should get the frame. Example values are IP (0x800), ARP (0x806), or ATA-over-Ethernet (0x88a2).
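For the bit-counters, the header layout can be sketched in a few lines of Python. The frame bytes here are made up for illustration; only the field offsets and the type values come from the text.

```python
import struct

# EtherType values mentioned in the text
ETHERTYPES = {0x0800: "IP", 0x0806: "ARP", 0x88A2: "ATA-over-Ethernet"}

def parse_header(frame: bytes):
    """Split the first 14 bytes of an Ethernet frame into its fields."""
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    dst = frame[0:6]                                  # destination address
    src = frame[6:12]                                 # source address
    (ethertype,) = struct.unpack("!H", frame[12:14])  # big-endian 16-bit type
    return dst, src, ethertype

# A made-up frame: broadcast destination, arbitrary source, IP payload.
frame = (bytes.fromhex("ffffffffffff")
         + bytes.fromhex("02aabbccddee")
         + struct.pack("!H", 0x0800)
         + b"payload bytes go here")

dst, src, ethertype = parse_header(frame)
print(dst.hex(":"), src.hex(":"), ETHERTYPES.get(ethertype, hex(ethertype)))
```

Fourteen bytes of header, and everything after them belongs to somebody else’s protocol.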
The bytes following that are the business of the protocols above the Ethernet driver in the host. This is the largest field of the frame, ranging from a minimum of 46 bytes to 9,000. For a long time the maximum value was 1,500, but faster networks made it more desirable to send more data in a frame, reducing the per-frame overhead. But I get ahead of myself.
The minimum size has to do with collision detection. The minimum length is the shortest frame that can still be on the wire, and thus still detect a collision, when the sender is on one end of the cable and the colliding station is on the other. The maximum length of an Ethernet cable is determined by this value.
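The arithmetic is worth making concrete. A minimum frame of 64 bytes (frame check sequence included) at 10 Mb/s keeps the sender talking for 51.2 microseconds, and any collision must echo back inside that window. The propagation speed below is an assumed round figure for coax, not from the text:

```python
# Back-of-the-envelope numbers behind the minimum frame size at 10 Mb/s.
bit_rate = 10_000_000                  # bits per second
min_frame_bits = 64 * 8                # 64-byte minimum frame, FCS included
slot_time = min_frame_bits / bit_rate  # sender is on the wire at least this long
print(f"slot time: {slot_time * 1e6:.1f} microseconds")

# A collision at the far end must propagate back before the sender finishes,
# so the round trip has to fit inside the slot time.  Assume signals travel
# at roughly 0.77c in the coax (an illustrative figure).
c = 299_792_458.0                      # metres per second
max_one_way = 0.77 * c * slot_time / 2
print(f"that budgets roughly {max_one_way / 1000:.1f} km of one-way cable")
```

Real 10MbE segment limits were far shorter than that raw budget, since repeaters and transceiver electronics eat much of the allowance.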
But there’s a tiny bit more for the Ethernet interface to do, a tiny bit more frame to process. When a frame is being sent, the voltage on the cable is constantly changing. Manchester encoding has a change in the voltage for every bit: negative to positive is a one, and positive to negative is a zero. So there are voltage changes constantly. When the voltage stops changing, we know that the packet has ended. (Newer Ethernet technologies use a different encoding, such as 10GbE’s 64b/66b. In these, there are symbols for start of frame and end of frame.)
As each bit of the frame arrives, it is pushed through a magic circuit, the frame check sequence logic. What’s left in the register of the magic circuit at the end of the frame is the residue of this logic. If there are no errors, if each bit reaches the receiver just as it left the sender, the residue should be a zero. This is because the last 32 bits are the value created by a similar circuit in the sender, whose register was initialized to zero at the start of the frame. When the sender is finished sending the data, she loads the residue from the register in the sending magic circuit onto the output buffer. When the receiver pushes the residue through its magic circuit, it should wind up mathematically back at zero.
If not, some bits got messed up. We don’t even really know if the frame was for us, since the bad bits might be in the 48 bits of the destination address. Residue not zero? Toss the frame.
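Python’s zlib carries the same 32-bit CRC Ethernet uses, so the residue trick can be demonstrated directly. One wrinkle: real CRC-32 initializes its register to all ones and complements the result, so a clean frame lands on a fixed nonzero constant rather than literally zero; the “known constant means no errors” idea is the same.

```python
import struct
import zlib

def append_fcs(data: bytes) -> bytes:
    # The sender runs the bits through the CRC circuit and tacks the
    # 32-bit result onto the end of the frame, least significant byte first.
    return data + struct.pack("<I", zlib.crc32(data))

def fcs_ok(frame: bytes) -> bool:
    # The receiver pushes everything, FCS included, through the same
    # circuit.  With CRC-32's init/complement conventions, the clean-frame
    # residue is the constant 0x2144DF1C instead of literally zero.
    return zlib.crc32(frame) == 0x2144DF1C

frame = append_fcs(b"a parcel of data")
print(fcs_ok(frame))                              # clean frame passes
damaged = bytes([frame[0] ^ 0x01]) + frame[1:]    # flip a single bit
print(fcs_ok(damaged))                            # damaged frame fails
```

Flip any one bit anywhere in the frame, FCS included, and the residue misses the constant; the frame gets tossed.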
We then sit idle for 12 byte times. Quiet. Every other node patiently waiting its turn on the wire. The hectic pace pauses, like some sort of network sabbath, a bit sabbatical of rest. It’s the equivalent of the stop bit in RS-232.
Then, bam! We’re back at it, some station sending a new sequence of alternating ones and zeros, sending the preamble to their message. The cycle repeats.
Such is Ethernet. Or, it was back when I first saw it, with its ten million bits per second thick cable. Today, we have iterated and refined the technology into point-to-point cables with pools of memory in the switches as the exchange media, frames loading into the RAM, waiting for a chance to exit via another port, or, in the case of a broadcast frame (or a frame for an unknown station location), all the ports.
But long before it was 10/25/40/100 GbE, there was 100 MbE. Before that was 10MbE. Before that 3Mb. And before that ... it wasn’t.
I’ve told how Ethernet came about before, but it is such a good story I can’t help telling it again. Ethernet was born of necessity, the necessity for a PhD thesis.
Long, long ago--1971
Back in the early 1970s, the great buzz in computing in Boston was the ARPANET, a defense-funded collection of what today we would call routers, connecting the cool with-it computers--with their new cool with-it timesharing systems--of about a hundred universities and defense contractors into a new thing called a packet network.
Leased 56,000-bit-per-second dedicated phone lines crisscrossed the country, terminating at small appliances called IMPs and TIPs that let the cool timesharing mainframes talk to each other. Interfaces were built to connect each timesharing system to an IMP.
Into this unique cloud of shiny new ideas, blundering mistakes, and brilliant breakthroughs came a 25-year-old Brooklyn kid who had already emerged from MIT with two undergraduate degrees, electrical engineering and industrial management. He had a job funded by Project MAC, designing and building interface cards that connected some of the cool machines MIT had to the ARPANET.
Bob Metcalfe was getting a PhD from Harvard, just up Massachusetts Avenue from MIT. For his dissertation, he designed and built an ARPANET interface at Harvard just as he had done at MIT.
Thinking his PhD was an all-but-done deal, he had even taken a job at the Xerox Palo Alto Research Center, PARC. At PARC, they were thinking way, way out of the box, working on things completely new: workstations, graphics, object-oriented programming languages. The workstations were minicomputers designed as personal computers, using bitmapped graphics instead of green letters on black screens. PARC’s workstation, the Alto, also used that new pointer thing from that Engelbart fellow just down the road at Stanford Research Institute, a thing called a “mouse.” PARC also had a refugee from Xerox corporate R&D, Gary Starkweather, who insisted he wanted to put a laser on one of their Xerox copying machines and create images on paper. No copies, mind you, but new images.
What a great place to work!
But there was trouble in Cambridge Town. Bob’s thesis committee said that building an interface for the ARPANET wasn’t research! Not research! He had already moved to Silicon Valley. He was already ensconced in the brown, grassy hills of Coyote Hill Road. He had already picked out his favorite bean bag chair in the conference room. What did they mean, "not research"?
Back in Boston, trying to sleep on a friend’s couch after a long cross-country flight from SFO, his mind racing with the unpleasant possible consequences of a thesis committee gone haywire, Bob worried about his job at PARC, a job he was already in love with. And he was jet-lagged from the trip to boot. Any wonder he couldn’t sleep?
He picked up a research paper lying on the coffee table by the sofa. It was a good bet that reading a research paper would put him to sleep. ALOHA network, huh? What’s this about? A packet network, sounds interesting. Sends the packets using radio--the project being at the University of Hawaii, radio made sense.
With every station accessing a single shared medium, a radio frequency, and each station separated by reasonably sized chunks of the Pacific Ocean, some of the stations will invariably try to send frames at the same time. How do the receiving stations know that has happened? Bit errors in the frame? Yep. Makes sense.
So, they only get about 18% of the available bandwidth, the rest of the time wasted by multiple sites stepping on each other. But if we slot the times, if we get into a rhythm where everyone knows when to start sending, we can get better use of the bandwidth, like 37%.
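Those two numbers are the classic ALOHA results. With offered load G (frame attempts per frame time), pure ALOHA delivers throughput G*e^(-2G), peaking at 1/(2e), about 18.4%; slotted ALOHA delivers G*e^(-G), peaking at 1/e, about 36.8%. A couple of lines confirm the peaks:

```python
import math

# Classic ALOHA throughput as a function of offered load G
# (frames offered per frame time).
pure    = lambda G: G * math.exp(-2 * G)   # pure ALOHA
slotted = lambda G: G * math.exp(-G)       # slotted ALOHA

# Pure ALOHA peaks at G = 0.5, slotted ALOHA at G = 1.
print(f"pure ALOHA max:    {pure(0.5):.3f}")   # ~0.184, the ~18%
print(f"slotted ALOHA max: {slotted(1.0):.3f}")  # ~0.368, the ~37%
```

Slotting the time doubles the usable bandwidth because a frame can only be stepped on by frames starting in the same slot, not by stragglers overlapping either end.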
Well, Bob thought, it would be more like 100% if somehow the sender could tell when its frame was being stepped on, if it could sense a collision. What if there were some way for the sender to listen to its own transmission and stop sending when it started to hear stuff that didn’t match what it was sending? Or better yet, what if there were some general way to know when two stations were speaking at the same time?
What if, just say, what if the medium was a copper wire? What if the frames were sent with Manchester encoding, just like tape drives use, with its up and down transitions? Have each transmitter pull the voltage down, and no matter what the data, the average voltage sits at a tell-tale level. If the voltage is pulled down, someone is talking, so wait before speaking. If two stations are talking at the same time, the voltage will get more negative still! We can sense the collision right away and stop sending!
Now Bob really couldn’t sleep. He had an idea. He had a better way to do a network in his head, a fast new kind of network for local systems, like the Alto at PARC, and Starkweather’s huge laser copier thing.
To his thesis committee he pitched the idea of a multiple access network using a single copper cable with each station attached to the one, long, thin copper wire. Each station would first check to make sure the line was free, then start talking. If, as it transmitted, it detected a collision, it immediately stopped.
It was called Carrier Sense Multiple Access with Collision Detection. Committee interested, Bob returned to Palo Alto, to the Alto personal workstations, to the bean bag chairs, and created a three-megabit version of his network idea. It was used to glue the fancy laser-hacked-onto-a-copier to the personal workstation with a mouse and bitmapped graphics. One of its first jobs was to connect the workstation with the disk storage in the mainframe down the hall in the machine room. All this using his new network technology.
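The loop Bob pitched, plus the retry policy Ethernet eventually standardized (truncated binary exponential backoff), sketches out like this. The `medium` object and its `busy`, `transmit`, and `wait_slots` methods are hypothetical stand-ins for the transceiver hardware:

```python
import random

def backoff_slots(attempt: int) -> int:
    # Truncated binary exponential backoff: after the nth collision,
    # wait a random number of slot times in [0, 2^min(n, 10) - 1].
    return random.randrange(2 ** min(attempt, 10))

def send(frame, medium, max_attempts=16):
    """A sketch of the CSMA/CD loop: listen, talk, back off on collision."""
    for attempt in range(1, max_attempts + 1):
        while medium.busy():           # carrier sense: wait for a free line
            pass
        if medium.transmit(frame):     # returns False on collision detect
            return True
        medium.wait_slots(backoff_slots(attempt))
    return False                       # give up after too many collisions
```

The randomized, doubling backoff is what keeps two colliding stations from colliding again and again in lockstep.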
And he called it Ethernet.
In a Georgia Ceiling
My first exposure to Ethernet, by then a 10-million-bits-per-second yellow snake in the ceiling, was in the mid-1980s. The coax had the cross section of a 9 mm bullet, and before I could use it, I had to connect my computer to it.
Nothing Simple About Hooking Up
I bang down the hall toward my office with my six-foot step ladder. After maneuvering my way around my desk and table, I set up the ladder and trudge up, pushing the ceiling tile back out of the way, and peer into the darkness above my head. There it is! I come down, bang the ladder around some more and then, with my Medium Attachment Unit (MAU) box and 3/32 drill in hand, ascend the ladder again.
I find the mark on the cable where I’m supposed to attach to it. There’s a mark only every eight feet. Dang. The mark’s over there. Back down the ladder, move it, back up I go with my tools. I clamp the MAU to the yellow snake, tightening it down with a screwdriver, the ground contacts piercing the shielding just under the yellow skin.
Then comes the tricky bit.
I flip over the cable, insert my homemade adaptor over the small round of yellow cable just visible in the clamp, and start drilling through the adaptor into the cable. Gently, slowly, carefully, I hold the hand drill, gently squeezing the trigger, trying to go as slow as possible. There is only one cable serving the entire company, and I’m drilling into it with a power drill! The drill cuts easily after it breaches the braided shielding on the outside, then, like butter, cuts through the soft polyethylene insulation. Easy does it. Gentleness is what’s required. Then I feel it resist me. It’s hit something! STOP! I’ve hit the copper wire in the center, a conductor less than a tenth of an inch in diameter.
I pause, pull the drill out of the adaptor, and freeze, the drill suspended in my hand as I cock my head, listening carefully for voices. Up and down the hall, I don’t hear anything alarming, only some low conversations, some of the hardware engineers talking about the matrix backplane. Very good. I don’t seem to have brought down the network for the entire floor.
If I had drilled too far and broken the copper core of the cable, every software developer sitting at their Sun workstations, all happily networking to the file server, would have let out a catalog of expletives. Attaching a new computer to the network risked one’s popularity in the lab.
I quickly drop in the vampire tap, a pointy conductor that screws into the clamp, then assemble the rest of the MAU box onto the clamp. I attach one end of the stiff, 15-conductor AUI cable to the MAU, replace the ceiling tile, and shimmy back down the ladder. I click the other end of the AUI cable into the back of my computer.
Nothing to it!
That was back in 1987. Today, I walk down the hall from my office, high atop the 100+ year old downtown Athens skyscraper--the first in Athens, seven floors in 1908--to the machine room. I go to the back of the rack with the Media Array, take an SFP+ Direct Attach copper cable and slide one end of it into the Arista switch, then snake the cable over to the back of the Array in a different rack. Clunk. Done. A thousand times the data rate with the simple click of an SFP+ unit.
Ethernet is such a natural for storage, especially today. Point-to-point links spreading out from today’s high-speed Ethernet switches are really the same technology as found in a SAS cable, or the PCIe traces on a motherboard. Some parameters are adjusted for the difference in lengths; that’s all.
I could see that coming when I designed the ATA-over-Ethernet protocol to ride directly on the Ethernet. It really is the only storage protocol that was designed specifically for Ethernet.
NFS and iSCSI ride on top of TCP/IP, which might be on top of Ethernet, but not necessarily. IP is designed for long distance, multi-hop, so TCP’s timeouts are slow, as they should be for a transcontinental, or even intercontinental, internetworking technology.
Ethernet is still the glue that holds our compute world together. Our machine rooms today contain a single machine, a hierarchy of processors, cores, and network devices, all knitted together using Ethernet as the backplane.
And our Coraid EtherDrive SAN is the natural storage for it.