The new year always brings reflection. As Coraid enters its fifth year of post-VC operation, we have done a lot.
I left the California, VC-run Coraid in 2014 with plans to create more technology that would make network storage fast, cheap, and easy to use. Little did I know that less than a year later I would find myself with my original technology back in hand, beginning the process of returning all the code to my original vision.
The technology wasn’t the only new part of my vision; the company was as well. Getting VC funding had been my fault, back in 2009. In 2014, I did quite a lot of thinking about the business of doing business. One thing I realized was that my upbringing had uniquely prepared me to make a business a success. I grew up in an entrepreneurial family, the fifth generation of folks who started and ran their own businesses.
So, back in 2014, I decided to build a company that would stick around, serving its customers well over the long haul by building good, easy, fast, and cheap technology. I wanted to build things that I would use myself, and build a company that would allow me to do that for the rest of my life.
So, the new Coraid.
If you don’t know about Coraid’s EtherDrive products, let me give you a very short explanation.
Back in 1999, I noticed that servers were just PCs that people had put onto rails and stuck into racks. Lots and lots of racks. I noticed that the disks were bolted internally to these boxes. (Today, everything is hot-inserted.) Back then you had to get out your screwdriver to change a disk.
The biggest issue was that these computers were getting faster, and the amount of storage they required was growing at a faster rate than the disks themselves.
I also noticed that, besides the ratio of server to disk being too rigid, the disks were also held captive. They could only be used by one server, and in order to move a disk and the data stored on it, one had to walk into the room and physically move it.
What was needed was network storage. Since the servers were inexpensive, network storage should be too. The medium for this storage was obvious to me: ubiquitous Ethernet.
Network storage has been around a long time. Fibre Channel was the mainframe-style technology and is still used in enterprise-style applications. What was needed was something that didn’t require a large, expensive staff to operate, allowed things like queue depth to be set by hand, didn’t cost gazillions of dollars, could scale out as much as one needed, and would allow the user to do so incrementally.
I also wanted to create something that never had to be “forklifted,” that is, one’s investment in it would never become obsolete.
The result was the ATA-over-Ethernet (AoE) protocol and the SRX and VSX EtherDrive storage appliances.
The AoE protocol sits right on Ethernet. It doesn’t use TCP or IP or anything else. It’s naturally more secure by not being routable. It can use multiple Ethernet ports on the server, the SRX, and VSX storage appliances.
By not being TCP-based, while still being reliable, it can use all available Ethernet links between the server and the storage. This means you get something called port bonding for free. If the server has four ports, it can use all of them to get to the storage.
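To make the "sits right on Ethernet" point concrete, here is a sketch of the 10-byte AoE header that follows the Ethernet header on the wire, packed per the published AoE specification (registered EtherType 0x88A2; major/minor address the shelf and slot of the target drive). The `aoe_header` helper and the example values are mine, for illustration only.

```python
import struct

AOE_ETHERTYPE = 0x88A2  # registered EtherType for ATA-over-Ethernet
AOE_VERSION = 1

def aoe_header(major, minor, command, tag, flags=0, error=0):
    """Pack the 10-byte AoE header that follows the Ethernet header.

    Layout per the public AoE spec:
      ver/flags (1) | error (1) | major (2) | minor (1) | cmd (1) | tag (4)
    major/minor are the shelf and slot address of the target drive;
    the tag lets the initiator match responses to outstanding requests.
    """
    ver_flags = (AOE_VERSION << 4) | (flags & 0x0F)
    return struct.pack("!BBHBBI", ver_flags, error, major, minor, command, tag)

# Address shelf 7, slot 0 with an ATA command message (command code 0).
hdr = aoe_header(major=7, minor=0, command=0, tag=0x1234)
```

Because there is no TCP or IP layer, that header plus the ATA command payload is the entire protocol overhead on each frame.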
The other thing you can do with AoE is utilize multiple paths to the storage. If you link some of the server’s Ethernet ports to one switch and other ports to another switch, then do the same with the storage appliance, one of the switches could go down and you could still get to your storage.
AoE is fast. With AoE, the limit is usually the disks in the storage appliance. The AoE protocol is designed to minimize the number of network packets used in the transfer.
AoE is not really new at this point. We first shipped the technology back in 2004. By 2005, drivers for it were in the Linux kernel as it comes out of Kernel.org. Today, we prefer to use commodity NICs that we reprogram to be private Ethernet ports for use with our storage. This allows us to get much better performance out of the server.
The main storage appliance is the Coraid SRX disk array, which comes in various numbers of disk bays. The SRX/h2421, for example, is perfect for an all-flash array and can be purchased for a small fraction of what the Brand X vendors charge. If one puts 2 TB Samsung drives ($320 each) into the SRX/h2421 ($1,995) and runs it for five years ($995/year software), the total comes to 48 TB at about half a penny per GB per month.
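The arithmetic behind that half-penny figure can be checked directly. The sketch below uses only the numbers quoted above, assuming the 48 TB total implies 24 bays of 2 TB drives; the variable names are illustrative.

```python
# Rough five-year cost model for the all-flash SRX/h2421 example above.
DRIVE_PRICE = 320        # 2 TB Samsung SSD, dollars each (figure from the text)
DRIVE_COUNT = 24         # inferred: 48 TB total / 2 TB per drive
CHASSIS_PRICE = 1995     # SRX/h2421 appliance
SOFTWARE_PER_YEAR = 995  # annual software cost
YEARS = 5

total_cost = DRIVE_COUNT * DRIVE_PRICE + CHASSIS_PRICE + SOFTWARE_PER_YEAR * YEARS
capacity_gb = DRIVE_COUNT * 2 * 1000  # 48 TB, in decimal GB
cents_per_gb_month = total_cost * 100 / (capacity_gb * YEARS * 12)

print(f"${total_cost} for {capacity_gb // 1000} TB "
      f"~ {cents_per_gb_month:.2f} cents per GB per month")
```

Running it gives a total of $14,650 for 48 TB, which works out to roughly 0.51 cents per GB per month over the five years.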
And that’s for flash! You should price it with rotating media.
Also, one doesn’t have to buy all the disks at once. They can be bought as needed, spread over a number of months or even years. When an array is full, just buy another one.
There are a lot of neat things to talk about with the SRX array. It scales out with no performance degradation. Each new SRX brings more memory, network bandwidth, and processing power to the pool. Each SRX can have up to 120 Gb of bandwidth. You can have up to 60,000 nodes in a single network, and the 60,000th is as performant and affordable as the first.
But, one of the best things is that Coraid appliances are never obsolete. One can rotate drives out as they age, replacing them with new drives a few at a time. One always gets the benefit of performance improvements and new features of the ongoing SRX software development just by keeping support active.
Users can upgrade their hardware and move their old disks and software license to the new hardware. Coraid customers with the old SRX 3500, for example, can pay only $1,995 and get all new hardware, move their disks over, and have a hardware refresh.
This is the way I would like to use technology.
So, into our fifth year, the new Coraid is back at full force. We’re here to stay.