Blog Archives

Kaminario who?

The name “Kaminario” intrigues me and I don’t know what it means. It has a nice roll off the tongue, until you say it a few times fast and your tongue gets twisted in a jiffy.

Kaminario is one of the few prominent startups in the all-flash storage space, having secured a USD$15 million Series C round in 2011 from big-gun VCs Sequoia and Globespan Capital Partners. That brought their total funding to USD$34 million, and also brought them the attention of the storage market.

I am beginning my research into their technology and their product line, the K2, to see why they are special. I am looking for the angle that differentiates them, how they position themselves in the market, and why they deserved that Series C funding.

Kaminario was founded in 2008, with headquarters in Boston, Massachusetts. They have a strong R&D facility in Israel and, looking at their management lineup, they are headed by several personalities with an Israeli background.

All this shouldn’t be a problem to many, except that Malaysia doesn’t recognize Israel diplomatically and some organizations here, especially the government, might have an issue with the Israeli link. But then again, we have a lot of hypocrites in Malaysian politics and I am not going there in my blog. It’s a waste of my time.

The key technology is Kaminario’s K2 SPEAR Architecture, and it defines a fundamental method to store and retrieve performance-sensitive data. Yes, since this is an all-Flash storage solution, performance numbers, speeds and feeds are the “weapons” to influence prospects with high performance requirements. Kaminario touts that their storage solution scales up to 1.5 million IOPS and 16GB/sec of throughput, and indeed those are fantastic numbers when you compare them with conventional HDD-based storage platforms. But nowadays, if you are in the all-Flash game, everyone else is touting similar performance numbers as well. So, it is no biggie.

The secret sauce of the Kaminario technology is, of course, its architecture – SPEAR. SPEAR stands for Scale-out Performance Storage Architecture. While Kaminario states that their hardware is pretty much off-the-shelf, open industry standard, under the covers the SPEAR architecture could have incorporated some special, proprietary design in its hardware to maximize the SPEAR technology. Hence, I believe there is a reason why Kaminario chose a blade-based system for the enclosures in its rack. Here’s a look at their hardware offering:

Using blades is a good idea because blades offer integrated wiring, consolidation, simple plug-and-play, ease of support, N+1 availability and so on. But this can also put Kaminario in an all-blades-or-nothing position. This is something some customers in Malaysia might have to get used to, because many would prefer their own racks. I could be wrong, and let’s hope I am.

Each enclosure houses 16 blades, with N+1 availability. As I go through Kaminario’s architecture, the word availability keeps getting louder, and this could be something that differentiates Kaminario from the rest. Yes, Kaminario has the performance numbers, but Kaminario also has a highly available (are we talking 6 nines?) architecture inherent within SPEAR. Of course, I have not done enough to compare Kaminario with the rest yet, but right now, availability isn’t something that most all-Flash startups trumpet loudly. I could be wrong, but the message will become clearer when I go through my list of all-Flash players – SolidFire, Pure Storage, Virident, Violin Memory and Texas Memory Systems.

Each of the blades can be either an ioDirector or a DataNode, and they are interconnected internally with 1/10 Gigabit ports, with at least one blade acting as a standby to the rest in a logical group of production blades. The 10-Gigabit connections are used for “data passing” between the blades, both for load balancing and for spreading out the availability function for the data. The Gigabit connection is used for management.

In addition to that, there is also a Fibre Channel piece fronting the K2 to the hosts in the SAN. Yes, this is an FC-SAN storage solution, but since there was no mention of iSCSI, the IP-SAN capability is likely not there (yet).

 Here’s a look at the Kaminario SPEAR architecture:

The 2 key components are the ioDirector and the DataNode. A blade can either have a dedicated personality (ioDirector or DataNode) or carry both personalities in one blade. The minimum configuration is 2 blades with 2 ioDirectors, for redundancy.

The ioDirector is the front-facing piece. It presents the K2 block-based LUNs to the SAN and has the intelligence to dynamically load balance both Reads and Writes while optimizing its resource utilization. The DataNode plays the role of fetching, storing and backup, and is pretty much the back-end worker.

With this description, there are 2 layers in the SPEAR architecture. And interestingly, while I mentioned that Kaminario is an all-Flash storage player, it actually has HDDs as well. The HDDs do not participate in primary data serving; they serve as backup containers for the primary data held in the SSDs, which can be MLC Flash or DRAM. This back-end backup layer of HDDs is what I meant earlier about availability. Kaminario is adding data availability as one of its differentiating features.
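
To make that two-layer idea concrete, here is a minimal sketch of my own (not Kaminario’s actual code; the class and method names are made up) showing a write path where the primary copy lands on the fast SSD/DRAM layer and a backup copy is queued to an HDD layer in the background:

```python
import queue
import threading

# Hypothetical two-layer store: primary writes land on fast media (SSD/DRAM),
# while a background worker copies them to an HDD-backed backup layer.
class TwoLayerStore:
    def __init__(self):
        self.primary = {}             # stands in for the SSD/DRAM layer
        self.backup = {}              # stands in for the HDD backup layer
        self._queue = queue.Queue()
        threading.Thread(target=self._backup_worker, daemon=True).start()

    def write(self, lba, data):
        self.primary[lba] = data      # acknowledge once the fast layer has it
        self._queue.put((lba, data))  # the backup copy is made asynchronously

    def _backup_worker(self):
        while True:
            lba, data = self._queue.get()
            self.backup[lba] = data

    def read(self, lba):
        # reads are served from the primary layer; the backup is a fallback
        return self.primary.get(lba, self.backup.get(lba))
```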

That’s the hardware layout of SPEAR, but the more important piece is its software, the SPEAR OS. It has 3 patent-pending capabilities, with not-so-cool names (which are trademarked).

  1. Automated Data Distribution
  2. Intelligent Parallel I/O Processing
  3. Self Healing Data Availability

The Automated Data Distribution of the SPEAR OS acts as a balancer. It dynamically and randomly (in a random-equilibrium fashion, I think) spreads the data over the storage capacity for efficiency, SSD longevity and, of course, balanced performance.
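
I have no visibility into the actual algorithm, but a minimal sketch of the general idea – random placement that still keeps the DataNodes evenly loaded (the classic “power of two choices” trick, which is my assumption, not Kaminario’s stated method) – could look like this:

```python
import random

# Hypothetical placement: pick two DataNodes at random and write the chunk
# to the less-utilized one, which keeps capacity and wear roughly balanced.
def place_chunk(node_usage):
    a, b = random.sample(list(node_usage), 2)
    target = a if node_usage[a] <= node_usage[b] else b
    node_usage[target] += 1
    return target

node_usage = {"datanode-1": 0, "datanode-2": 0, "datanode-3": 0, "datanode-4": 0}
placement = {chunk_id: place_chunk(node_usage) for chunk_id in range(1000)}
print(node_usage)  # the per-node counts stay close despite the random choices
```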

The second capability is Intelligent Parallel I/O Processing. The K2 architecture is essentially a storage grid. The internal 10-Gigabit interconnects tie all the nodes (ioDirectors and DataNodes) together in a grid-like fashion for the best possible intra-node communication. I/O Read and Write requests are parallelized across the nodes in the storage grid, giving the best average response and service times.
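
Again as an illustration only (the function names and the thread-pool approach are my assumptions, not the SPEAR OS internals), the benefit of parallelizing I/O across a grid can be sketched like this: a request is fanned out to every owning node at once, so the service time tracks the slowest node rather than the sum of all of them.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder for an RPC to a DataNode over the 10GbE interconnect.
def read_chunk(node, chunk_id):
    return f"{node}:{chunk_id}"

# Hypothetical fan-out: all chunk reads are issued in parallel across nodes.
def parallel_read(placement):
    with ThreadPoolExecutor(max_workers=16) as pool:
        futures = [pool.submit(read_chunk, node, chunk_id)
                   for chunk_id, node in placement.items()]
        return [f.result() for f in futures]
```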

Last but not least is Self Healing Data Availability, a capability to dynamically reconfigure access to the data in the event of node failure(s). Kaminario claims no single point of failure, which is something I am very interested to verify if given a chance to assess the storage a bit deeper. So far, that’s all the information I have been able to get.
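
Here is how I imagine the failover behaves, as a hedged sketch only (the replica map and the function are hypothetical, not Kaminario’s design): a lookup falls through from the primary owner of a piece of data to the node holding its copy whenever the primary is down.

```python
# Hypothetical self-healing lookup: if the primary owner of a chunk is down,
# the request is transparently redirected to the node holding its replica.
def locate(chunk_id, primary_map, replica_map, alive_nodes):
    node = primary_map[chunk_id]
    if node in alive_nodes:
        return node
    fallback = replica_map[chunk_id]
    if fallback in alive_nodes:
        return fallback  # no single point of failure for this chunk
    raise RuntimeError("chunk unavailable: both copies are offline")
```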

The Kaminario K2 product line comes in 3 models – D, F, and H.

D is for DRAM only and F is for Flash MLC only. The H model is a combination of both Flash and DRAM SSDs. Here’s how Kaminario addresses each of the 3 models:

Kaminario is one of the early all-Flash storage systems that has gained recognition in 2011. They have been named a finalist in both Storage Magazine and SearchStorage Storage Product of the Year competitions for 2011. This not only endorses a brand new market for solid state storage systems but validates an entirely new category in the storage networking arena.

Kaminario could be one to watch in 2012, along with the others that I plan to review in the coming weeks. The battle for Flash racks is coming!

BTW, Dell is a reseller of Kaminario.


Battle of flash racks coming soon

The battle is probably already here. It has just begun for rack-mounted flash-based or DRAM-based (or both) storage systems.

We have read in the news about the launch of EMC’s Project Lightning, and I wrote about it. EMC is already stirring up the competition, aiming its guns at FusionIO. Here’s a slide from EMC comparing their VFCache with FusionIO.

Not to be outdone, NetApp set out to douse the razzmatazz of EMC’s Lightning, announcing the future availability of their server-side flash software (no PCIe card), which will work with the major host-based/server-side PCIe Flash cards (FusionIO, heads up). Ah, in Sun Tzu’s Art of War, this is called helping your buddy fight the bigger enemy.

NetApp threw some FUD into the battle zone, claiming that EMC VFCache only supports 300GB while the NetApp flash software will support 2TB, NetApp multiprotocol, and VMware’s VMotion, DRS and HA (something that VFCache does not support right now).

The battle of PCIe has begun.

The next battle will be for the rackmounted flash storage systems or appliance. EMC is following it up with Project Thunder (because thunder comes after lightning), which is a flash-based storage system or appliance. Here’s a look at EMC’s preliminary information on Project Thunder.

And here’s how EMC is positioning different storage tiers in the following diagram below (courtesy of VirtualGeek), being glued together by EMC FAST (Fully Automated Storage Tiering) technology.

But EMC is not alone; there are already several prominent start-ups out there offering flash-based, rackmount storage systems.

In the battle ring, there is Kaminario K2 with its SPEAR (Scale-out Performance Storage Architecture), Violin Memory with its Violin Switched Memory (VXM) architecture, Pure Storage’s Purity Operating Environment and SolidFire’s Element OS, just to name a few. Of course, we should never discount the granddaddy of all flash-based storage – Texas Memory Systems’ RamSan.

The whole motion of competition in this new arena is starting all over again and it’s exciting for me. There is so much to learn about these newer, more innovative architectures, and I intend to share more about these players in the coming blog entries. It is time to take notice because SSDs are dropping in price, FAST! And in 2012, I strongly believe that this is the next battle for the storage players, both established and start-ups.

Let the battle begin!

 

All-SSD storage array? There’s more than meets the eye at Pure Storage

Wow, after an entire week off with the holidays, I am back and excited about the many happenings in the storage world.

One of the more prominent pieces of news was the announcement of Pure Storage launching its enterprise storage array built entirely with flash-based solid state drives. In addition to that, there were other start-ups also offering SSD storage arrays. The likes of Nimbus Data, Avere and Violin Memory Systems all made the news, as well as the granddaddy of solid state storage arrays, Texas Memory Systems.

The first thing that came to my mind was, “Wow, this is great because this will push down the $/GB of SSDs closer to the range of $/GB for spinning disks”. But then skepticism crept in and I thought, “Do we really need an entire enterprise storage array of SSDs? That’s going to cost the world”.

At the same time, we in the storage industry know that no two pieces of data are alike. They can be large, small, random, sequential, accessed frequently or infrequently and so on. It is obviously better to tier the storage, using SSDs for Tier 0, 10K/15K RPM spinning HDDs for Tier 1, SATA for Tier 2 and perhaps tape for the archive tier. I was already tempted to write my pessimism on Pure Storage when something interesting caught my attention.

Besides the usual marketing jive of sub-millisecond, predictable latency, green messaging, global inline deduplication and compression, and built-in data integrity in its Purity Operating Environment (POE), I was very surprised to find the team behind Pure Storage. Here’s their line-up:

  • Scott Dietzen, CEO – starting from principal technologist of Transarc (sold to IBM), principal architect of WebLogic (sold to BEA Systems), CTO of BEA (sold to Oracle), CTO of Zimbra (sold to Yahoo! and then to VMware)
  • John “Coz” Colgrove, Founder & CTO – Veritas Fellow, CTO of Symantec Data Management group, principal architect of Veritas Volume Manager (VxVM) and Veritas File System (VxFS) and holder of 70 patents
  • John Hayes, Founder & Chief Architect – formerly of Yahoo!’s office of the Chief Technologist
  • Bob Wood, VP of Engineering – formerly NetApp’s VP of File System Engineering
  • Michael Cornwell, Director of Technology & Strategy – formerly the lead technologist of Sun Microsystems’ Sun Storage F5100 Flash Array and also Quantum’s storage architect for their storage telemetry, VTL and DXi solutions
  • Ko Yamamoto, VP of System Engineering – previously NetApp’s director of platform engineering, Quantum DXi director of hardware engineering, and also key contributor to 4-generations of Tandem NonStop technology

In addition to that, there are 3 key individual investors worth mentioning:

  • Diane Greene – Co-founder and former CEO of VMware
  • Dr. Mendel Rosenblum – Co-founder and former Chief Scientist of VMware
  • Frank Slootman – formerly CEO of Data Domain (acquired by EMC)

All these industry big guns are flocking to Pure Storage for a reason, and it looks to me that Pure Storage ain’t your ordinary, run-of-the-mill enterprise storage company. There’s definitely more than meets the eye.

On top of the enterprise storage array platform is Pure Storage’s Purity Operating Environment (POE). POE focuses on 3 key storage services:

  • High Performance Data Reduction
  • Mission Critical Reliability
  • Predictable Sub-millisecond Performance

After going through the deep-dive videos by Pure Storage’s CTO, John Colgrove, it is clear they are very much banking the success of their solution on SSDs. Everything they have done is based on SSDs. For example, in order to achieve a larger usable capacity as well as a much cheaper $/GB, they use data reduction techniques – global deduplication, high compression and fine-grained thin provisioning at 512-byte granularity. By trading off IOPS (which SSDs have plenty of, since they are several times faster than conventional spinning disks), a larger usable capacity is achieved.
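
To illustrate what trading compute for capacity looks like, here is a generic sketch of inline deduplication plus compression at 512-byte granularity – not Pure Storage’s actual POE code, and all the names are made up:

```python
import hashlib
import zlib

# Hypothetical inline data reduction: fixed 512-byte blocks are fingerprinted,
# duplicates are stored only once, and unique blocks are compressed before
# they land on flash. CPU cycles and IOPS are traded for usable capacity.
class ReducingStore:
    def __init__(self, block_size=512):
        self.block_size = block_size
        self.blocks = {}    # fingerprint -> compressed block
        self.refcount = {}  # fingerprint -> number of references

    def write(self, data):
        fingerprints = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:
                self.blocks[fp] = zlib.compress(block)
            self.refcount[fp] = self.refcount.get(fp, 0) + 1
            fingerprints.append(fp)
        return fingerprints  # the logical volume keeps only fingerprints

    def read(self, fingerprints):
        return b"".join(zlib.decompress(self.blocks[fp]) for fp in fingerprints)
```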

In their RAID 3D, they have also incorporated several high-reliability techniques and data integrity algorithms that are specific to SSDs. One point mentioned was that traditional RAID, and especially the parity-based RAID levels, was originally designed to protect against an entire device failure. In SSDs, however, the failure does not necessarily occur across the entire device. Because of the way SSDs are built, the failure hotspots tend to happen at the much more granular bit level of the SSD. The erase-then-write technique inherent in NAND Flash SSDs causes the bit error rate (BER) of the device to go up as it ages. Therefore, it is more likely to get a read/write error from within the SSD’s memory itself than to have the entire SSD device fail. Pure Storage’s RAID 3D is meant to address such occurrences of bit errors.
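
RAID 3D itself is not public code, so the sketch below only shows the general principle as I understand it: every block carries a checksum, and when a read fails verification the block is rebuilt from the surviving blocks and parity in its stripe, instead of failing the whole drive. The simple XOR-parity scheme here is my own simplification, not Pure Storage’s implementation.

```python
import hashlib
from functools import reduce

# Rebuild a block as the XOR of the other blocks in the stripe plus parity.
def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Hypothetical bit-error handling: a corrupted block is detected by its
# checksum and repaired in place, rather than declaring the SSD dead.
def read_block(stripe, checksums, parity, idx):
    block = stripe[idx]
    if hashlib.sha256(block).digest() == checksums[idx]:
        return block
    survivors = [b for i, b in enumerate(stripe) if i != idx]
    return xor_blocks(survivors + [parity])
```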

I spoke a bit about storage tiering earlier in this article, because every corporation employs storage tiering to be financially responsible. However, John Colgrove’s argument was: why tier the storage when there are plenty of IOPS and the $/GB is comparable to spinning disks? That is true when the $/GB of SSDs can match the $/GB of spinning disks. Factors we must also take into account are the rack-space savings from the smaller profile of SSDs and the power-cost savings of SSDs versus conventional HDD-based enterprise storage arrays. Taken in their entirety, there are strong indications that the $/GB of SSD-based systems can match, or perhaps come in lower than, the $/GB of HDD-based systems. And since the IOPS requirements of present-day applications have not demanded super-high IOPS and multi-core processing is cheap, there’s plenty of head-room for Pure Storage and other similar enterprise storage array companies to grow.
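
A crude back-of-envelope way to think about that comparison (every number below is a placeholder I made up purely to show the arithmetic, not a vendor figure):

```python
# Hypothetical effective $/GB: data reduction stretches raw flash capacity,
# while power, cooling and rack space add to the per-GB cost of either side.
def effective_cost_per_gb(raw_price_per_gb, data_reduction_ratio,
                          power_and_rack_cost_per_gb):
    return raw_price_per_gb / data_reduction_ratio + power_and_rack_cost_per_gb

ssd = effective_cost_per_gb(raw_price_per_gb=10.0, data_reduction_ratio=5.0,
                            power_and_rack_cost_per_gb=0.5)
hdd = effective_cost_per_gb(raw_price_per_gb=3.0, data_reduction_ratio=1.0,
                            power_and_rack_cost_per_gb=2.0)
print(ssd, hdd)  # with enough data reduction, the SSD figure closes the gap
```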

The tides are changing for the storage industry and it is good to see a start-up like Pure Storage boldly coming forth to announce its backing for SSDs. It’s good for the consumer and good for the industry. But more importantly, they are driving innovations that rethink how we build storage arrays. I am looking forward to more things to come.

Solid State Drives … are they reliable?

There have been a lot of questions about Solid State Drives (SSDs), aka Enterprise Flash Drives (EFDs) as some vendors call them. Are they less reliable than our 10K or 15K RPM hard disk drives (HDDs)? I was asked this question on stage when I was presenting the topic of Green Storage 3 weeks ago.

Well, the usual answer from the typical techie is … “It depends”.

We all fear the unknown, and given the limited knowledge we have about SSDs (they are fairly new in the enterprise storage market), we tend to be drawn more to the negatives than the positives of what SSDs are and what they can be. I, for one, believe that SSDs have more positives, and over time we will grow to accept that this is all part of the IT evolution. IT has always evolved into something better, stronger, faster, more reliable and so on. As famously quoted by Jeff Goldblum’s character Dr. Ian Malcolm in the first Jurassic Park movie, “Life finds a way …”, and IT will always find a way to be just that.

SSDs are typically categorized into MLCs (multi-level cells) and SLCs (single-level cells). They typically have a predictable life expectancy ranging from tens of thousands of writes to more than a million writes per drive. This, by no means, is a measure of the reliability of SSDs versus HDDs. However, SSD controllers and drives employ various techniques to enhance the durability of the drives. A common method is to balance I/O accesses across the disk blocks, adapting to the I/O usage patterns, which can prolong the lifespan of the blocks (and subsequently the drive itself) and also ensure the performance of the drive does not lag, since the I/O is “spread out” across the drive. This is known as the “wear-leveling” algorithm.
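
As a minimal sketch of what a wear-leveling allocator does (a generic illustration, not any particular controller’s algorithm), new writes simply go to the free block with the lowest erase count, so no single block wears out long before the rest:

```python
# Hypothetical wear-leveling allocator for an SSD's flash blocks.
class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_count = [0] * num_blocks
        self.free_blocks = set(range(num_blocks))

    def allocate(self):
        # always hand out the least-worn free block
        block = min(self.free_blocks, key=lambda b: self.erase_count[b])
        self.free_blocks.remove(block)
        return block

    def erase(self, block):
        # erasing a block wears it a little and returns it to the free pool
        self.erase_count[block] += 1
        self.free_blocks.add(block)
```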

Most SSDs proposed by enterprise storage vendors are MLCs, to meet the market demand on price per IOPS and $/GB, because SLCs are definitely more expensive for their higher durability. MLCs also have a higher BER (bit error rate); it is commonly cited that MLCs see about 1 bit error per 10,000 writes while SLCs see about 1 per 100,000 writes.

But the advantages of SSDs clearly outweigh those of HDDs. Fast access (much lower latency) is one of the main advantages. Higher IOPS is another. SSDs can provide from several thousand IOPS to more than 1 million IOPS, whereas a typical 7,200 RPM SATA drive delivers less than 120 IOPS and a 15,000 RPM Fibre Channel or SAS drive ranges from 130-200 IOPS. That IOPS advantage is definitely a vast differentiator when comparing SSDs and HDDs.
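
To put the IOPS gap into perspective, here is some rough arithmetic using the ballpark figures above (the 50,000-IOPS SSD is my own assumption, somewhere within the range quoted):

```python
import math

# How many spinning drives would it take to match one SSD on IOPS alone?
def drives_needed(target_iops, iops_per_drive):
    return math.ceil(target_iops / iops_per_drive)

print(drives_needed(50000, 120))  # ~417 x 7,200 RPM SATA drives
print(drives_needed(50000, 200))  # 250 x 15K RPM FC/SAS drives
```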

We are also seeing both drive-format and card-format SSDs in the market. The drive-format types typically come in 2.5″ and 3.5″ profiles and tend to fit into enterprise storage systems as “disk drives”. They are known for providing capacity. On the other hand, there are the card-format SSDs, which fit into a PCIe slot in the host systems. These tend to address the performance requirements of systems and applications. The well-known PCIe vendors are Fusion-IO, which plays in the high-end performance market, and NetApp, which peddles the PAM (Performance Acceleration Module) card in its filers. The PAM card has since been renamed FlashCache. Rumour has it that EMC will be coming out with a similar solution soon.

Another thing to note is that SSDs can be read-biased or write-biased. Most SSDs in the market tend to be read-biased, published with high read IOPS rather than write IOPS. Therefore, we have to be prudent and know what is out there. This means that some solutions, such as NetApp FlashCache, are more suitable for read-heavy I/O rather than write-heavy I/O. FlashCache still addresses a large segment of the enterprise market because most applications are heavier on reads than writes.

SSDs have been positioned as the Tier 0 layer in the Automated Storage Tiering segment of enterprise storage. Vendors such as Dell Compellent, HP 3PAR and also EMC (with FAST v2) position themselves with enhanced tiering techniques to automate LUN and sub-LUN tiering, and customers have been lapping up this feature like little puppies.
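
The mechanics of automated sub-LUN tiering can be sketched very simply (a generic illustration, not Compellent’s, 3PAR’s or EMC’s actual implementation; the threshold and names are invented): extents whose access counts cross a threshold get promoted to the SSD tier, while the rest stay on spinning disks.

```python
# Hypothetical sub-LUN tiering pass: hot extents go to Tier 0 (SSD),
# cold extents stay on (or are demoted to) the HDD tier.
def retier(extent_access_counts, hot_threshold=1000):
    placement = {}
    for extent, hits in extent_access_counts.items():
        placement[extent] = "tier0-ssd" if hits >= hot_threshold else "tier1-hdd"
    return placement

print(retier({"extent-01": 25000, "extent-02": 40, "extent-03": 3200}))
```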

However, an up-and-coming segment for SSD usage is positioning SSDs as an extended read or write cache to the existing memory of the systems. NetApp’s FlashCache is a PCIe solution that is basically an extended read cache. An interesting feature of Oracle Solaris ZFS, called the Hybrid Storage Pool, allows the creation of read and write caches using SSDs. The Sun fellas even came up with cool names – ReadZilla and LogZilla – for these Hybrid Storage Pool features.

Basically, I have poured out what I know about SSDs (so far) and I intend to learn more. SNIA (Storage Networking Industry Association) has a Technical Working Group for Solid State Storage, and I advise readers to check it out.