
Falconstor – soaring to 7th heaven

I was invited to the FalconStor version 7.0 media launch this morning at Sunway Resort Hotel.

I must admit that I am a fan of FalconStor from a business perspective because they have nifty solutions. Many big boys have OEMed FalconStor's VTL solutions, such as EMC with its CDL (CLARiiON Disk Library) and Sun Microsystems with its virtual tape library. Things have been changing. There are still OEM partnerships with HDS (with the FalconStor VTL and FDS solutions), HP (with the FalconStor NSS solution) and a few others, but FalconStor has been taking a more aggressive stance with their new business model. They are definitely more direct in their approach, and hence it is high time we in the industry recognized FalconStor's prowess.

Today's launch is for the FalconStor version 7.0 suite of data recovery and storage enhancement solutions. Note that while the topic of their solutions was data protection, I used "data recovery" simply because the true objective of these solutions is data recovery, doing what matters most to the business – RECOVERY.

The FalconStor version 7.0 family of products is divided into 3 pillars:

  • Storage Virtualization – with Falconstor Network Storage Server (NSS)
  • Backup & Recovery – with Falconstor Continuous Data Protector (CDP)
  • Deduplication – with Falconstor Virtual Tape Library (VTL) and File-Interface Deduplication System (FDS)

NSS virtualizes heterogeneous storage platforms and sits between the application servers (physical or virtualized) and the storage. It simplifies disparate storage platforms by consolidating volumes and provides features such as thin provisioning and snapshots. In the new version, NSS supports up to 1,000 snapshots per volume, up from the previous limit of 255. That is almost a 4x increase, as the demand for data protection is greater than ever. This allows the protection granularity to be in the minutes, well within the RPO (Recovery Point Objective) requirements of the most demanding customers.
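
To put the jump from 255 to 1,000 snapshots in perspective, here is a quick back-of-envelope sketch (the 7-day retention window is my own assumption, not a FalconStor figure) of the finest snapshot interval, and hence the worst-case RPO, you get when snapshots are spread evenly across the retention window:

```python
# Worst-case RPO if snapshots are spread evenly across a retention window.
# The 7-day window is an assumed example, not a FalconStor specification.

def min_snapshot_interval_minutes(retention_days: float, max_snapshots: int) -> float:
    """Smallest achievable interval between snapshots kept for the whole window."""
    return retention_days * 24 * 60 / max_snapshots

for limit in (255, 1000):
    interval = min_snapshot_interval_minutes(retention_days=7, max_snapshots=limit)
    print(f"{limit:>5} snapshots over 7 days -> one every {interval:.1f} minutes")

# ->   255 snapshots over 7 days: one every ~39.5 minutes
# ->  1000 snapshots over 7 days: one every ~10.1 minutes
```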

NSS also replicates the snapshots to a secondary NSS platform at a DR site, extending the company's data resiliency and improving the business continuance factor for the organization.

With a revamped algorithm in version 7.0, the MicroScan technology used in replication is now more potent and delivers higher performance. For the uninformed, MicroScan, as quoted in the datasheet, is:

"MicroScan™, a patented FalconStor technology, minimizes the amount of data transmitted by eliminating redundancies at the application and file system layers. Rather than arbitrarily transmitting entire blocks or pages (as is typical of other replication solutions), MicroScan technology maps, identifies, and transmits only unique disk drive sectors (512 bytes), reducing network traffic by as much as 95%, in turn reducing remote bandwidth requirements."
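
To illustrate the idea (a conceptual sketch of my own in Python, not FalconStor's patented implementation), sector-level replication compares the volume at 512-byte granularity and ships only the sectors that actually changed, rather than whole blocks or pages:

```python
SECTOR_SIZE = 512  # bytes, as per the MicroScan description above

def changed_sectors(old_image: bytes, new_image: bytes):
    """Yield (sector_number, data) for every 512-byte sector that differs.

    Conceptual sketch only: a real product would track dirty sectors at the
    volume layer instead of re-reading and comparing whole images.
    """
    assert len(old_image) == len(new_image)
    for offset in range(0, len(new_image), SECTOR_SIZE):
        old = old_image[offset:offset + SECTOR_SIZE]
        new = new_image[offset:offset + SECTOR_SIZE]
        if old != new:
            yield offset // SECTOR_SIZE, new

# Example: an 8 KB "volume" where only one sector changed -> only 512 bytes
# (plus metadata) cross the wire instead of the whole 8 KB.
before = bytes(8192)
after = bytearray(before)
after[2048:2056] = b"modified"
deltas = list(changed_sectors(before, bytes(after)))
print(f"{len(deltas)} of {8192 // SECTOR_SIZE} sectors need replicating")  # 1 of 16
```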

Another very strong feature of the NSS is the RecoverTrac, which is an automated DR technology. In business, business continuity and disaster recovery usually go hand-in-hand. Unfortunately, triggering either BC or DR or both is an expensive and resource-consuming exercise. But organizations have to prepare and therefore, a proper DR process must be tested and tested again.

I am a certified Business Continuity Planner, so I am fully aware of the beauty RecoverTrac brings to the organization. The ability to run non-intrusive, simulated DR tests and find the weak points of recovery is crucial, and RecoverTrac brings that confidence in DR testing to the table. Furthermore, well-tested, automated DR processes also eliminate human error during recovery. RecoverTrac also has the ability to track the logical relationships between different applications and computing resources, making this technology an invaluable tool in the DR coordinator's arsenal.

The diagram below shows the NSS solution:

And NSS is touted as the one true any-storage-platform-to-any-storage-platform, any-protocol replication solution. Most vendors offer either FC or iSCSI or NAS protocols, but I believe that so far, only FalconStor offers all protocols in one solution.

Item #2 on the upgrade list is FalconStor's CDP solution. Continuous Data Protection (CDP) is a very interesting area in data protection. CDP provides a near-zero RTO/RPO solution on disk, and yet not many people are aware of the power of CDP.

About 5-6 years ago, CDP was hot and there were many start-ups in this area. Companies such as Kashya (bought by EMC to become RecoverPoint), Mendocino, Revivio (gobbled up by Symantec) and StoneFly have either gone belly up or been gobbled up by the bigger boys in the industry. Only a few remain, and FalconStor CDP is one of the true survivors in this area.

CDP should be given more credit because there is always demand for very granular data protection. In fact, I sincerely believe that CDP, snapshots and snapshot replication are the real flagships of data protection today and in the future, because data protection using the traditional backup method, in a periodic and less frequent manner, is no longer adequate. And the fact that backup is generating more and more data to keep is truly not helping.

FalconStor CDP includes the HyperTrac™ Backup Accelerator (HyperTrac), which works in conjunction with FalconStor Continuous Data Protector (CDP) and FalconStor Network Storage Server (NSS) to increase tape backup speed, eliminate backup windows, and offload processing from application servers. A quick glimpse of HyperTrac technology is shown below:

In the Deduplication pillar, there were upgrades to both FalconStor VTL and FalconStor FDS. As I said earlier, CDP, snapshots and replication of snapshots are already becoming the data protection of this new generation of storage solutions. Coupled with deduplication, data protection is made more significant because it is far smarter to keep just one copy of the same old files than to store them over and over again.

FalconStor File-Interface Deduplication System (FDS) addresses the requirement to store data more effectively, efficiently and economically. Its Single Instance Repository (SIR) technology has now been enhanced into a global deduplication repository, giving it the ability to truly store a single copy of each object. Previously, FDS was not able to recognize duplicated objects on a different controller. FDS has also improved its algorithms, driving performance up to 30TB/hour and delivering a higher deduplication ratio.
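
The single-instance idea itself is simple enough to sketch; here is a toy content-addressed store in Python (my own illustration of the principle, not FalconStor's SIR internals):

```python
import hashlib

class SingleInstanceStore:
    """Toy single-instance repository: identical objects are stored only once.

    Illustration only -- FalconStor SIR operates at far larger scale and, in
    version 7.0, deduplicates globally across controllers, but the principle
    of keeping one copy per unique fingerprint is the same.
    """

    def __init__(self):
        self._objects = {}   # fingerprint -> object data (stored once)
        self._refcount = {}  # fingerprint -> number of logical references

    def put(self, data: bytes) -> str:
        fingerprint = hashlib.sha256(data).hexdigest()
        if fingerprint not in self._objects:        # only new content consumes space
            self._objects[fingerprint] = data
        self._refcount[fingerprint] = self._refcount.get(fingerprint, 0) + 1
        return fingerprint

    def get(self, fingerprint: str) -> bytes:
        return self._objects[fingerprint]

store = SingleInstanceStore()
ref1 = store.put(b"same old quarterly report")
ref2 = store.put(b"same old quarterly report")   # duplicate: no extra space used
print(ref1 == ref2, len(store._objects))          # True 1
```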

In addition to the NAS interface, the FDS solution now has tighter integration with the Symantec OpenStorage (OST) protocol.

The FalconStor VTL is widely OEMed by many partners and remains one of the most popular VTL solutions in the market. The VTL has also been enhanced significantly in this upgrade and, not surprisingly, it is strengthened by its near-seamless integration with the other solutions in the stable. The VTL solution now supports up to 1 petabyte of usable capacity.

FalconStor has always been very focused on the backup and data recovery space and has fared favourably with Gartner. In January 2011, Gartner released its Magic Quadrant for Enterprise Disk-Based Backup/Recovery, and FalconStor was positioned as one of the Visionaries in this space. Below is the magic quadrant:

As their business model changes to a more direct approach, it won't be long before you see FalconStor move into the Leaders quadrant. They will be soaring, like a Falcon.


Performance benchmarks – the games that we play

First of all, congratulations to NetApp for beating EMC Isilon in the latest SPECsfs2008 benchmark for NFS IOPS. The news is everywhere, and here's one of the reports.

EMC Isilon was blowing its horn several months ago when it hit 1,112,705 IOPS from a 140-node S200 cluster with 3,360 disk drives and an overall response time of 2.54 msecs. Last week, NetApp became top dog, pounding its chest with 1,512,784 IOPS on a 24-node FAS6240 cluster with an overall response time of 1.53 msecs. There were 1,728 450GB, 15,000 RPM disk drives, and the FAS6240s were fitted with Flash Cache.

And with each benchmark that you and I have seen before and will see again, every storage vendor tries to best the others, and when they do, the horns blare, the fireworks come out and they pound their chests like Tarzan, saying "Who's your daddy?" The euphoria usually doesn't last long, as performance records are broken all the time.

However, performance benchmark results are not to be taken verbatim, because they are not true representations of real-life production environments. Two years ago, the now-defunct Byte and Switch (which is part of Network Computing) covered a 9-year study on file system and storage benchmarking. In a very interesting manner, it revealed that a lot of the time, benchmark results are reduced to single graphs which carry little information about how the benchmark was conducted, how long it took, and so on.

The paper, "A Nine Year Study of File System and Storage Benchmarking", published by Avishay Traeger and Erez Zadok from Stony Brook University and Nikolai Joukov and Charles P. Wright from the IBM T.J. Watson Research Center, surveyed 415 file system and storage benchmarks from 106 published papers, and the article quoted:

"Based on this examination the paper makes some very interesting observations and conclusions that are, in many ways, very critical of the way 'research' papers have been written about storage and file systems."


Therefore, the paper scrutinized both the way the benchmarks were done and the way the results were reported. Judging by the strong title of the online article that reviewed the study ("Lies, Damn Lies and File Systems Benchmarks"), benchmark results are not the pictures that say a thousand words.

Be it TPC-C, SPC-1 or SPECsfs benchmarks, I have gone through some interesting experiences myself, and there are certain tricks of the trade, just like in a magic show. Some of the very common ones I come across are:

  • Short stroking – a method of partitioning or formatting a drive so that only the outer tracks of the disk platter are used to store data, reducing seek times and boosting throughput. This practice is done in I/O-intensive environments to increase performance.
  • Shortened tests – performance runs that last only several minutes to achieve the numbers, rather than prolonged periods (which would mimic real life)
  • Reporting aggregated numbers – note the number of nodes or controllers used to achieve the numbers. It is not ONE controller that achieves the headline figure, but the aggregated performance of many controllers (see the quick arithmetic after this list)
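
To illustrate the point about aggregated numbers, a quick bit of arithmetic on the two SPECsfs2008 results quoted earlier shows how different the per-node picture is:

```python
# Headline IOPS vs per-node IOPS for the two SPECsfs2008 results quoted earlier.
results = {
    "EMC Isilon S200 (140 nodes)": (1_112_705, 140),
    "NetApp FAS6240 (24 nodes)": (1_512_784, 24),
}

for system, (iops, nodes) in results.items():
    print(f"{system}: {iops:,} IOPS total, ~{iops / nodes:,.0f} IOPS per node")

# EMC Isilon S200 (140 nodes): 1,112,705 IOPS total, ~7,948 IOPS per node
# NetApp FAS6240 (24 nodes): 1,512,784 IOPS total, ~63,033 IOPS per node
```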

Hence, reproducing the published benchmark numbers in real life is usually impractical and very expensive. Unfortunately, customers are less educated about the way benchmarks are performed and published. We, as storage professionals, have to disseminate this information.

Ok, this sounds contradictory, because if I were working for NetApp, why would I tell a truth that could actually hurt NetApp sales? But I don't work for NetApp now, and I think it is important for me to do my duty and share more information. Either way, many people switch jobs every now and then, so if you want to keep your reputation, be honest up front. It could save you a lot of work.