Category Archives: Amazon
I like the way Amazon is building their Cloud Computing services. Amazon Web Services (AWS) is certainly on track to become the most powerful Cloud Computing company in the world. In fact, it might already be. And they are certainly not resting on their laurels, having launched 2 new services in as many weeks – Amazon DynamoDB (last week) and Amazon Storage Gateway (this week).
I am particularly interested in the Amazon Storage Gateway, because it addresses one of the biggest fears of Cloud Computing head-on. Even though Cloud Computing is changing the IT landscape in a massive way, many large corporations remain skeptical and are adamant about keeping their data on-premise, where it is private and secure. The barrier to entry for large corporations is not a trivial one, but Amazon is adapting to get more IT divisions and departments to try out Cloud Computing in a less disruptive way.
The new service is really about data storage and data backup for large corporations. This is important because large corporations have plenty of requirements for storing and backing up data. And as we know, a large portion of the data stored does not need to be transactional or accessed frequently. This set of data is usually kept for archiving or regulatory compliance reasons, particularly in the banking and healthcare industries.
In data backup operations, the reason data is backed up is to provide a recovery mechanism when disaster strikes. Large corporations back up tons of data every day, week or month, and this data only has value when a situation requires data relevance, data immediacy or data recovery. Otherwise, it is just plenty of data taking up storage space, be it on disk or on tape.
Both data storage and data backup cost a lot of money, in both CAPEX and OPEX. On the CAPEX side, you are constantly pressured to buy more storage to hold the ever-growing data. This leads to greater management and administration costs, both contributing heavily to OPEX. And I have not even included the OPEX costs of floor space, power and cooling, and people (training, salary, time and so on), which typically add up to 3-5x the capital investment in operating costs. Such a model of storage-related IT operations cannot continue forever, and storage in the Cloud offers an alternative.
These 2 scenarios – data storage and data backup – are exactly the market AWS is targeting. To simplify things and pacify large corporations, AWS introduced the Amazon Storage Gateway, which makes it easier for them to move some of their IT storage operations to the Cloud in the form of Amazon S3.
The video below shows the Amazon Storage Gateway:
The Amazon Storage Gateway is a software “appliance” that is installed on-premise in the large corporation’s data center. It integrates seamlessly into the LAN and provides an SSL (Secure Sockets Layer) connection to Amazon S3. The data being transferred to S3 is also encrypted with AES (Advanced Encryption Standard) 256-bit. Both SSL and AES-256 give customers a sense of security, and AWS claims that the implementation meets the data storage and data recovery standards used in the banking and healthcare industries.
The data storage and backup service regularly protects the customer’s data in snapshots, giving the customer a rapid recovery platform should they experience on-premise data corruption or disruption. At the same time, the snapshot copies in Amazon S3 can also be loaded into Amazon EBS (Elastic Block Store) volumes, and testing or development environments can be spun up and evaluated with Amazon EC2 (Elastic Compute Cloud). The simplicity of sharing and combining different Amazon services will no doubt give customers peace of mind, easing their adoption of Cloud Computing with AWS.
This new service starts with a 60-day free trial and then moves to a USD$125.00 (about Malaysian Ringgit $400.00) per gateway per month subscription fee. The data storage (inclusive of the backup service) costs only 14 cents per gigabyte per month. For 1TB of data, that is approximately MYR$450 per month. Therefore, minus the initial setup costs, that comes to a total of MYR$850 per month, or slightly over MYR$10,000 per year.
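To sanity-check that arithmetic, here is a quick sketch. The per-gateway and per-gigabyte figures are the ones quoted above; the MYR/USD exchange rate of about 3.2 is my own assumption for the period, not an official figure:

```python
# Rough monthly/yearly cost sketch for the Amazon Storage Gateway.
# The exchange rate below is an assumed figure for illustration only.
USD_TO_MYR = 3.2                 # assumed MYR/USD rate at the time of writing

gateway_usd_per_month = 125.00   # per-gateway subscription fee
storage_usd_per_gb    = 0.14     # S3 storage, per GB per month
data_gb               = 1000     # 1TB of data

monthly_usd = gateway_usd_per_month + storage_usd_per_gb * data_gb
monthly_myr = monthly_usd * USD_TO_MYR
yearly_myr  = monthly_myr * 12

print(f"Monthly: USD {monthly_usd:.2f} (~MYR {monthly_myr:.0f})")
print(f"Yearly:  ~MYR {yearly_myr:.0f}")
```

At that assumed rate, the numbers land right where the post says: roughly MYR$850 a month and a little over MYR$10,000 a year.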
At this point, I would like to relate an experience I had a year ago when implementing a so-called private cloud for an oil-and-gas customer in KL. They were using HP EVS (Electronic Vaulting Service) to an undisclosed HP data center hosting site in the Klang Valley. The HP EVS, which was an OEM of Asigra, was not an easy solution to implement, but what was more perplexing was that the customer had a poor understanding of their objectives and of their 5-year plan for keeping the data protected.
When the first 3-4TB of data storage and backup were almost used up, the customer asked for a quotation for an additional 1TB of the EVS solution. The subscription for 1TB was MYR$70,000 per year. That is 7x more than the AWS cost of roughly MYR$10,000 per year! I have to salute the HP sales rep. It must have been a damn convincing sell!
In the long run, the customer would be better off running their storage and backup on-premise with their HP EVA4400; adding an additional 1TB (and even hiring another IT administrator) would have cost a whole lot less.
Amazon Web Services has already been operating in Singapore for the past 2 years, and I am sure they are eyeing Malaysia as a regional market. Unless and until Malaysian companies offering Cloud Services learn to use economies of scale to capitalize on the Cloud Computing market, AWS will always be a big threat to CSP companies in Malaysia – and a boon to any company seeking cloud computing services anywhere in the world.
I urge customers in Malaysia to start questioning their so-called Cloud Service Providers on whether they can do what AWS is doing. I have low confidence in what most local “cloud computing” companies can deliver right now. I hope they stop window-dressing their service offerings and start giving real cloud computing services to customers. And as a customer, you must continue to research and find out which cloud services meet your business objectives. Don’t be dazzled by the fancy jargon or technical idealism thrown at you. Always, always find out more, because your business cost is at stake. Don’t be like the customer who paid MYR$70,000 per year for 1TB.
AWS is always innovating and the Amazon Storage Gateway is just another easy-to-adopt step in their quest for world domination.
My research on file systems brought me to a very interesting article. It is titled “Dynamo: Amazon’s Highly Available Key-Value Store” and dated 2007.
Yes, this is an internal storage system designed and developed at Amazon to scale and support Amazon Web Services (AWS). It is a very complex piece of technology and the paper is highly technical (not for the faint of heart). Amazon is probably the last place you would think to find such smart technology, but it’s true. AWS engineers are slowly revealing many of their innovations (think of the Amazon Silk browser technology).
It appears that many of the latest cloud-based computing and services companies, such as Amazon, Google and many others, have been developing new methods of storing data objects. These methods are very different from the traditional ones, and many companies are no longer adopting the relational database model (RDBMS) to scale their business.
The traditional 3-tier architecture often adopted by web-based applications (before the advent of the “cloud”) is evolving. As shown in the diagram below, the foundation tier is usually a relational database (or a distributed relational database) communicating with the back-end storage (usually a SAN).
All that is changing because the relational database model is not keeping up with the tremendous proliferation of web-based and cloud-based objects and unstructured data. As explained by Alex Iskold, a writer at ReadWriteWeb, there are scalability issues with the conventional relational database.
Before I get to the scalability issues mentioned in the diagram above, let me set the stage for the discussion.
For students of relational database theory, the term ACID defines and guarantees the transactional reliability of relational databases. ACID stands for Atomicity, Consistency, Isolation and Durability. According to Wikipedia, “transactions provide an ‘all-or-nothing’ proposition, stating that each work-unit performed in a database must either complete in its entirety or have no effect whatsoever. Further, the system must isolate each transaction from other transactions, results must conform to existing constraints in the database, and transactions that complete successfully must get written to durable storage.”
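To make the “all-or-nothing” idea concrete, here is a minimal sketch using SQLite (the table, account names and amounts are hypothetical). A transaction that fails midway leaves no partial update behind:

```python
import sqlite3

# Minimal sketch of ACID atomicity with SQLite: the transfer commits
# in its entirety or has no effect whatsoever.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # one transaction: all-or-nothing
        conn.execute(
            "UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass  # the partial debit above was rolled back automatically

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 0}
```

Because the exception aborts the transaction, alice’s balance is untouched: the debit never becomes visible without its matching credit.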
ACID has been the cornerstone of the relational database from the very beginning. But under the demands of greater scalability and greater distribution of data, all 4 components of ACID – Atomicity, Consistency, Isolation, Durability – can no longer hold true simultaneously. Hence, the CAP Theorem.
The CAP Theorem (aka Brewer’s Theorem) concerns Consistency, Availability and Partition Tolerance. At an ACM (Association for Computing Machinery) conference in 2000, Eric Brewer of the University of California, Berkeley presented the theorem. It states that it is impossible for a distributed computer system (or a database system) to simultaneously guarantee all 3 properties – Consistency, Availability and Partition Tolerance.
Therefore, as database systems become more and more distributed across cyberspace, the ACID model begins to break down: all 4 of its components can no longer be guaranteed simultaneously.
So when we get back to the diagram, both of the concepts on the left and right – Master/Slave OR Multiple Peers – will put a tremendous strain on a single, non-distributed relational database.
New data models are surfacing to handle these highly distributed data sets. Distributed object-based “file systems” and NoSQL databases are some of the unconventional data storage “systems” beginning to surface as viable alternatives to the relational database in cyberspace. One of them is the Amazon Dynamo Storage System (ADSS).
ADSS is a highly available, Amazon-proprietary, key-value distributed data store. It has the properties of both a distributed hash table and a database, and it is used internally to power various Cloud Services in Amazon Web Services (AWS).
It behaves like a relational database in that it stores data objects to be retrieved. However, the data objects are not stored in the table format of a conventional relational database. Instead, the data is stored in a distributed hash table, and the data content or value is retrieved with a key – hence a key-value data model.
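A toy sketch of the key-value idea (the store, key names and data here are made up for illustration, not Dynamo’s actual API): the whole object is written and read as one opaque value under its key, with no tables, columns or joins involved.

```python
# Toy key-value store: values are opaque objects addressed by key,
# not rows queried by column. Keys and data are hypothetical.
store = {}

def put(key, value):
    store[key] = value

def get(key):
    return store.get(key)

# An entire "shopping cart" object lives under a single key.
put("cart:alice", {"items": ["book", "e-reader"], "total": 218})
print(get("cart:alice")["total"])  # 218
```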
Some of Dynamo’s key design points, paraphrased from the paper:
- Physical nodes are thought of as identical and organized into a ring.
- Virtual nodes are created by the system and mapped onto physical nodes, so that hardware can be swapped for maintenance and failure.
- The partitioning algorithm is one of the most complicated pieces of the system; it specifies which nodes will store a given object.
- The partitioning mechanism automatically scales as nodes enter and leave the system.
- Every object is asynchronously replicated to N nodes.
- The updates to the system occur asynchronously and may result in multiple copies of the object in the system with slightly different states.
- The discrepancies in the system are reconciled after a period of time, ensuring eventual consistency.
- Any node in the system can be issued a put or get request for any key.
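The ring, virtual nodes and partitioning points above can be sketched roughly as follows. This is a simplified consistent-hashing illustration; the node names, the number of virtual nodes per physical node, and the replication factor N=3 are my assumptions, not Dynamo’s actual parameters:

```python
import hashlib
from bisect import bisect_right

def ring_hash(key: str) -> int:
    """Place a string at a position on the hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes_per_node=8):
        # Each physical node owns several virtual positions on the ring,
        # so hardware can be swapped without remapping most keys.
        self.ring = sorted(
            (ring_hash(f"{node}#{v}"), node)
            for node in nodes
            for v in range(vnodes_per_node)
        )

    def preference_list(self, key, n=3):
        """First n distinct physical nodes clockwise from the key's position
        -- the nodes that will store the object's replicas."""
        start = bisect_right(self.ring, (ring_hash(key),))
        nodes, i = [], start
        while len(nodes) < n:
            node = self.ring[i % len(self.ring)][1]  # wrap around the ring
            if node not in nodes:
                nodes.append(node)
            i += 1
        return nodes

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
print(ring.preference_list("cart:alice"))  # three distinct nodes for this key
```

Because each physical node owns many scattered virtual positions, adding or removing a node remaps only a small fraction of the keys, which is how the partitioning mechanism scales as nodes enter and leave the system.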
The Dynamo architecture addresses the CAP Theorem well. It is highly available: nodes, whether physical or virtual, can be easily swapped without affecting the storage services. It also performs well, since nodes (again, physical or virtual) can be added to boost performance. The highly available, high-performance design addresses the “A” piece of CAP.
Its distributed nature also allows it to scale to billions and billions of data objects, meeting the “P” requirement of CAP. The Partition Tolerance is definitely there.
However, as the CAP Theorem states, you can’t have all 3 at the same time. Therefore, the “C” or Consistency piece of CAP has to be compromised. That is why Dynamo has been labeled an “eventually consistent” storage system.
As data is stored into ADSS, changes to the data are propagated and asynchronously replicated to the other nodes in the system, eventually making all the data objects and their values consistent. However, given the speed of things in cyberspace and the nature of most Cloud Computing services, strict consistency can be difficult to accomplish – and that is OK, because in most distributed transactions, temporary inconsistency is acceptable.
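Here is a toy sketch of that eventually-consistent behavior. The reconciliation rule below is a simple last-write-wins by version number; the actual Dynamo paper uses vector clocks, so treat this purely as an illustration of replicas drifting apart and then converging:

```python
# Toy sketch of eventual consistency: replicas see an update at
# different times and later reconcile to the same value.
replicas = [{}, {}, {}]  # three replicas of the same key space

def put(replica_idx, key, value, version):
    """Write to one replica only, as an asynchronous update would."""
    replicas[replica_idx][key] = (version, value)

def reconcile(key):
    """Anti-entropy pass: every replica converges on the newest version
    (last-write-wins; real Dynamo reconciles with vector clocks)."""
    newest = max(r[key] for r in replicas if key in r)
    for r in replicas:
        r[key] = newest

# Two writes land on different replicas; replica 2 has heard nothing yet,
# so reads briefly disagree depending on which node you ask.
put(0, "profile:bob", "old data", version=1)
put(1, "profile:bob", "new data", version=2)

reconcile("profile:bob")
print([r["profile:bob"] for r in replicas])  # all three hold (2, 'new data')
```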
So that’s a bit about Amazon Dynamo. Alas, we may never get our grubby hands on this piece of cool data storage and management technology, but knowing that Dynamo is powering AWS and its business is an eye-opener into the realm of a new technology evolution.
Steve Jobs was great at what he did, but when it comes to Cloud Computing, Jeff Bezos of Amazon is the one. And I believe Amazon Web Services (AWS) is bigger than Apple’s iCloud, both at present and in the future. Why do I say that, knowing that the Apple fanboys could use me for target practice? Because I believe what Amazon is doing is the future of Cloud Computing. Jeff Bezos is a true visionary.
One thing we have to note is that we play different roles when it comes to Cloud Computing. There are Cloud Service Providers (CSPs) and there are enterprise subscribers. On a personal level, there are CSPs that cater to consumer-level services, and there are subscribers of that kind as well. The diagram below shows the needs from an enterprise perspective, for both providers and subscribers.
Admittedly, we know Amazon less from an enterprise perspective; they are probably better known for their engagement at the consumer level. But what Amazon is brewing could already be what Cloud Computing should be, and I don’t think Apple iCloud is quite there yet.
Amazon Web Services caters to the enterprise and the IT crowd, providing both Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) through its delectable offerings:
- Elastic Compute Cloud (EC2)
- Simple Storage Service (S3)
- Elastic Block Store (EBS)
- Elastic Beanstalk
- and many more
But the recent announcement of the Kindle Fire, their USD$199 Android-based gadget, was to me the final piece of Amazon’s Phase I jigsaw – the move to conquer the Cloud Computing space. I read somewhere that the USD$199 Kindle Fire actually costs about USD$201.XX to manufacture. Apple’s iPad costs USD$499. So Amazon is making a loss on each gadget they sell. So what! It’s no big deal.
Let me share with you this table, which will rattle your thinking a little bit. Remember this: Cloud Computing is defined as a “utility”. Cloud Computing is about services and content.
I hope the table convinces you that the device or gadget doesn’t matter. Yes, Apple and Amazon have different visions when it comes to Cloud Computing, but if you take some time to analyze the comparison, Amazon does not lock you into buying expensive (but very good) hardware, unlike Apple.
Take, for instance, the last point. Apple promotes downloaded media while Amazon uses streamed media. If you think about it, that is what Cloud Computing should be, because the services and the content are the utility. Amazon is providing services and content as a utility. Apple’s thinking is more old-school, still very much a PC-era mentality: you have to download applications onto your gadget before you can use them.
Even the Amazon Silk browser concept is more revolutionary than Apple’s Safari. The Silk browser offloads some of the processing to the Amazon Cloud, taking advantage of the power of the Amazon Cloud to do the work for the user. Here’s a little video about the Amazon Silk browser.
Apple’s Safari is still very PC-centric, where most of the Web content has to be downloaded onto the browser to be viewed and processed. No doubt Amazon Silk also downloads content, but some of the processing, such as read-ahead and applet-processing functions, has been moved to the Amazon Cloud. That’s changing our paradigm. That’s Cloud Computing. And iCloud does not have anything like that yet.
Someone once told me that the Cloud is about economics. How incredibly true! It is about having the lowest costs for both providers and consumers. It’s about bringing a motherlode of content that can be delivered to you over the network. Amazon has tons of digital books, music, movies, TV and computing power to sell to you. And they are doing it at a responsible pace, with low margins. With low margins, the barrier to entry is lower, which in turn accelerates Cloud Computing adoption. And Amazon is very good at that. Heck, they are selling their Kindle Fire at a loss.
Jeff Bezos has stressed that what they are doing is long term, much longer term than most. To me, Jeff Bezos is the better visionary of Cloud Computing. I am sorry, but the reality is that Steve Jobs wanted high margins from the gadgets Apple sells to you. That is Apple’s vision for you.
Photo courtesy of Wired magazine.