GoCloud Hosted Desktop News Roundup #17

Amazon unveil Redshift, 28% of cloud providers have disaster recovery, big data and cloud

Amazon unveiled their new Redshift data warehouse service this week at a keynote presentation, in a push to provide enterprises with an option for overcoming data migration challenges.

"We're hoping it'll be faster performance-wise," said Ivan Jurado, chief technology architect and general manager for marketing analytics company M-Sights.

It’s thought that Redshift will provide an affordable cloud model for smaller businesses and those who are just starting out. However, those with an infrastructure already set up are unlikely to want to move all of their data to Amazon, experts point out.

"It will be good for smaller guys and those who can start from scratch, maybe, but if you have a large investment in a data warehouse, you're not going to move it all to Amazon -- you already have sunk costs into that infrastructure," said Giedrius Praspaliauskas, senior systems architect for a consulting company based on the West Coast.

Another analyst said that adoption depends on where enterprises sit in the “procurement cycle”, and on whether they are coming to the end of an existing service’s life.

"You also have to look at the skills of your workforce," said Tony Witherspoon, senior solutions consultant for a consulting company. "I definitely wouldn't start with data warehousing as an entry point to the cloud -- you have to pick the low-hanging fruit first."

According to internal testing carried out by Amazon, Redshift can offer ten times the performance of traditional data warehouses.

"They haven't released the methodology for the benchmarks, so it's hard to make a comparison a DBA is going to take very seriously," said Carl Brooks, analyst with 451 Research. "But the claimed cost delta is so big that it will be attractive anyway. This is the exact same value prop as EC2."

To find out more about Amazon’s ongoing cloud plans, visit the AWS Summit 2012.

28% of cloud providers have app disaster recovery plans

Just 28% of cloud providers have application disaster recovery plans set up, a new report has found. Whilst many have hardware recovery and backup plans, the “logistical barriers to cloud-based resilience include regulatory compliance, cost and required changes to how applications are designed,” the report points out.

“At the end of the day, everything fails … [and] you need to build for that failure,” said Jeremy Przygode, an Amazon Web Services (AWS) reseller.

He went on to say that whether enterprises can “bake resiliency” into apps depends not only on the application itself, but on its complexity and on how much enterprises are willing to spend on ensuring resiliency.

Newly developed applications are often built with the cloud firmly in mind, and it’s these that ensure cloud outages don’t mean disaster. When Amazon’s servers went down in June, it didn’t really cause a problem for eCommerce website Decide “because we’re geographically distributed, and we’re set up to handle issues as long as it’s not across all of Amazon,” said CTO Kate Matsudaira.

“When the Amazon outage happened, we did get paged and notified, but when we saw that, we just added more capacity in another zone, and then that was it,” she said. “It was very easy and very common—these things happen in the cloud, so you need to design for that and prepare for it.”

“There are a bunch of different design patterns that you can use,” she said. “The one that’s coming to mind is a circuit-breaker pattern, where you have the idea of a downstream dependency, and set your software up so if it’s not there, you’re able to still give updates to the users.”
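Matsudaira didn’t walk through an implementation, but the idea behind the circuit-breaker pattern is straightforward: count failures against a downstream dependency, and once a threshold is crossed, stop calling it for a while and serve a degraded response instead. A minimal Python sketch of that idea might look like the following; the thresholds and the recommendation helpers are illustrative examples, not Decide’s code.

    import time

    class CircuitBreaker:
        """Wraps calls to a downstream dependency and trips open after
        repeated failures, so the app can fall back instead of hanging."""

        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures = max_failures   # failures allowed before the breaker opens
            self.reset_after = reset_after     # seconds to wait before retrying the dependency
            self.failures = 0
            self.opened_at = None              # None means the breaker is closed

        def call(self, func, fallback, *args, **kwargs):
            # While open, skip the dependency entirely until the reset window passes.
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_after:
                    return fallback(*args, **kwargs)
                self.opened_at = None          # half-open: allow one trial call
            try:
                result = func(*args, **kwargs)
                self.failures = 0              # a success closes the breaker again
                return result
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.time()
                return fallback(*args, **kwargs)

    # Hypothetical usage: serve cached results if the live service is unreachable.
    def fetch_recommendations(user_id):
        raise ConnectionError("downstream dependency unavailable")

    def cached_recommendations(user_id):
        return ["fallback item"]

    breaker = CircuitBreaker()
    print(breaker.call(fetch_recommendations, cached_recommendations, user_id=42))

The point of the pattern is exactly what Matsudaira describes: when the dependency “is not there”, users still get an answer, just a less fresh one.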

However, whilst AWS works well for companies such as web designers and clued-up eCommerce sites, enterprises will need to cross “multiple clouds,” experts say.

“Any cloud app that needs resiliency should run in any cloud and not be tied to a specific cloud,” said Edward Haletky, CEO of The Virtualization Practice LLC.

He went on to say that cross-cloud resiliency is a technical possibility due to common programming languages and network virtualization.

“If I’m talking [about the] application, I can design a Java app or a PERL app or PHP or even C app to cross multiple networks using physical or virtual VPNs to string my Layer 2 network together,” he said. “It is possible to do it, but the thing is, do you want to, and can you handle the expense of doing it?”

Big data and the cloud

Two of the biggest buzzwords in IT this year have been cloud computing and big data; however, many people are struggling to see how the two fit together, or to grasp what big data means at all.

Recently, Brian Lent and Ivan Sucharski of Medio spoke about how the two biggest trends of 2012 affect each other and what it all means. Lent said he would define big data as being on such a “scale that you can't have a single department effectively manage it.”

Lent went on to say that the “combination of cloud computing and big data is going to become more practical, simply because of the efficiencies of scale.” For IT departments, he added, managing the volume of data would be difficult.

With so few people skilled in big data science, it’s difficult to obtain business value; however: “if you think about that as a commodity and a limited resource, the question becomes, 'How do you centralize that into a cloud-based environment so everyone can get the value, but you don't have to have that person on-premises?'”

“There's elasticity, too. When you're running analytical models, you want them to run as fast as possible, but you don't need to run them every 10 minutes. So you need as many machines as you can get for the next half hour.”

“And then they're idle for 23 and a half hours, until the next computing cycle. In a cloud situation, you've got the flexibility of that elasticity without the cost of being offline 95% of the time,” said Ivan Sucharski.
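Sucharski’s argument is ultimately back-of-the-envelope arithmetic: pay for a short daily burst of many machines rather than keeping them running around the clock. A rough sketch in Python, using made-up machine counts and hourly rates rather than real cloud pricing, illustrates the gap:

    # Illustrative comparison of an always-on analytics cluster versus renting
    # the same capacity on demand for a short daily burst. The instance count
    # and the $0.50/hour rate are assumed figures, not actual cloud pricing.

    HOURLY_RATE = 0.50      # assumed cost per machine-hour
    MACHINES = 100          # machines needed to finish the model run quickly
    BURST_HOURS = 0.5       # the "next half hour" of heavy computation per day

    always_on_per_day = MACHINES * HOURLY_RATE * 24           # capacity idles most of the day
    on_demand_per_day = MACHINES * HOURLY_RATE * BURST_HOURS  # pay only for the burst

    print(f"Always-on cluster: ${always_on_per_day:,.2f}/day")
    print(f"On-demand burst:   ${on_demand_per_day:,.2f}/day")
    print(f"Idle time avoided: {1 - BURST_HOURS / 24:.0%} of the day")

Under those assumed numbers, the elastic approach pays for half a machine-hour per machine per day instead of twenty-four, which is the cost gap Sucharski is pointing to.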