Tuesday, January 28, 2014

Are you irritated that LogMeIn gave virtually no notice before doing away with its free service?  Consumers and small businesses alike were affected if they did not subscribe to LogMeIn Central or Pro.  Here are some alternatives that may be of value and are definitely worth looking into.

As always, please don't hesitate to contact TPUServices with questions! We're here to help!

~The TPUServices, LLC Team

 

5 alternatives to LogMeIn Free for remote PC access

Jan 28, 2014 3:00 AM

LogMeIn Free is gone, but don’t panic: You can find alternative remote-access tools that cost the same low price of nothing at all. Whether you need to access a document, collaborate with a colleague, or support several PCs, try one of these free tools to get back into the game.
  
TeamViewer 

 

I’ve been using TeamViewer for years to help out family and friends, and it has always been reliable. Simply download the program from the company’s website, and then install it (or run it without installation, if you desire) on both of the PCs you want to connect. During installation, you can also set the program for unattended control.
TeamViewer gives you easy, secure remote access to multiple computers.
For ad hoc use, simply run the program and log in from the controlling computer. The two components will connect, and up will pop a window containing the desktop of the computer to be controlled. TeamViewer installs as both a server and a client, so you can use it to take control or to allow control.
TeamViewer 9’s cooler features include the ability to open multiple remote sessions in tabs (as in a browser), cut and paste between computers via the clipboard, and drag and drop files from your desktop to the remote desktop. It’s a mature, stable, practical tool for anyone’s remote-control needs. Note that you’ll get the occasional message about upgrading to the pay version if you use TeamViewer regularly to connect to a lot of different PCs. You’re on your honor for that one.

Windows Remote Desktop


Although Windows Remote Desktop doesn’t support true screen-sharing (the screen of the controlled computer goes black instead of staying live) the way services such as Join.me and TeamViewer do, this built-in tool is free and fast, and it allows complete remote control over PCs. There’s even Microsoft Remote Desktop for the Mac, so you can remotely access your more artistic acquaintances’ Apple products.
Don’t underestimate the power of Windows’ built-in remote-connectivity tool.
The basic concept behind Windows Remote Desktop is to let users control their office computer remotely so that they can work from home. Hence, although all versions of Windows (Basic, Home, and so on) can establish a Remote Desktop connection and control a PC, only the Professional, Business, and Ultimate versions of Windows can be controlled.
Because most office computers are one among many on a network, you need to have the office router forward a port (3389, the Remote Desktop default) to the PC you want to control. You can edit the Registry to allow control of more than one PC by giving each its own port, but that’s a fairly technical task.
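If you do go the multi-port route, the listening port lives in a well-known Registry value. The snippet below is only a rough sketch in Python (using the standard winreg module) of what that edit looks like; the port number is hypothetical, it must run with administrator rights on the PC to be controlled, and you still need to restart Remote Desktop Services (or reboot), allow the new port through Windows Firewall, and forward it on the router.

# Sketch only: point Remote Desktop at a non-default port on this PC.
# Run as Administrator; the new port (3390) is just an example.
import winreg

NEW_PORT = 3390  # hypothetical; use a different port for each PC you expose

key_path = r"SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "PortNumber", 0, winreg.REG_DWORD, NEW_PORT)

print(f"Remote Desktop will listen on port {NEW_PORT} after the service restarts.")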
Windows Remote Desktop works great once you’ve set it up, but if you want to control multiple PCs on a regular basis, the next option might be better for you.

VNC

 

VNC, or Virtual Network Computing, isn’t itself a product but an open-source remote-control and display technology that’s implemented by TightVNC (free), UltraVNC (free), and RealVNC (free and pay), among others. VNC isn’t hard to use, but it’s not as simple as Join.me and TeamViewer, which don’t require user knowledge of IP addresses.
VNC is a good option if you need to control multiple PCs regularly.
To use VNC, install it on both of the PCs you want to connect and then set them to listening. To control another PC, simply open the VNC viewer (client), enter the PC’s IP address, and have at it. You may also have to open port 5900 on your firewall and router and direct that port to the PC you want to control.
You can use VNC to connect to multiple PCs behind a public IP by opening and using more ports. Most VNC implementations install both the server and viewer software by default, so (as with TeamViewer) you can control in either direction.
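Before fiddling with the viewer, it can help to confirm that the server and the port forwarding are actually reachable. The sketch below (plain Python, with a made-up address) simply connects on port 5900 and reads the 12-byte version banner that every VNC/RFB server sends first; if the banner comes back, the path to the server is open.

# Sketch: check that a VNC server answers on the usual port before launching the viewer.
# The host address is hypothetical; substitute the PC you want to control and the
# port you actually forwarded (5900 is the default for display :0).
import socket

HOST, PORT = "192.168.1.50", 5900

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    banner = sock.recv(12)  # an RFB server announces its protocol version first
    print("VNC server answered with:", banner.decode("ascii", "replace").strip())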
Though it’s a tad difficult to set up, VNC is cross-platform (Windows, Mac, Linux), and it works extremely well once installed.

Join.me

 

Join.me is a meeting service (free and pay) from LogMeIn that also provides remote control. It’s convenient for impromptu support in that all you need on the controlling PC is a Web browser. The user with the computer that will host the meeting (and offer control) simply surfs to the Join.me site, selects Start Meeting, and downloads a file.
Meeting service Join.me also offers remote access—all you need is a Web browser.
After running said file, the meeting originator passes the provided nine-digit passcode to the user or users on the other end, who in turn enter the passcode in the Join Meeting field on the Join.me homepage. The meeting originator’s desktop will appear in the browser. Once remote control is granted, you can chat, send files, and more. Easy-peasy, but note that Join.me isn’t suited for unattended remote control, which makes it only a partial replacement for LogMeIn.

WebEx Free


Most users think of WebEx as a tool for multiuser boardroom meetings, but it’s also perfectly suitable for small-scale, live (not unattended) remote control and support. WebEx works a little differently from Join.me in that installing software is required at both ends, but that’s a relatively painless procedure.
WebEx: Not just for multiuser meetings.
Once users have joined the meeting, initially they can only view the originator’s desktop, but the originator can make another person the presenter, pass control over the mouse and keyboard, and share files, chat, and utilize webcams for face-to-face interaction. There’s a bit of a learning curve if you stray from the main features (available from the usual drop-down panel at the top of the display), but overall WebEx is quite easy to use. 


Don’t get spoofed

 

Because of the popularity of remote-control and remote-meeting services, the Web is rife with spoof sites (those that look very much like the correct one, but aren’t) that will attempt to lure you in if you don’t type the URL correctly. Downloading software from these sites can be dangerous to your computer’s health, as well as to your wallet. Sometimes the bad guys will try to sell you support.
The correct site addresses for the services I’ve mentioned are teamviewer.com, join.me, webex.com, tightvnc.com, uvnc.com (UltraVNC), and realvnc.com; Windows Remote Desktop is built into Windows itself.
Thanks to the growth in distributed and mobile workforces, the ability to access and control a PC remotely is a must for workers and IT administrators alike. That’s why we’ll all miss LogMeIn Free. But if you really love one of these free alternatives, consider throwing a few bucks to the developer. Who knows: Your contribution could help to keep the program going for everyone.


Sunday, January 19, 2014

The Internet of Things has been in technology news fairly often as of late, most recently in an article referencing an Internet-connected refrigerator. The article below looks at whether adding Internet-connected devices poses additional problems for an already busy network and its administrators, or is simply a preview of the opportunities to be had. 

~The TPUServices™, LLC Team

Sumo Logic: Is the Internet of Things a problem or an opportunity?

Summary: IT administrators have many challenges simply monitoring and managing today's complex web of workloads, servers, and clients. What happens when vehicles, copiers, security devices, smartphones, tablets, and who knows what else appear on the network?

By Dan Kusnetzky for Virtually Speaking

Sanjay Sarathy, CMO of Sumo Logic, stopped by to talk about the Internet of Things and whether adding more devices to an already complex network environment is going to become an overwhelming problem, or an opportunity for companies to better understand their infrastructure and make it serve the business's needs better. 

The key challenge, Sarathy believes, is providing tools that will produce actionable insight to someone who doesn't know the right questions to ask or the right things to examine. I tend to agree. 

When so much is happening, how can IT determine what is normal?

Sarathy points out that one of the best ways to help IT staff learn what is happening (what devices are interacting with each other, what levels of performance are normal, and what counts as an "anomaly") is to gather up the operational data in each device's log files, learn from what is found in those files, and determine the operational baseline for that infrastructure. Once the baseline is established, anything out of the ordinary can be flagged and an alert sent to the IT staff. He pointed out that log files are always a source of truth, that is, a place to learn what the facts are in any given situation. 
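To make the baseline idea concrete, here is a deliberately simple sketch, not Sumo Logic's actual algorithm, that counts log events per device per minute, learns a mean and standard deviation from a training window, and flags buckets that fall well outside it. The log format (a timestamp and device name at the start of each line) is an assumption for illustration.

# Illustrative baseline-and-alert sketch, not Sumo Logic's algorithm. Assumes each
# log line starts with "<ISO timestamp> <device> ...".
from collections import Counter
from statistics import mean, stdev

def events_per_minute(log_lines):
    """Count events per (device, minute) bucket."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        timestamp, device = parts[0], parts[1]
        counts[(device, timestamp[:16])] += 1   # bucket by YYYY-MM-DDTHH:MM
    return counts

def find_anomalies(baseline_lines, new_lines, threshold=3.0):
    """Flag (device, minute) buckets whose event rate strays far from the baseline."""
    baseline_counts = list(events_per_minute(baseline_lines).values())
    mu, sigma = mean(baseline_counts), stdev(baseline_counts)
    return [
        (bucket, count)
        for bucket, count in events_per_minute(new_lines).items()
        if sigma and abs(count - mu) > threshold * sigma   # simple z-score style rule
    ]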

Sarathy, of course, mentioned that this is part of what his company's technology does. Machine data analytics combined with sophisticated "machine intelligence" makes it possible to define what is normal in a given environment and can shine a needed light on what's happening, what's normal, and what's unusual. 

How does this differ from what others offer?

When asked how Sumo Logic compares to others touting the same things, such as BMC, CA, ExtraHop, HP, IBM, Netscout, New Relic, Opnet, Prelert, Splunk, Zenoss, and a number of others, Sarathy replied that machine learning is a good start. Unless companies spend a great deal of time creating rules that define what normal operation looks like in their own IT infrastructure, it is difficult for them to determine on their own what is normal. 

Sumo Logic, he said, goes beyond that by combining machine learning with human interaction. That is, Sumo Logic determines what is happening by quickly scanning the operational logs and then sends the IT staff an alert. When IT staff respond to an alert, they are creating the rules for future alerts. This, Sarathy pointed out, is quite different from making the staff define rules before a machine intelligence system can operate. 

IT staff can simply tell the system that the set of operations seen are normal and no future alerts are needed or that something is really wrong and that higher levels of alerts should be generated. 

Snapshot analysis

When I speak with IT administrators, network operators, and the like, I almost always learn that they are doing their best to deal with an ever-changing, ever-expanding environment. They depend upon sophisticated tools to keep up with their environment. Tools such as those offered by Sumo Logic and its competitors are vital today and are likely to be even more important once every little thing lives on the net and communicates status, makes requests, and generates more operational data.  




Dan Kusnetzky

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility, and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.

 

Tuesday, January 14, 2014

SDN will never happen, says VMware exec

This article is about SDN (software-defined networking), what its capabilities are, and why it’s so controversial. There is currently a paradigm shift in the technology world from hardware-based networking to virtual environment capabilities. This is a huge shift that will take some getting used to for most, as it reduces the need for dedicated physical hardware, such as switches and routers, which, up until now, have been the only option. This leads to the inevitable question: where will your next investments lie, in physical hardware or in the virtual networking of the future? Give TPUServices™, LLC a call today and let us help you navigate the technology of the future!

SDN will never happen, says VMware exec 


So says the head of VMware's network virtualization business unit

By John Dix, Network World 

January 13, 2014 06:33 AM ET

Steve Mullaney

Network World - Rack and stack that network and then walk away and leave it alone. VMware's NSX technology will provide all the control necessary going forward, says Steve Mullaney, senior vice president and general manager of VMware’s Networking & Security Business Unit. 


In a wide ranging interview with Network World Editor in Chief John Dix, Mullaney outlines the company’s vision of software controlled networks, challenges other Software Defined Networking visions, including Cisco’s ACI initiative, and outlines how the company will roll out higher layer network services. Mullaney claims the company is winning big accounts that will be made public this year, and that 2015 will see an explosion in adoption.


Describe the problem you’re trying to solve.

I think IT shops are looking at Amazon and Google and Facebook and saying, “We need to be more like them.” A primary driver is an agility requirement. IT has realized that, while it can do wonderful things on the compute side in terms of spinning up servers in seconds, the operational model of networking is still very manual, very static, very brittle. That’s the primary problem we’re solving.
 

Along with that comes operational efficiency. At those big data center innovators one guy manages two or three orders of magnitude more servers than the guy in the average IT shop. So there’s an efficiency thing from an operational perspective which ultimately relates to OpEx savings. And then on the CapEx side there is the same thing. People are asking, “How can I generalize my infrastructure and have commonality so I can ultimately be more like the Googles and the Amazons?”

What NSX does is it says, the way to get to that Promised Land is through what we call a software-defined data center. We’ve seen the huge transformational characteristics of server virtualization, but we need to virtualize the entire infrastructure, and that means the network as well. The network is the key enabler for that SDDC vision.


Even though VMware endorses the software defined data center concept, the company goes out of its way to avoid describing its network approach as Software Defined Networking. Why?


I guess I’m not really sure what SDN means because it means so many things to so many people. I think of it in terms of the small “s,” small “d,” small “n” meaning. Do you believe the future of the data center will be more defined by software than hardware? Yes, I do. Therefore I am an sdn, small letters, advocate. It’s a philosophy to me. It’s not a thing.  


And so yes, I believe software will define it. I believe the way to get that is through network virtualization, where you decouple your software from the underlying physical infrastructure. We think of the physical network as a fabric, the back plane. Its job is to forward packets from Point A to Point B. We will tell you what to do with that packet, and then you just need to forward it. I’ve completely taken the intelligence, other than forwarding, out of the physical infrastructure and put it in software. And then, through software, we can create the illusion of a fully functional network with complex services all in software.

Basically what VMware did on the server side is safely reproduce the x86 environments in software, and now we’re doing that on the network side with network virtualization. And once you’ve done that it’s all programmatically controlled through APIs such that you can create logical networks, you can attach VMs, you can apply services, and you can do all kinds of wonderful things in software. And then when you’re done you hit a button and boom, everything goes back into the resource pool. 


So that to me is software-defined networking with small letters. It has nothing to do with controlling physical switches and using OpenFlow to control those switches. All of this is done, again, with the philosophy of virtualization, which is decouple. That’s the key word. You’re decoupled from the physical infrastructure. 

The key is not to have to touch the physical infrastructure. Leave it alone and do what you do as an augmentation. Make that physical infrastructure better without touching it. Some of the network people have kind of bastardized what SDN means. They say, “Well, since I’m a physical network company, SDN must therefore mean software control of all of my physical switches.” No. That’s like a better CLI. It’s interesting, but it’s not actually what people need. What they need is network virtualization and being decoupled from the physical infrastructure, because the whole point is not to have to touch it. 


For companies that go the other route and end up with some physical SDN controllers, will those controllers be able to interact with your controllers?

Absolutely. We’ve talked publicly about things we’re doing with HP. HP’s SDN controller will control their physical hardware and we’ll do some federation with them. And if somebody wanted to control their physical infrastructure -- I can’t think of any reason why they would want to, but if they did -- we’d say great. Go for it. We are very complementary to that.


You folks are talking about rolling out various upper Layer network services in software. Expand on that a bit. 

Firewall is a perfect example. All of our firewall intelligence is at the edge of the network, either in the vSwitch or in a top-of-rack physical switch. And then the distribution and core, the physical part of the data center network, just looks like an L3 network that forwards packets, and that’s it. You rack it once, you wire it once, and you never touch it again.

So we build effectively what is a distributed scale-out version of a firewall. There’s a little piece of firewalling at every vSwitch. And as you add more compute nodes you add more firewalling capability, and when you move VMs around that firewalling capability moves around with it.


That’s really good for East-West firewalling between servers within the data center. The big firewall vendors tend to have big honking boxes at the North-South end of the data center. Well, guess what? The bad guys are everywhere. Yes, you still need the North-South gateway firewall, but a lot of companies now are saying they need East-West firewalling, but to build that with physical appliances would be incredibly expensive. And that approach is also very static and brittle in the sense that you have to decide how much capacity you need at the beginning and build up a DMZ, and then if you surpass that capacity you have to go build another one, which will take months and is expensive.


Compare that to doing it in a network virtualization way. As I grow I’m adding more firewalling capacity and it’s in software so there are no more appliances to buy. And because it’s built into the kernel of the hypervisor, it’s incredibly high performance. And so now I can build effectively what becomes on-demand DMZs, DMZs that will scale out as my application needs scale, and I don’t have to buy a whole bunch of CapEx equipment up front. I get to do it very much more efficiently and then, as things change in the data center, as VMs move around, all of my firewall policies move along with it.


It’s very much an incremental opportunity that the current firewall vendors just really can’t satisfy. They’re not, per se, losing out on an opportunity. It’s an opportunity that only really VMware is going to be able to get. And then what we do with folks like Palo Alto Networks, who we recently partnered with, is map through their management interfaces to integrate policies such that it will work together with the devices they have as well as our distributed firewall. So I view it as a complementary thing.


Besides firewalls, what other kind of services will you offer?

Load balancing, for one. Customers say, “I’ve got a lot of affinity for F5. You guys need to integrate with them.” We’ve announced a partnership with F5, but we haven’t announced the level of things we’re doing; it is very similar to Palo Alto. Over time you’re going to see us become this network virtualization platform that will integrate with partners.


Let’s switch to comparing and contrasting your approach to that being pursued by Cisco. How do you sum that up?

At the highest level there are things we completely agree on and then there are things we are in complete disagreement about.
 

We agree on the problem. We agree on the benefit. So basically when Cisco came out with their ACI launch it was really good from our perspective because they validated everything we’ve been saying for years. And from a customer perspective, the thing you’re looking for before any market is going to cross from the early adopters to the mainstream is consistency of the problem statement and the benefit. 

Cisco came out and said everything VMware has been saying is absolutely right. The network is the problem. We need operational efficiency and we need to deliver this agility. We need to be able to deliver applications faster. We need to be more like the Amazons of the world. Beautiful. So now a customer hears the exact same thing from us and Cisco. So now the customer says, “Great. I’ve got two alternatives.” 

But how we go about it couldn’t be any more different. It’s the complete opposite. We believe in the software-defined data center. We believe in the power of virtualization to enable that. We believe in the power of decoupling software from the physical infrastructure. 

Cisco came out and said, “We believe in the hardware-defined data center. We believe in the power of ASICs. We believe in the power of coupling the software to the hardware. We believe in coupling the software not just to any hardware but to our hardware. And oh, by the way, it is also our new hardware so you will need to rip out your existing infrastructure and replace it.”


So it’s very different. It effectively boils down to a profession of faith. What do you, as a customer, believe in? Do you believe in the power of software, that the power of virtualization is going to lead you to the Promised Land? Or do you believe in coupling to hardware and new ASICs?  

And you know what? There will be people that will believe in that. Cisco has been their partner for 25 years and has served them well. Right? But if you look through the history of IT, most of the time decoupling and abstraction in software wins out. And I think we’re starting to see that with the early adopters. What’s exciting is people are picking their architectures right now. It’s happening. This is why Cisco had to come out and announce now, even though their products aren’t available for a year. Because they saw architecture decisions being made.


"Cisco’s ACI, guess what that says? “Oh, no. You’ve got to buy new hardware. You’re going to rip all that out and you’re going to put in the new hardware with the ACIchip.” That ain’t going to go over well. Trust me." 

Another truism in the history of IT is the need to evolve what you already have. Given the huge amount of dollars invested in network infrastructure, no one is going to rip it all out and start anew.


Absolutely. Which is why our story is so much better. A lot of people have Cisco, and you know what I tell them? They have great products. Keep them. You don’t need to rip them out. Customers want a solution that is disruptive in its benefits but non-disruptive in its deployment. We can help them do what they want to do but with their existing infrastructure.


You will probably protest, but there is a lot of industry chatter about the inherent limitations in your overlay approach. What are those limitations in your view?


If you look at what Cisco has done, it’s a very similar architecture. They do exactly what we do; they use overlays, but they used proprietary headers in VXLAN and they tie it to their physical hardware. I get what they’re doing. They make money when they sell hardware so they have to tie it to the physical hardware. We look at it and say, “Not necessarily.” I think it’s good to give the customer choice. 


OK, but you didn’t really answer the question about the limitations of the overlay approach. For example, you say rack and stack and leave it and we’ll do the rest, but you still have infrastructure provisioning and optimization and management issues to deal with, which capital letter Software Defined Networking promises to address. 


I’ve been in networking for 25 years and I can tell you that vision will never happen. People will talk about that for another five years and then they’ll grow tired of it. Watch. That will never happen because it’s not needed. I mean, one of the things is there will be connections where there need to be connections and there will be interfaces between the overlay and the underlay, but all that is needed is a loose coupling. It does not need to be a hard coupling. 

People talk about elephant flows and mice flows, where an elephant flow is a long-lasting big flow that can stomp on smaller flows, the mice flows, and make for a bad SLA for those mice flows, and say you need a tight coupling of the overlay and the underlay for that reason.

Hogwash. From inside the hypervisor we have a much better way to actually highlight those elephant and mice flows, and then we signal to the physical infrastructure, “This is an elephant flow, this is a mice flow, go do what you need to do.” And we’ll be able to have that coupling not just for one set of hardware, but for everybody, whether it’s Arista or Brocade or Dell, HP, Juniper, etc. We’ll be able to work with anyone and actually do that handoff between the overlay and the underlay. So you can go through every single one of those examples and show that a generalized solution and a loose coupling is actually as good or better and gives you the flexibility of choice.
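To make the elephant/mouse distinction concrete, here is a deliberately simple sketch (not VMware's NSX mechanism) that tags flows by cumulative byte count, the kind of label an overlay could then signal to the physical underlay. The threshold and the sample flows are made up for illustration.

# Illustrative sketch only: label flows as "elephant" (long-lived, high-volume)
# or "mouse" by cumulative byte count. Threshold and flow records are hypothetical.
from dataclasses import dataclass

ELEPHANT_BYTES = 10 * 1024 * 1024  # 10 MB: arbitrary cutoff for this sketch

@dataclass
class Flow:
    src: str
    dst: str
    bytes_sent: int

def classify(flows):
    """Return {(src, dst): 'elephant' or 'mouse'} for each observed flow."""
    return {
        (f.src, f.dst): "elephant" if f.bytes_sent >= ELEPHANT_BYTES else "mouse"
        for f in flows
    }

flows = [Flow("10.0.0.5", "10.0.0.9", 250_000_000), Flow("10.0.0.5", "10.0.0.7", 4_096)]
print(classify(flows))  # labels an overlay could hand off to the underlay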

How do you do traffic engineering across the whole network though, if you’re trapped in your world?

If you look at the management and the visibility of networking now, it’s horrendous. So through network virtualization you actually improve the visibility because of our location in the hypervisor. As soon as everything went virtual, the physical network lost visibility because it wasn’t in the right spot. The edge of the network has moved into the server, so you have to have a control point inside the vSwitch as a No.1 starting point. And honestly, once you own that point, you have way more context about what’s going on and what applications are being used and response time and everything else like that compared to if you’re just looking at headers inside the physical network. When you’re looking at a packet inside the network you don’t have a lot of context. 

How do the Amazons and the Facebooks and the Googles of the world do it today? They build software-defined data centers. They buy generalized physical infrastructure (most of them actually build their own), and they create high performance L3 switch fabrics that do one thing and one thing only -- they switch packets in a non-blocking manner from Point A to Point B. That’s what the network is going to become in data centers. So you rack it once, you wire it once, and you never touch it again. That's what's going to happen.


A lot of people trot out those companies as examples, but they’re such rarified environments that they have precious little to do with real-world data centers.

You’re right. No one else is like them, and they’re specialized environments. But what if you could get close to that type of operational model?  Can you build a generalized IT infrastructure that gets you closer to how those guys build infrastructure? That’s what VMware does. That’s what we’re going to enable people to do. And it is a journey, and you’ve got to be able to leverage existing infrastructure and then take baby steps along the path, because you can’t just rip and replace. That’s what we do. That’s what virtualization does. That’s why it’s so exciting.

Do you have any limitations in terms of what you can achieve across multiple data centers? 

Right now most people are focused just inside the data center. But absolutely what we look at is a system view of VM-to-VM inside the data center and across data centers. So linking into MPLS backbones and then popping out the other side, creating a logical network that has VMs in one data center as well as VMs in the other that look like they’re in the same logical network. That absolutely is what you’re going to be able to get with network virtualization. And not just your other data centers, but external data centers that you use for disaster recovery and things like that. 

What does all this greatness cost the user? How do you price your stuff?

It’s priced per port. That’s how networking people are used to buying. When you buy physical network gear you may buy it as a box, but basically you divide it out by 16 or 12 or whatever number ports, so you’ve paid per port. The good news on this is you’re only paying for what you use, so you’re not fixed to some increment of 48 ports or whatever it happens to be. However many virtual ports you are using, that’s what you pay for. Then as you grow you pay more.

So I’ve already paid for my physical network, now I have to pay more?


The thing is it’s making your physical infrastructure better. It was the same with server virtualization. You already bought the server, so why are you buying server virtualization? Well, because you want to make that server better. You want to make it better in terms of CapEx. You want to make it better in terms of OpEx. So it’s the same thing with the network. 

You already bought a physical network and paid X for it. That’s a sunk cost. But now when your favorite network vendor comes in saying you need to upgrade, because of me you can tell him “No thank you. I think the gear I have now is perfect. It’s all I need. In fact, I can delay that upgrade for another three to four years. Thank you very much.”

We’ve had many customers look at this as a CapEx deferment. They had budgeted a massive CapEx upgrade to get this type of functionality, but now they don’t need to do that. They’re putting their money into software instead of the physical infrastructure, and it’s a hell of a lot easier, and cheaper, than ripping and replacing their gear.


Do you have any reference points to show what kind of success you’re having?

You’re going to start seeing a lot more customer wins. People are making these architectural decisions now and we’re winning them. So we’re going to start marching these people out. 

And from a revenue perspective, we have told financial analysts that we’ll be material from a VMware perspective in 2015. We have customers in production. We’re doing revenue now, lots of it, but when you’re part of a $7-billion-per-year company, what is material? Right now the important thing is winning those architectural decisions. And I’m talking top financial companies, top service providers, top media companies and the leading enterprises. 2014 is when we’re going to trajectory out across the chasm. It’s going to happen.



Monday, January 13, 2014

Is a cloud solution part of your disaster recovery plan, or is it already integrated into your plan? Either way, consider the question: "What am I going to do if my cloud provider goes out of business?" The article below will help address some of those issues and, as always, TPUServices™, LLC is here to assist you in navigating those scenarios.
 
Cloud's worst-case scenario: What to do if your provider goes belly up


The best time to prepare for getting data out of the cloud is before you put it in there


Brandon Butler, Network World
January 08, 2014 06:50 PM ET

Network World - Last September customers of storage provider Nirvanix got what could be worst-case scenario news for a cloud user: The company was going out of business and they had to get data out, fast.

Customers scrambled to transfer data from Nirvanix’s facilities to other cloud providers or back on to their own premises. “Some folks made it, others didn’t,” says Kent Christensen, a consultant at Datalink, which helped a handful of clients move data out of the now-defunct cloud provider.


Nirvanix wasn’t the first, and it likely will not be the last cloud provider to go belly up. Megacloud, a provider of free and paid online storage, suddenly went dark, without warning or explanation, two months after Nirvanix’s bombshell dropped. Other companies have phased out products they once offered customers for cloud storage: Symantec’s Backup Exec.cloud, for example, is no longer being sold by the company. 


More could be on the way: An analyst at Gartner’s data center conference late last year predicted that one in four cloud providers will be acquired or forced out of business by the end of next year, mostly through merger and acquisition activity.


With all these changes happening in the fast-moving cloud industry, it begs the question: What should users do if their worst-case scenario actually happens and their public IaaS cloud goes dark?


At the most basic level, preparing for your cloud provider to go out of business should start before you even actually use the cloud, says Ahmar Abbas, vice president of global services for DISYS, an IT consultancy. DISYS helps companies create a cloud strategy, and one of the first things to plan before going into the cloud is how to get the data out, at any time. “It all goes back to how businesses historically plan for disaster recovery,” says Abbas.


Typically DISYS will work with customers to classify the applications and data that are being placed in the public cloud and rank them based on criticality to the business. High-value data and applications that are mission critical need the highest levels of availability and are treated differently from low-value data that an organization can live without for a certain period of time. If a business is running a core enterprise app in the cloud that is crucial to the company’s daily operations, it should have a live copy of that app in another location, be it another cloud provider or the company’s own premises, Abbas says. For testing materials, perhaps there is a backup once a month, or maybe even not at all. 





"It all goes back to how businesses historically plan for disaster recovery."
— Ahmar Abbas, vice president, DISYS






There are other common sense steps users can take. First and foremost, be smart about who you choose to work with. “If you pick an Amazon or an IBM, then the chances of a severe event happening are much diminished,” he says. “They’re not going out of business any time soon.” Just going with a big-name provider isn’t a panacea though. Amazon Web Services and just about every cloud provider has outages and service disruptions, so customers should always prepare for the worst and hope for the best.


Cloud providers offer service-level agreements (SLAs), which guarantee a certain amount of uptime. But users should follow the SLAs closely to ensure their systems are architected in a way that lets them be reimbursed for any downtime by their provider. Some vendors, like Amazon, require multiple Availability Zones within the cloud to be down before an SLA kicks in.
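For a sense of what an uptime guarantee actually buys, the arithmetic is straightforward; the 99.95 percent figure in the sketch below is only an example, not a quote from any particular provider's SLA.

# Worked example: convert an SLA uptime percentage into allowed downtime per
# 30-day month. The 99.95% figure is illustrative, not any specific provider's terms.
MINUTES_PER_MONTH = 30 * 24 * 60          # 43,200 minutes in a 30-day month

def allowed_downtime_minutes(uptime_pct):
    return MINUTES_PER_MONTH * (1 - uptime_pct / 100)

print(f"{allowed_downtime_minutes(99.95):.1f} minutes of downtime per month")  # about 21.6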


Customers can still find themselves stuck, though, in a situation like the one hundreds of users of the Nirvanix platform faced. If a data migration out of a cloud is necessary, Datalink’s Christensen says there are ways to make the process as efficient as possible. One is to reduce the amount of data actually being transferred using data caching, deduplication and WAN optimization tools. Some providers don’t charge for putting data into their cloud and only charge customers for getting data out, so making that process as efficient as possible is beneficial. Datalink worked with a handful of customers after the Nirvanix bombshell and most were able to recover their data, or were already using Nirvanix as a secondary storage site. Christensen says that’s a common use case for the cloud today: Use it as a backup site so that the original copy of the data is still live somewhere else in case there is an issue in the cloud.
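To illustrate why deduplication shrinks a migration (this is a generic sketch, not any particular vendor's tool), the idea is to hash fixed-size chunks and ship only the chunks the destination doesn't already hold; the chunk size, file name, and the set of already-stored digests are assumptions for the example.

# Generic deduplication sketch: transfer only chunks the destination hasn't seen.
# Chunk size, file name, and the already-stored set are assumptions for illustration.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB chunks, an arbitrary choice for this sketch

def chunks_to_transfer(path, already_stored):
    """Yield (digest, chunk) pairs for chunks not already at the destination."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in already_stored:
                already_stored.add(digest)
                yield digest, chunk

stored_digests = set()  # digests the destination already holds (empty here)
to_move = sum(len(chunk) for _, chunk in chunks_to_transfer("backup.img", stored_digests))
print(f"{to_move} bytes would actually be transferred")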


Noah Broadwater, who heads up digital products and technology for the Special Olympics, has been using cloud services for 10 years, so he’s aware of the risks cloud service providers pose. He’s come up with an innovative way to hedge his use of the cloud.


One of the biggest problems customers face if they do have to get data out of a public cloud platform is that the vendor may use a proprietary file storage platform, so even if the user is able to get data out from their defunct cloud provider, the end user may not have the ability to run those applications or data on their internal systems. “If you don’t have the system software to run it on, it doesn’t matter if you have the data, it’s unusable,” he says. Technically data mapping and cleaning can be done, but that’s a long and cumbersome process.

Broadwater’s come up with another way: Before entering into a contract with a provider, he negotiates an escrow arrangement that requires the vendor to deposit the most up-to-date version of its software into a locked account.


Broadwater, as the customer, only has access to that account and software if the vendor declares bankruptcy or if the customer is unable to use data stored in the vendor’s cloud. “We can’t touch the code unless those clauses in the contract allow it,” he explains.


Broadwater uses a third party storage provider, in this case Iron Mountain, to hold the machine image of the software for the life of the contract. Vendors have been surprisingly willing to comply, Broadwater says. It is an extra expense in the contract, but it provides full protection without paying for a full secondary backup of all the data, he explains.


There are many ways customers can prepare for the worst-case scenario in the cloud, from simple disaster-recovery best practices to unique plans of attack. Broadwater says it’s better to be safe than sorry.


Senior Writer Brandon Butler covers cloud computing for Network World and NetworkWorld.com. He can be reached at BButler@nww.com and found on Twitter at @BButlerNWW. Read his Cloud Chronicles here