The Rodney King School of Virtualization is about teaching, and sometimes cajoling, all of our firms’ technologists into embracing the benefits of sharing.
Can you afford to pay for what you don’t need?
In pre-cloud infrastructure, any given box was sized to run at an under-utilized 10 percent of capacity, because headroom is always needed for unknown levels of peak demand. Multiply that box’s excess capacity by a room full of like-kind servers and multiple data centers around the world. Aside from the overspend on hardware, software and maintenance, that setup is probably the farthest thing from being environmentally green. Yet forcing every application onto a denser virtualization farm doesn’t solve the problem on its own: multiple applications simply run virtually, still peaking at not much more than 10 percent. The technology of cloud computing, by contrast, has the potential to strengthen the discipline of shared standards and allow for more flexible capacity-on-demand operating models.
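To put rough numbers on that waste, here is a back-of-the-envelope sketch; the fleet size, utilization and headroom figures are illustrative assumptions, not measurements from any particular firm.

```python
# Back-of-the-envelope consolidation math; every figure here is an
# illustrative assumption, not data from a real server estate.
servers = 1000               # like-kind boxes spread across several data centers
avg_utilization = 0.10       # each sized to idle at roughly 10% of capacity
pooled_peak_target = 0.70    # utilization a shared, capacity-on-demand farm might sustain

work_actually_done = servers * avg_utilization             # ~100 servers' worth of real load
hosts_if_pooled = work_actually_done / pooled_peak_target  # ~143 hosts if peaks are shared

print(f"Real workload: about {work_actually_done:.0f} servers' worth")
print(f"Hosts needed when peaks are pooled: about {hosts_if_pooled:.0f}")
print(f"Boxes that exist mostly as headroom: about {servers - hosts_if_pooled:.0f}")
```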
Tangentially, another unspoken expense is the secret, under-the-desk development labs and boxes, each with its own less-than-green carbon footprint. But hold that thought for a minute (it’s worth returning to).
Cloud computing and agility
Let’s get back to the basic building blocks of cloud computing: leveraging the technologies of the Internet across the three s’s (server, storage and switch devices) to gain greater levels of uniformity and standards conformity.
Ideally, the cloud is built increasingly on cheaper, throw-away, commodity hardware, along with select open-source components that allow for significantly less maintenance overhead. Cloud technology is also about decoupling each application’s awareness from the physical resources it uses. In the old days, the simple act of upgrading or swapping out a leased SAN array could turn into a risk-riddled, expensive, labor-intensive, half-year project that often included laying down parallel Fibre Channel cabling to the new SAN. In the cloud, once I/O performance requirements have been determined, storage must be truly virtual: it shouldn’t matter where the SAN is located, what brand is deployed or whether the storage is relocated. Pre-establishing key performance indicators, and then monitoring for the flexibility and speed the application truly needs, should be sufficient.
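As a minimal sketch of that decoupling, assuming AWS EBS and the boto3 library purely as an example (the size, IOPS figure and availability zone are placeholders), storage is requested by capacity and performance rather than by make, model or location:

```python
import boto3

# A sketch of storage requested by performance characteristics rather than by
# physical SAN details; size, IOPS and availability zone are placeholder values.
ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,                 # GiB of capacity the application needs
    VolumeType="io1",         # provisioned-IOPS volume class
    Iops=4000,                # the I/O requirement, stated up front as a KPI
)
print("Requested volume:", volume["VolumeId"])
# Where the bytes physically live, and what hardware backs them, is the
# provider's problem; monitoring against the stated KPI is ours.
```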
Lately, everyone has been talking about “agile” computing, with both a little and a big “A.” Agile computing is a lot like that children’s summer picnic game, the egg-and-spoon race. Remember how kids line up on the field with a raw egg on a large spoon and must race down to the sideline and back without scrambling the egg? That’s a lot like software agility. However, if every sprint requires an entire rework of the physical hardware infrastructure, that’s more like asking each eight-year-old to race a 50-pound server up and down the field. Not much fun there.
Agility simply doesn’t work for manually intensive infrastructure builds. What is needed for faster time-to-market is a simpler hardware design with very strict and consistent standards for tools, scaling, data flows, logical access and security. A number of vendors offer visual design tools that build out the logical connectivity once the architect hits the post-design “go” button, and the leading providers of public cloud computing offer exactly that value proposition.
Plus, some public cloud services don’t try to lock customers into multi-year obligations, preferring to charge only for what customers use and for how long they use it. In this way, development teams can focus on the software side of the Agile sprint, rapidly iterating quality code releases that are ready for testing and feedback.
SOX and good hygiene
A considerable number of business and tech execs complain about the burdens of the Sarbanes-Oxley IT controls. Emerging firms have even threatened to list publicly overseas to avoid the tighter U.S. public-company regulations. Yet in reality, do we really need a Congressional mandate to ensure our passwords are strong, private and regularly rotated? That our operating systems and anti-virus solutions are patched and current against the latest known threats, even the zero-day ones? That our backup schemes are appropriate and our ability to restore with confidence is proven? Or that our private data is encrypted and secure from hacking and intrusion?
In my opinion, the days of magnetic tape as a preferred storage medium are numbered. Tape has proven expensive, administratively burdensome in its retention and rotation schemes, and prone to restore failures. Alternatives such as cloud storage and virtual tape are leading the way with innovative online backup and storage solutions.
In the future, applications should be optimized to run in a hybrid cloud environment. In doing this, firms will establish clear and crisp technical approaches to privacy, encryption, security, compliance, identity management and logical access controls: the very elements the SOX folks were after all along with the IT controls.
It’s about time and money
Ultimately, the success of any IT department comes down to issues of time and money. Within a large institution’s billion-dollar IT budget, there should be plenty of room to allocate that spend more intelligently. IT, on one hand, is desperate to remain relevant and to delight its colleagues on the other side of the business aisle; on the other hand, IT often projects an arrogance that assumes the business side has nowhere else to go. Or is that comfortable complacency? Are the allocations or shadow bills truly value-priced? Would the products and services of the captive IT organization compare well against the open market in a competitive bid? Is time-to-market meeting the customer’s needs?
Plan for hybrid cloud computing
Here is my proposal: Get rid of the all-too-many developer labs and under-the-desk boxes in favor of leveraging a public cloud service for bespoke application development.
Remember to dispose of e-junk in environmentally sound ways and to keep track of the carbon-emission savings. Every server or switch that’s retired and disposed of saves more than $750 a year in power and cooling costs alone.
Most firms have a hard enough time inventory-tracking their data-center assets, let alone developer kits. By removing the development labs and moving to the cloud, developers immediately gain self-service, rapid spin-up and spin-down of their test environments, confidence that their code runs in a virtualized environment from the beginning, realistic web speeds as performance proxies, and standardized infrastructure (though they won’t get to pick the latest G7 box from HP or IBM simply because it’s cool).
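As a rough sketch of what that self-service loop could look like, again assuming AWS and boto3 only as an illustration (the image ID, instance type and tags are placeholders, not the firm’s actual standards):

```python
import boto3

# Illustrative sketch of a developer spinning a test box up and back down;
# the AMI ID, instance type and tag values are placeholder assumptions.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Spin up: one small, standardized instance for the duration of the sprint.
run = ec2.run_instances(
    ImageId="ami-12345678",          # the firm's standard, pre-hardened image
    InstanceType="t3.medium",        # nobody gets to pick the shiny G7 box
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "sprint-test"}],
    }],
)
instance_id = run["Instances"][0]["InstanceId"]
print("Test environment up:", instance_id)

# Spin down: when the sprint's testing is done, the environment disappears,
# and so does the bill.
ec2.terminate_instances(InstanceIds=[instance_id])
```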
The IT Operations area of the org chart benefits because it no longer has to pretend to prioritize the less mission-critical needs of the developers over the production systems. Yet a benchmark of expectations will nonetheless be established for when IT Operations deploys the firm’s private cloud for the eventual go-live.
Indeed, there is an emerging class of software solutions that lets a firm fully leverage this hybrid cloud model by allowing applications to run seamlessly under either private or public cloud services. Once deployed, options open up: short-term capacity-on-demand for jobs like month-end processing; new-product capacity without first committing to large infrastructure outlays; improved regional latency by deploying close to a geographic customer base where you don’t have a data center; and improved disaster-recovery resiliency.
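A minimal sketch of the month-end case, assuming the application already sits behind an AWS Auto Scaling group (the group name and instance counts are hypothetical):

```python
import boto3

# Hypothetical month-end burst: temporarily widen an existing Auto Scaling
# group, then shrink it back once the batch window closes.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

GROUP = "monthend-batch-asg"   # placeholder group name

def set_capacity(desired: int) -> None:
    """Ask the cloud for `desired` instances; the provider does the rest."""
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=GROUP,
        DesiredCapacity=desired,
        HonorCooldown=False,
    )

set_capacity(20)   # burst up for month-end processing
# ... run the batch ...
set_capacity(4)    # fall back to steady-state capacity afterwards
```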
Is all of this wishful thinking?
Maybe, but at least in this vision of a hybrid cloud future, everyone really does get along.
Larry Landau has worked in financial services for over 28 years and is currently SVP of Technology Operations at Thomson Reuters’ Markets Division. The views expressed here are his own and do not reflect the views of Thomson Reuters.