What is cloud computing?
Cloud computing is all the rage in the media right now. Often it is spoken of as something new, different and exciting, even though service bureaus were in use in the 1960s, hosting and managed services providers appeared in the mid-1970s and application service providers have been with us since the late 1980s. What’s really new is the delivery and consumption model rather than a new type of technology.
Cloud computing means the delivery of IT resources (an application, a system service, an application framework or a complete server) over the Internet. This delivery model enables convenient, self-service, pay-as-you-go, on-demand network access to a shared pool of configurable computing resources. These resources can expand or contract as needed to support the customer’s requirements.
Although this is only a new delivery and consumption model, it holds out the promise of changing how organizations acquire computing resources and deploy some or all of their workloads.
Why is this causing trouble?
Quite often the configuration and performance management tools being used today are focused on helping companies manage applications running on one or more physical machines. The presumption is that the machine is in the company’s own data center and running applications and tools purchased by the company. While that approach worked quite well for monolithic applications developed for mainframes or mid-range systems, it no longer works well for modern applications.
Today’s applications are often designed as a collection of multiple application services that have been harnessed together in a way that looks to the end user like a single application. Multiple instances of each application service may be in use in different data centers to enhance performance, scalability or reliability. These application services may be hosted on physical, virtual or cloud-based systems. Management tools designed for a single physical system are clearly outmatched by such a complex processing environment.
If we tack on to this complex mess the a la carte pricing model used by most cloud service providers, it becomes very difficult for a company’s IT department to track the cost of a cloud-based service offering. Some service providers have a separate pricing level for each function, including the following:
- The number and power of CPUs put to use
- The amount of memory made available to those CPUs
- The amount and type of storage made available to those CPUs
- How much network bandwidth the workload uses, and even how many gigabytes of data are transmitted and received during the normal course of using the workload
- Only God knows what other fees and charges will be tacked on as well.
It is not at all uncommon for companies to get an unpleasant surprise when the monthly bill arrives.
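To see how quickly those separate meters add up, here is a minimal sketch, in Python, of an a la carte bill. Every rate and usage figure below is hypothetical, invented purely for illustration; real providers publish their own rate cards and billing rules.

```python
# Minimal sketch of an a la carte monthly bill. All rates and usage
# figures are hypothetical, chosen only to show how per-function
# charges accumulate.

HYPOTHETICAL_RATES = {
    "vcpu_hour": 0.05,         # per vCPU per hour
    "memory_gb_hour": 0.01,    # per GB of RAM per hour
    "storage_gb_month": 0.10,  # per GB of storage per month
    "transfer_gb": 0.09,       # per GB transmitted or received
}

def estimate_monthly_bill(vcpus, memory_gb, storage_gb, transfer_gb,
                          hours=730, rates=HYPOTHETICAL_RATES):
    """Estimate one month's charges for a single workload."""
    return (vcpus * hours * rates["vcpu_hour"]
            + memory_gb * hours * rates["memory_gb_hour"]
            + storage_gb * rates["storage_gb_month"]
            + transfer_gb * rates["transfer_gb"])

# A modest workload: 4 vCPUs, 16 GB of RAM, 500 GB of storage and
# 2 TB of monthly traffic.
bill = estimate_monthly_bill(vcpus=4, memory_gb=16,
                             storage_gb=500, transfer_gb=2048)
print(f"Estimated monthly cost: ${bill:,.2f}")  # -> $497.12
```

Even this toy model runs four meters at once. Add per-request charges, support tiers and regional price differences, and predicting the invoice by hand becomes nearly impossible.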
What's needed to solve this problem?
It is important to remember that companies need to be able to manage all of the computing resources in use. The tool must be easy to use and present information in a useful fashion. The solution must be able to see and track all of those resources regardless of:
- Where the resource is located
- What type of resource it is (system, memory, storage, network, etc.)
- Whether it is physical, virtual or cloud-based
- The operating system or application being used
The tool must then take the next step: use that usage information to estimate, in real time, the costs the company is incurring. It must be able to warn administrators when costs approach important thresholds.
“No surprises” has to be built into the design of the perfect solution.
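As a rough illustration of what “no surprises” could look like in practice, the sketch below is a hypothetical cost monitor, not any vendor’s actual product. Every resource is recorded with its location, type, platform and operating system, and a warning fires when estimated spend approaches a budget threshold; all names and numbers are invented.

```python
# Hypothetical sketch of a "no surprises" cost monitor: every resource
# is tracked regardless of location, type, platform or OS, and an
# alert fires when estimated spend approaches a budget threshold.

from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    location: str       # data center, region or provider
    kind: str           # system, memory, storage, network, ...
    platform: str       # physical, virtual or cloud
    os: str
    hourly_cost: float  # estimated cost per hour of use

class CostMonitor:
    def __init__(self, monthly_budget, warn_fraction=0.8):
        self.monthly_budget = monthly_budget
        self.warn_fraction = warn_fraction
        self.spend = 0.0

    def record_usage(self, resource, hours):
        """Accumulate estimated spend; warn when near the threshold."""
        self.spend += resource.hourly_cost * hours
        if self.spend >= self.warn_fraction * self.monthly_budget:
            print(f"WARNING: estimated spend ${self.spend:,.2f} is "
                  f"{self.spend / self.monthly_budget:.0%} of budget")

# Usage: physical and cloud resources share one record format.
monitor = CostMonitor(monthly_budget=10_000)
web = Resource("web-01", "us-east cloud", "system", "cloud", "Linux", 0.45)
db = Resource("db-01", "on-prem DC", "system", "physical", "Linux", 0.30)
monitor.record_usage(web, hours=18_000)  # many instances' hours pooled
monitor.record_usage(db, hours=1_000)
```

The point of the design is that physical, virtual and cloud resources share a single record format, so spend can be rolled up and checked against the budget continuously rather than discovered on the monthly bill.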
Lots of noise in the environment
Finding this perfect tool is increasingly challenging. There is an amazing amount of “noise” in the environment. Many suppliers of configuration and performance management tools have jumped into the market. Sometimes, all they have done is put new lipstick on an older tool.
Some suppliers have taken an older management framework and simply added a module or two that allows data gathering from virtual and, perhaps, cloud environments. They then make broad claims that they are offering the perfect tool. It is often not at all clear whether the tool tracks resource utilization and end-user cost in real time or merely estimates them.
Other suppliers have developed tools that only focus on virtual environments or cloud environments and seem to have forgotten that companies are using a broad mix of technologies to solve their business problems.
Still other suppliers have developed tools that only work in a single data center or for applications running in a single operating environment. These suppliers seem not to understand that today’s data center looks like a computer museum – applications are in use today that were developed in the 1960s, 1970s, 1980s, 1990s and, yes, even built recently.
Although companies need a comprehensive tool that can provide a clear view of the whole forest, these suppliers offer only a machete and a paper map. They tell their customers, “You have all the tools you need to survey your entire environment.”
Things to consider
Here are a few things that decision-makers must consider while seeking a comprehensive solution:
- Does the tool understand today’s complex environment, and does it work fast enough to keep up with an ever-changing computing environment?
- Is the product new or is it really a few tweaks made so that the supplier can make broad claims without really dealing with today’s issues?
- Can the supplier offer real customer references? Do these customers agree that the product is the best thing since sliced bread?
- Is support available worldwide?
- Does the product have the ability to assess the real impact of a single specific service?
- Can it manage events and resources in real time?
- Does it offer sophisticated analytics to make nearly invisible problems visible?
- Does it support physical, virtual and cloud resources?
If the answer to any of these questions is “no,” then it would be wise to continue looking for that ideal solution.