We’re quickly approaching an important inflection point in the cloud migration timeline. 451 Research estimates that by the end of next year, company-owned data centers will dip below 50% of primary IT environments as organizations move their IT investments to the cloud. They’re deciding overwhelmingly that they no longer want pets running their critical applications; they want herds of cattle. And, preferably, they want those herds kept far, far away where they can’t be smelled.
This is despite the volume of thought-leadership posts written lately claiming that cloud hasn’t lived up to its hype, that IT shops will never move completely to cloud, or that cloud spending is out of control. The facts on the ground indicate that while certain legacy applications will remain on owned infrastructure for the foreseeable future, the scale and agility of cloud platforms offer competitive and operational advantages that most organizations cannot ignore. The herd is moving.
You know this already, of course, which is why you have a team of experts slavishly dedicated to mapping out the architecture of the next-generation cloud platform that will keep your business at the leading edge of … whatever it is you do. But is your team baking in observability everywhere so that you’ll always know how your investments are performing and get warnings before unexpected issues impact your business?
Cloud isn’t just hosted VMs anymore. Make sure you’re tracking everything.
Once upon a time, migrating to cloud was just you paying someone else to run a hypervisor for you and give you a bunch of VMs. That’s, um, somewhat quaint now. Have you taken a look lately at the sheer size of the cloud service catalogs at AWS, Azure, and GCP? Go ahead and peek. I’ll wait.
Cloud isn’t just hosted compute, storage and networking anymore. It’s databases and data analytics platforms. It’s artificial intelligence and machine learning. It’s containers and serverless functions. (Serverless, in particular, is steadily growing in popularity. Today, 42% of DevOps Pulse respondents use serverless technology, up 12% from 2017.) And all of these can interact in interesting, emergent and unpredictable ways. Maintaining visibility across all the capabilities of your infrastructure as you develop and deploy new applications becomes a critical requirement of any new cloud architecture.
Cloud services start cheap, but they get expensive fast if you’re not paying attention.
In our household, we cut the cord on cable a number of years ago. We finally got tired of scrolling through hundreds of channels we never watched, replacing tuner boxes and DVRs that never worked right, and endlessly searching for something — anything — good to watch.
So, we halved our cable bill by eliminating TV service and opting for a fatter internet pipe and HD antennas instead. Then we added Netflix and Hulu so we could watch their catalogs of shows on demand. Then we added HBO NOW so we could watch, I don’t even remember, some show we apparently had to watch. Then we added ESPN+ because we live in Austin, and that means Longhorn football. We have a kindergartner, which means we’re constitutionally required to get Disney+ whenever that comes out. Captain Picard is getting his own show? I guess we’re getting CBS All Access. Suddenly, it doesn’t feel like we’re saving much money — and we still spent an hour last night trying to find something to watch.
Cloud can be a little like that. Give your developers access to an infinitely scalable database and compute pool, add time, and watch what happens. Your team needs to keep its eyes on the land and its hands on the reins so the herd doesn’t run amok (and run up your bill). This is why optimizing existing cloud costs continues to be the top cloud initiative identified by IT practitioners in the RightScale State of the Cloud report for the third year in a row.
A new release of the Google Cloud Platform ZenPack.
Many Zenoss clients are on the leading edge of industry cloud migration projects, so we’re always working to give them the tools they need to enjoy success. As stated above, this means ensuring that their IT Ops and DevOps teams always maintain visibility not only into how their cloud infrastructure is operating across the board but also into how much it is likely to cost them. And we constantly iterate on our ZenPacks for AWS, Azure and Google Cloud to ensure that our clients can maintain observability across whatever new cloud architectures they’ve chosen to deploy.
Our latest release of the Google Cloud Platform ZenPack continues this trend. Zenoss users running this ZenPack have been able to monitor Google compute instances and Kubernetes clusters for some time, but this latest release offers new capabilities in four key areas:
- It now allows you to monitor the status and performance of Google Cloud Functions (Google’s serverless compute offering) in all regions, providing insight via invocations (e.g., total, OK, timeouts and errors), execution times, active instances, memory utilization, and network egress. (A rough sketch of pulling these invocation metrics follows this list.)
- It supports labels (i.e., tags), which are key-value pairs that allow you to organize your GCP resources. You can attach one or many labels to a resource and then use them to identify resources used by specific teams, cost centers, applications, microservices, stages, projects or whatever else you can dream up. (A sketch of attaching labels programmatically also follows this list.)
- It provides the ability to view your detailed cloud spend by label, service and region.
- It adds monitoring support for Dataflow, GCP’s fully managed service for transforming and enriching data in stream (real-time) and batch (historical) modes. Our ZenPack provides insight into the performance of these Dataflow jobs, monitoring memory use, disk utilization, CPU utilization, and overall system lag.
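To make the Cloud Functions metrics above a little more concrete, here’s roughly what pulling invocation counts from the Cloud Monitoring API looks like — the same metric surface a monitoring tool draws on. This is a minimal sketch using Google’s google-cloud-monitoring Python client; the project ID is hypothetical. Swapping the filter to a Dataflow metric such as dataflow.googleapis.com/job/system_lag gives the equivalent view of the Dataflow jobs described in the last bullet.

```python
import time

from google.cloud import monitoring_v3

PROJECT_NAME = "projects/my-project"  # hypothetical project ID

client = monitoring_v3.MetricServiceClient()

# Look back over the last hour.
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)

# execution_count carries a "status" label (ok, timeout, error, ...),
# which maps onto the total/OK/timeouts/errors breakdown described above.
results = client.list_time_series(
    request={
        "name": PROJECT_NAME,
        "filter": 'metric.type = "cloudfunctions.googleapis.com/function/execution_count"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    function = series.resource.labels["function_name"]
    status = series.metric.labels.get("status", "unknown")
    total = sum(point.value.int64_value for point in series.points)
    print(f"{function} [{status}]: {total} invocations")
```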
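And here’s how attaching labels to a resource might look programmatically — a minimal sketch using the google-cloud-compute Python client against a Compute Engine instance. The project, zone, instance and label names are all hypothetical, and your own tooling (gcloud, Terraform, the console) may be a better fit for doing this at scale.

```python
from google.cloud import compute_v1

# Hypothetical identifiers -- substitute your own project/zone/instance.
PROJECT, ZONE, INSTANCE = "my-project", "us-central1-a", "my-instance"

client = compute_v1.InstancesClient()

# Labels are guarded by a fingerprint to prevent conflicting concurrent
# updates, so read the instance's current state first.
instance = client.get(project=PROJECT, zone=ZONE, instance=INSTANCE)

# Merge new key-value pairs with whatever labels already exist.
labels = dict(instance.labels)
labels.update({"team": "payments", "cost-center": "cc-1234", "env": "prod"})

client.set_labels(
    project=PROJECT,
    zone=ZONE,
    instance=INSTANCE,
    instances_set_labels_request_resource=compute_v1.InstancesSetLabelsRequest(
        label_fingerprint=instance.label_fingerprint,
        labels=labels,
    ),
)
```

Once labels like these are in place, they become the pivot for both the organizational views and the per-label cloud spend breakdowns described above.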
If you’d like to see any of these capabilities in action, please ask us for a demo!