2018 saw modern enterprises adopting different approaches and tools to predict and eliminate IT outages using data collection and analytics — from modeling and performance data to log, event and application data. Advancements in machine learning and AI will help DevOps and IT Ops push boundaries in 2019, enabling them to collect streaming data in real time and derive insights that optimize IT performance. Here are some of the trends we are observing for 2019.
Blurring Lines: Managing Multicloud Infrastructure Seamlessly
With the growing migration toward cloud over the last decade, most organizations running on premises were able to extend their resources into the public cloud with the help of physical infrastructure vendors whose features promised fully transparent hybrid cloud infrastructure. That promise hasn't been completely fulfilled, but some offerings hint at what's to come: Google Kubernetes Engine and Nutanix Prism and Beam are examples of vendors responding to enterprise customers' demand for a seamless management experience across both on-premises and public cloud infrastructure. This will pave the way for DevOps to deploy resources anywhere without the fear of increasing operational complexity in order to manage this matrix of multicloud and hyperconverged infrastructure.
Growing Focus on Context and Performance
Some of the key drivers for ITOM tools in recent history focused on visualizing service health and delivering IT service assurance. But given the recent DevOps disruption and the growing challenge of scaling ephemeral services distributed across the infrastructure, performance monitoring for the entire IT stack has become highly sought after. APM vendors tried and failed to deliver end-to-end visibility for users through application-focused monitoring, because DevOps teams need context across the full IT stack. Immediate, self-service performance monitoring of the entire IT landscape, with an eye toward providing context around IT events, will replace legacy goals such as service assurance and visibility. The ability to analyze data across domains and uncover trends can provide predictive benefits for companies that deploy machine learning across their entire environment.
Adding Intelligence: Empowering IT Ops With Data Breadth
The “unified” monitoring tools that initially focused on a range of monitoring targets are now focused on adding a range of platform capabilities, and using machine learning/AI to gain a deep understanding of the IT infrastructure has become a popular one. IT Ops is realizing that their tools have limited capabilities and need an expansive breadth of data to create meaningful machine learning models. This means that adding intelligence on top of logs or events alone is not sufficient to create trustworthy automation. For robust self-healing, these tools must be able to stream all data types from different sources to provide enough context to be trustworthy. It is clear that IT Ops teams need cloud-based tools with the scalability to combine events, model data, metrics, logs, etc. Only tools that can intelligently measure performance from all forms of collection will become viable mechanisms for autonomous data centers to flourish.
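To make the cross-signal idea concrete, here is a minimal sketch of why combining data types beats analyzing logs or metrics alone. The data values are hypothetical; a real pipeline would stream far more signal types, but the principle is the same: flag an interval as anomalous only when independent streams deviate together, which yields fewer false positives than thresholding either stream in isolation.

```python
from statistics import mean, stdev

# Hypothetical per-minute telemetry: CPU utilization samples (a metric
# stream) and error counts parsed from logs (an event stream).
cpu = [0.41, 0.39, 0.44, 0.40, 0.43, 0.92, 0.41]
errors = [2, 1, 3, 2, 2, 40, 1]

def zscores(series):
    """Standardize a series; a large z-score marks an unusual interval."""
    m, s = mean(series), stdev(series)
    return [(x - m) / s for x in series]

# Fuse both signals: an interval is anomalous only when the metric AND
# the log-derived event count deviate together.
anomalies = [
    i for i, (zc, ze) in enumerate(zip(zscores(cpu), zscores(errors)))
    if zc > 2 and ze > 2
]
print(anomalies)  # → [5]
```

Here only interval 5, where the CPU spike coincides with an error burst, is flagged; a spike in either stream alone would be ignored for lack of corroborating context.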