So… you have created a matrix of data vs. target vs. tool, where a target is anything being monitored: an application, a server, a switch, a storage device, or even a whole rack, among others. Congratulations, you have a document! Now you need to show that you can make it work in practice. I am going to cover two things that the document must be:
Having a matrix of standards assists the administrative team in offer design, but what if the matrix were published and made available to requestors? As it stands, the document is written at a level that is understandable only by technical users. Depending on the environmental variability (typically introduced by a lack of standards, or a lack of adherence to them), there may be hundreds, possibly thousands, of discrete products, types, and versions. The governance group needs to elevate the language of the matrix so that it is understandable by requestors who are non-technical.
Although outside the scope of this post, it would be prudent to tackle the standards for the most prolific variants first, according to the 80:20 rule, and tackle or remove the remainder later. This will significantly reduce the amount of administrative effort required, not just within the monitoring function but within the whole of IT Operations, both within functional areas and between them.
A one-size-fits-all solution is unlikely to meet the needs of a large enterprise (although it is probably a good fit for SMBs or niche players offering single services). We need to create enough variability in terms of function and cost, but not so much that the offerings become too complex to understand.
In other words, the governance group and the technical teams responsible for monitoring need to 'hide' as much as possible of the technical detail and environmental variability (specific applications, and application and platform versions), even though this detail is already captured in the matrix and will surface during actual deployment. A suggestion would be that the offerings look similar to those below, although they do include some technical detail, with each of the classes applying to each layer.
Essentially this becomes a pick-list, or catalog, of monitoring offerings that are available for consumption by requestors and can be combined in any way they choose. Because there is a known amount of effort, and therefore cost, associated with each offering, this makes it even more attractive to the requestor, as the cost and time become forecastable. The mere fact that it becomes so easy will affect the adoption of the standards.
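To make the pick-list idea concrete, here is a minimal sketch of such a catalog and the forecast it enables. Everything here is illustrative: the offering names, effort figures, and cost units are invented for the example, not taken from any real matrix.

```python
# A monitoring catalog as a simple pick-list. Because each offering
# carries a known effort and cost, a requestor's selection can be
# priced and scheduled up front. All names and numbers are illustrative.

CATALOG = {
    "bronze-server": {"description": "OS up/down and basic resource checks",
                      "effort_hours": 2, "cost": 150},
    "silver-server": {"description": "Bronze plus service/process checks",
                      "effort_hours": 6, "cost": 400},
    "gold-database": {"description": "Custom end-to-end transaction checks",
                      "effort_hours": 20, "cost": 1500},
}

def forecast(selection):
    """Return (total effort in hours, total cost) for chosen offerings."""
    chosen = [CATALOG[name] for name in selection]
    return (sum(o["effort_hours"] for o in chosen),
            sum(o["cost"] for o in chosen))

effort, cost = forecast(["bronze-server", "gold-database"])
print(f"Forecast: {effort} hours, {cost} currency units")
# Forecast: 22 hours, 1650 currency units
```

Note that the requestor never sees the tool-specific configuration behind each entry; they only pick names and get a forecast, which is exactly the 'hiding' of technical detail described above.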
If you are concerned that the creation of a catalog is in some way an onerous task requiring technology, think of a clipboard holding a piece of paper with check boxes, or, if you insist on being ultra-modern, a spreadsheet 🙂 Either of these is sufficient to start. The beauty is that once requestors have chosen, you are taking the first steps towards including monitoring in a wider 'service' catalog, embedding appropriate monitoring in decisions made at the business level, with timely inclusion of monitoring in the service design.
The above matrix makes apparent that higher-level offerings are dependent on the offerings beneath them, e.g. silver includes all of bronze. But it is worth noting that for full end-to-end monitoring there are layers within layers: if systems monitoring includes elements of platform such as a database, then it must adopt the standards by which servers are monitored, and that in turn relies on the monitoring of the infrastructure, whether virtual or physical. There are many variants of this which should be self-evident, so I will not dwell on them here.
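These two kinds of dependency, tiers including tiers and layers relying on layers, can both be expressed as simple dependency graphs that a request expands through. The tier names, layer names, and dependency edges below are illustrative assumptions, not a prescribed model.

```python
# Two illustrative dependency graphs: higher tiers include lower ones,
# and each layer relies on the layer beneath it.

INCLUDES = {
    "gold": ["silver"],
    "silver": ["bronze"],
    "bronze": [],
}

LAYER_DEPENDS_ON = {
    "application": ["platform"],
    "platform": ["server"],        # e.g. a database relies on server monitoring
    "server": ["infrastructure"],  # which relies on infrastructure monitoring
    "infrastructure": [],
}

def expand(start, graph):
    """Walk a dependency graph, returning everything the start node pulls in."""
    seen, stack = [], [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.append(node)
            stack.extend(graph[node])
    return seen

print(expand("gold", INCLUDES))             # a gold request pulls in silver and bronze
print(expand("platform", LAYER_DEPENDS_ON)) # platform monitoring needs the layers below it
```

The point of the sketch is that a requestor asks for one thing ("gold", or "platform monitoring") and the expansion to everything it depends on happens behind the catalog, not in the requestor's head.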
As an aside, gold offerings in this example would be the most complex to implement and administer, but if these solutions are frequently requested then, through familiarity, they should become easier to maintain and could drop into the silver offerings. The same is true for silver becoming bronze. This is exactly how industry leaders operate: AWS create custom solutions based on the specific needs of individual customers, and once they become familiar with a solution they make it publicly available to be requested as standard. This frees them, and you, up to produce new 'gold' offerings, increasing your portfolio and your ability to serve your customers effectively.
So what is the governance group on the hook for? I discussed that they are responsible for setting out the available data in Governance, Urgh! As you see here, they are the architects of a monitoring catalog and understand the strategy of the monitoring function, including the roadmap. They have two more roles: the first, as alluded to, is that they are the single point of request for monitoring, and the second, closely-linked role is that of a combined demand/change/release manager.
So we now have a single contact from which a requestor can get an authoritative response, the information is available as a simple catalog built on defined standards that are hidden from the requestor, and everyone knows what they are doing. Cool! We are finished, right? We can now onboard applications into monitoring. We COULD. But should we? We have made it easy for the requestor. But do you remember The Problem with Technologists? They are selfish; ask yourself what is in it for me. You want it to be easy for yourself, you have the standards, the configurations are known, so why would you not automate it? Read Let your Tools take the Strain to find out how.