Database monitoring protects the performance and uptime of your infrastructure. But are you really getting the most out of your database monitoring software? We put together a few best practices to help you get more ROI from your software.
Monitor the database’s surroundings at multiple levels
Even if your sole responsibility is to monitor and maintain databases, think broadly about the scope of your work and what it means to monitor what’s happening in, out of and around the database.
For example, monitor a Microsoft SQL Server instance at several levels: server-level metrics, database-level metrics, and query-level results, where you execute a query against the server and monitor what it returns. With the first two levels, you’re pulling and monitoring metrics the server exposes. But at the third level, you can query data that SQL Server decided isn’t important but that lives on the hard drive. In essence, you’re identifying the data that matters to your application and organization, which lets you monitor the database and its application more fully.
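As a sketch of what a query-level check might look like, here’s a minimal Nagios-style plugin in Python. The thresholds, table name, and query are hypothetical examples; in a real plugin, the value would come from a database driver such as pyodbc:

```python
# Minimal sketch of a query-level check in the Nagios plugin style.
# Nagios interprets these exit codes as service states.
OK, WARNING, CRITICAL = 0, 1, 2

def evaluate(value, warn_threshold, crit_threshold):
    """Map a value pulled from the database onto a Nagios state."""
    if value >= crit_threshold:
        return CRITICAL
    if value >= warn_threshold:
        return WARNING
    return OK

# In a real plugin, the value would come from a query, e.g. via pyodbc:
#   row = cursor.execute("SELECT COUNT(*) FROM orders WHERE status = 'stuck'").fetchone()
#   value = row[0]
value = 7  # simulated query result for illustration
print(evaluate(value, warn_threshold=5, crit_threshold=20))  # prints 1 (WARNING)
```

A real plugin would also print a human-readable status line before exiting with the code, which is what Nagios displays as the check output.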
Utilization changes are another important thing to monitor because they affect database performance by impacting disk capacity and network congestion. By monitoring a fuller picture, such as CPU and disk utilization or how much bandwidth is coming in and out of the network card, you get better insight into database latency and can find answers to problems more quickly.
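Many utilization metrics, like network bytes or disk I/O, are exposed as ever-growing counters, so a monitor has to turn two samples into a rate before it can apply a threshold. A minimal sketch, with the counter values invented for illustration:

```python
def rate_per_second(prev_count, curr_count, interval_s):
    """Convert two samples of a monotonically increasing counter into a per-second rate."""
    return (curr_count - prev_count) / interval_s

# Hypothetical byte counters from a network interface, sampled 60 seconds apart.
prev_bytes, curr_bytes = 1_200_000_000, 1_500_000_000
mb_per_s = rate_per_second(prev_bytes, curr_bytes, 60) / 1_000_000
print(round(mb_per_s, 1))  # prints 5.0
```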
Group databases for better intelligence
Get better insight into business processes by grouping databases in your monitoring software. Let’s say you have multiple databases serving a website: group them as a business process instead of monitoring them individually. Next, designate the importance level of each part of the process so that you’re appropriately notified if critical pieces go down. For example, you may want an alert when the primary database slows down, but not when a failover database does (as long as the primary is still up).
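One way to express that logic is a small severity function over the group: page only when the primary is unhealthy, and downgrade failover trouble to a non-paging warning. This is a sketch of the idea, not a Nagios API; the role names and severities are assumptions:

```python
def group_severity(primary_healthy, failovers_healthy):
    """Decide how loudly to alert for a business-process group of databases."""
    if not primary_healthy:
        return "critical"   # the business process itself is at risk: page someone
    if not all(failovers_healthy):
        return "warning"    # redundancy is degraded, but the site is still up
    return "ok"

print(group_severity(True, [True, False]))   # prints warning
print(group_severity(False, [True, True]))   # prints critical
```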
Taking the time to set up these groups properly pays off when monitoring database clusters: it gives you a clearer view of exactly what’s happening on the network and ensures you’re only alerted to truly important information.
Build in another check on backups and SLAs
Many database servers have automatic backups rolled into the software, and you’re supposed to be alerted if a backup fails. But in our experience here at Nagios, that’s not always the case. Sometimes the alert fails to send, so a client will go months without realizing they’re not backing up. Use a third-party monitoring service like Nagios XI to monitor backups so that if something goes wrong, you have a sure-fire way of getting notified.
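An independent backup check often reduces to a freshness test: how old is the newest backup? A minimal sketch of that logic, with the 26-hour window chosen as an arbitrary example for a nightly backup schedule:

```python
from datetime import datetime, timedelta

def backup_status(last_backup, now, max_age=timedelta(hours=26)):
    """Flag the backup as stale if it is older than the allowed window."""
    return "OK" if now - last_backup <= max_age else "CRITICAL"

now = datetime(2024, 1, 10, 8, 0)
print(backup_status(datetime(2024, 1, 10, 2, 0), now))  # prints OK
print(backup_status(datetime(2024, 1, 7, 2, 0), now))   # prints CRITICAL
```

In practice, `last_backup` might come from a backup file’s modification time or a query against the server’s backup history.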
Monitoring with an outside service also helps hold vendors to their SLAs. Rather than rely on the reports from the vendor’s own software, get an objective view of how everything is performing on the network. An objective check on performance is also helpful if you’re considering changing database vendors; perhaps a new vendor tested your database performance and said it’s slow. What’s the definition of slow? With independent monitoring software, you have a trusted view of your databases and everything that’s happening on the network.
Related Reading: 5 Tips on Building a Business Case for Nagios XI
Alert for more than the basics
Beyond the standard alerts that are set up at installation, make sure alerts are flagging things that are especially relevant to your organization. For example, if only a few people interact with a customer database, set up alerts so that if the number of transactions per second suddenly increases, you get a notification. If a type of query is taking longer than usual, get an alert so you can find out immediately what’s causing the delay.
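The transactions-per-second example can be framed as a simple baseline comparison: alert when the current rate sits far above the recent average. A sketch using a standard-deviation band, with the sample numbers invented for illustration:

```python
from statistics import mean, stdev

def is_anomalous(history, current, k=3.0):
    """True if `current` is more than k standard deviations above the baseline."""
    return current > mean(history) + k * stdev(history)

baseline_tps = [10, 12, 11, 9, 10, 11]   # recent transactions per second
print(is_anomalous(baseline_tps, 40))    # prints True
print(is_anomalous(baseline_tps, 12))    # prints False
```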
Monitoring specific database queries can also assist other areas of the organization, like sales, inventory, and accounting. Consider running queries on things such as billing statements for accounting managers or inventory levels for a warehouse. This cross-functional work strengthens IT’s value within the organization and demonstrates how IT can be a strategic partner throughout it.
Predict storage needs and index performance
Take advantage of your database monitoring software’s predictive and capacity planning tools. These tools can take your performance data and predict what utilization will look like based on current trends: What’s the file size of your database? When might you hit log data storage limits? If you know when a database will hit capacity, you can take proactive steps to avoid costly downtime.
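A basic capacity projection needs only a growth trend fitted to recent size samples. A sketch using a least-squares slope over equally spaced measurements; the sizes below are hypothetical daily samples in GB:

```python
def growth_per_interval(sizes):
    """Least-squares slope of size samples taken at equal intervals."""
    n = len(sizes)
    x_mean = (n - 1) / 2
    y_mean = sum(sizes) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(sizes))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

def intervals_until_full(sizes, capacity):
    """Project how many intervals remain before the size reaches capacity."""
    rate = growth_per_interval(sizes)
    if rate <= 0:
        return None  # not growing; no projected exhaustion
    return (capacity - sizes[-1]) / rate

daily_gb = [100, 102, 104, 106]             # database size over four days
print(intervals_until_full(daily_gb, 200))  # prints 47.0
```

Real capacity-planning tools fit fancier models, but even this linear projection turns “storage is creeping up” into a concrete date you can act on.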
Storage and space are issues that can creep up on anyone. Use predictive tools to monitor what’s going on, get alerted about future issues, and stop the small problems before they become big ones.
Broaden your scope for database monitoring software
Database monitoring may seem like a very narrow function, but its impact on the rest of your IT infrastructure makes it extremely important. Get the most out of your monitoring software by tracking what’s happening around the databases, improving alerts, and ultimately reducing performance issues and downtime.