Uptime is a hot topic, especially in a landscape where downtime is so expensive. If you start to research uptime monitoring you’ll find resources that cover mostly external-facing features like ensuring your page is up, your shopping carts are working, and validating your user experience.
An equally important view is looking at your monitoring from an internal infrastructure perspective. Keeping your website up requires that you also keep your internal pathways healthy.
Selecting a monitoring provider can be daunting. What do you actually need to ensure your site and all its elements are truly UP? Uptime monitoring should be a mission-critical tool in your tech stack. Here’s Uptime.com’s list of the top 5 requirements you should expect your monitoring provider to meet.
The most basic level of monitoring is knowing your site is UP (returning an HTTP 200 OK status), but often you need to know where something is DOWN, and whether you’ve fixed the problem. Features that let you test from specific server locations can help you get back online faster. Your site could be UP according to a New York server and DOWN in Los Angeles, or DOWN in different locations for different reasons. The devil is in the details, and so is the solution.
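As a back-of-the-envelope illustration (a minimal sketch, not any provider’s actual probe code), a single-location HTTP check boils down to fetching the URL and classifying the response:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def classify(status):
    """Map an HTTP status code (or None for no response at all) to UP/DOWN."""
    return "UP" if status == 200 else "DOWN"

def check(url, timeout=5.0):
    """Fetch a URL once and return (verdict, status_code) for one location."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return classify(resp.status), resp.status
    except HTTPError as e:
        return classify(e.code), e.code   # server answered, but with an error status
    except URLError:
        return classify(None), None       # DNS failure, timeout, connection refused
```

A monitoring provider runs the equivalent of `check()` from many geographic locations and reports per-location verdicts, which is exactly why “UP in New York, DOWN in Los Angeles” is a possible (and useful) answer.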
Basic uptime monitoring checks like HTTP(S) are great for both public and private use cases, but other checks – like the ones that track your SSL certificate expiry – are important to run from private monitoring locations as well, especially if your service is deployed within a private network.
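As a rough illustration (a sketch using Python’s standard library, not Uptime.com’s implementation), an SSL expiry check connects over TLS, reads the certificate’s `notAfter` field, and counts the days remaining:

```python
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after):
    """Parse a certificate's 'notAfter' string, e.g. 'Jun  1 12:00:00 2030 GMT'."""
    return datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z").replace(
        tzinfo=timezone.utc
    )

def days_until_expiry(host, port=443):
    """Open a TLS connection and return how many days remain before the cert expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return (parse_not_after(cert["notAfter"]) - datetime.now(timezone.utc)).days
```

Run from a private probe server behind your firewall, the same logic can watch certificates on internal services that an external monitoring location could never reach.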
As site security concerns grow, so does the need to monitor. Take home security cameras, for example: you no longer just install them on the outside of your house; you monitor your home internally with a full alarm system. Network and performance monitoring is no different.
No two businesses’ monitoring needs are the same, yet all rely on internal infrastructure to perform and stay UP. Private location monitoring is a must that’s often overlooked by businesses and IT teams alike. Downtime caused by internal issues can be as debilitating and damaging as downtime affecting your public site elements. Web performance monitoring best practice demands coverage of your highest-activity areas.
A way to meet a variety of needs is to find a web monitoring partner that supports monitoring from global locations which cover your greatest user activity areas, as well as from private monitoring locations – private probe servers dedicated to your pathways and processes that live behind web application firewalls (WAFs) or load balancers.
External monitoring is important for your public-facing website(s), but you’ll also need the ability to monitor the internal infrastructure and system assets that make everything on your frontend possible.
Private monitoring locations allow you to run checks locally for a 360-degree view of your online infrastructure, both externally and behind the curtain: your sandboxes, employee portals, dev sites, and so on. Private monitoring is not only about uptime and performance; it adds a layer of system security at the server level.
Synthetic monitoring (also known as transaction check monitoring) is the monitoring equivalent of control testing. The aim is to isolate a process or function of your site and create a script that checks that specific piece over and over, reporting detailed information if any step of that script fails.
Synthetic monitoring is invaluable when it comes to confirming that your revenue-generating (or job-saving) pathways – checkouts, contact forms, subscriptions, etc. – are functioning. It is also crucial to configure synthetic monitoring that mimics your team’s actions, like login pages and administrative processes.
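At its core, a transaction check is an ordered script of steps where the first failure is reported by name. The runner and step names below are a hypothetical sketch, not any provider’s actual scripting format:

```python
def run_transaction(steps):
    """Run (name, callable) steps in order; stop and report the first failure.

    Returns (True, None) on success, or (False, failed_step_name).
    """
    for name, step in steps:
        try:
            step()
        except Exception:
            return False, name
    return True, None

def broken_step():
    raise RuntimeError("cart API down")   # simulated failure for illustration

# Example: a checkout-style pathway where the second step breaks.
steps = [
    ("load product page", lambda: None),
    ("add to cart", broken_step),
    ("complete checkout", lambda: None),
]
ok, failed = run_transaction(steps)   # → (False, "add to cart")
```

Real transaction monitors wrap each step around actual browser or HTTP actions, but the reporting principle is the same: you learn not just that the pathway is broken, but exactly which step broke it.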
Don’t compromise on feature capabilities. Ask your provider whether they support testing synthetic monitoring from your private monitoring locations. Why is this important? With both public and private pathways to monitor, testing a check before it is activated ensures it is functional for your location before it starts counting against your uptime.
Running from public and private monitoring locations lets you configure transaction checks for your external and internal-facing transaction pathways, all while receiving the console details, element response times, and step or workflow-specific alert information you need to optimize your internal-facing infrastructure.
Subaccounts aren’t just for segmenting large organizations: they can provide additional layers of reporting, limit user access (e.g., for vendors), and control permissions for users depending on their clearance level. Subaccounts can also be provisioned with additional login parameters like 2FA and SSO.
Subaccounts are about individuality. As we mentioned, each business is different. Subaccounts let you easily create and separate client accounts and monitoring needs, providing each with its own hierarchy of alerting, escalations, reporting, and permissions. Subaccount-specific private locations help with internal organization, team- (or site!-) specific reporting, and network monitoring security.
Something to keep a keen eye on when looking for a monitoring provider is what provisions are in place to help you run optimal monitoring as you scale your business. Subaccounts are a great asset.
Ever wonder how your site is performing through the eyes of your users? You should: there are many under-the-surface elements that matter. Load times, delayed navigation, and user confusion caused by your UI are items all businesses need to be aware of, but often aren’t. Knowing your site is technically UP doesn’t help you if the user experience itself is broken.
Real user monitoring (RUM) is the icing on the cake. Analytical intel on which browsers and devices your users navigate with, their geography, and the load times they experience is that sweet, real-time performance center you want layered into your overall monitoring metrics, providing the final layer of visibility into your site’s efficiency across your internal and external pathways.
For a bonus 6th requirement, grab a buddy. Look for a monitoring provider with a human element that can support you when you’re in the crunch of downtime. Over-automation isn’t the answer: downtime can sometimes stem from human error, and the return to uptime can stem from human response.
Good monitoring helps your team reduce their incident response time and works with you to investigate and resolve downtime. The goal is to activate a sustainable, holistic monitoring ecosystem that brings you peace of mind.