Sustainable On-Call
I saw a tweet by Charity Majors that got me thinking–
Yes, yes. On call sucks and can destroy your life. I know this. Bored now. On call is a fact of life for anyone who cares about developing high quality software for the long run. So how can we make it not suck?
— Charity Majors (@mipsytipsy) January 31, 2018
On-call is stressful, and it’s overwhelmingly the negative type of stress, distress, rather than the positive type, eustress. I’ve written about the stresses of on-call before–urgency, uncertainty, duration, and expectations. We all know that distress can contribute to burnout, but individually those four factors are fairly benign. Expectations are part of any job. People on oil rigs work 12+ hours a day for two weeks straight. A number of jobs, such as first responder or doctor, involve uncertain and urgent tasks. If these components can be managed, why then is on-call so miserable?
Digging deeper, I came to the conclusion that the worst parts of on-call revolve around frequency and volume. I believe everything we do to improve on-call is really an attempt to attack these two causes. Why do these factors weigh so heavily on on-call, and how can they be mitigated?
The Hidden Costs of On-Call: False Alarms
Overcoming Monitoring Alarm Flood
You’ve most likely had 10, 20, 50, or even more alerts hit your inbox or pager in a short span of time or all at once. What do you call this situation?
It turns out, there’s a name for this influx of alerts–“alarm flood”.
“Alarm flood” originates in the power and process industries, but the concept can be applied to any industry. Alarm flood deals with the interaction between humans and computers–specifically more automated alerts than the human element can process, interpret, and correctly respond to. It is the result of multiple small changes, redesigns, and additions to a system over time: Why would you not want to “let the operator know” that a system has changed states?
Alarm flood has been discussed in those industries for at least the past 20 years, but it was formally defined in 2009 in the ANSI/ISA-18.2 Alarm Management standard as 10 or more alarms in any 10-minute period, per operator.
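
To make that definition concrete, here’s a minimal sketch in Python (the function and constant names are mine, not from the standard or any particular alerting tool) that flags when a single operator’s stream of alert timestamps crosses the 10-alarms-in-10-minutes threshold, using a trailing window:

    from collections import deque
    from datetime import timedelta

    FLOOD_THRESHOLD = 10                  # 10 or more alarms...
    FLOOD_WINDOW = timedelta(minutes=10)  # ...in any 10-minute period, per operator

    def flood_moments(alarm_times):
        """Given one operator's alarm datetimes in ascending order, yield each
        timestamp at which the flood condition is met."""
        window = deque()
        for t in alarm_times:
            window.append(t)
            # Drop alarms that have fallen out of the trailing 10-minute window.
            while t - window[0] > FLOOD_WINDOW:
                window.popleft()
            if len(window) >= FLOOD_THRESHOLD:
                yield t

Every timestamp this yields is a moment when, by the standard’s definition, the operator is receiving more alarms than they can reasonably process.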
In tech, the “operator” is the person on call. In a smaller operation with only one or two engineers on call at any given time, any significant event could turn into a flood, making it difficult for the engineer to identify and address the root causes. How do we fix this flood state and provide better information to our engineers?
Hosted vs Self-Hosted Monitoring
Traditionally, to monitor your servers, your company would set up in-house monitoring with something like Nagios, and that was how things were done.
With the increase in cloud services came an increase in hosted monitoring. If you opt to run on a platform like Heroku or AWS Lambda, you won’t even have a datacenter of your own to self-host monitoring in!
Using virtualenv and PYTHONPATH with Datadog
Datadog is a great service I’ve used for monitoring. Since the agent is Python-based, it’s very extensible through a collection of pip-installable libraries, but the documentation is limited on how to handle these libraries.
If you use the provided datadog-agent package, Datadog comes with its own set of embedded applications to monitor your server, including python for the agent, supervisord to manage the Datadog processes, and pip. Since this is all just Python, surely this can lead to something. Can’t we import our own custom libraries in our custom checks? Yes we can.
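
As a sketch of what that can look like (the paths, check name, and use of requests below are illustrative assumptions on my part; the checks import follows the older Agent 5 custom-check layout), a custom check can prepend a virtualenv’s site-packages to sys.path before importing anything the embedded interpreter doesn’t ship:

    # checks.d/my_custom_check.py -- illustrative sketch, not a drop-in check.
    import sys

    # Hypothetical virtualenv created ahead of time, e.g.:
    #   virtualenv /opt/dd-venv && /opt/dd-venv/bin/pip install requests
    VENV_SITE_PACKAGES = "/opt/dd-venv/lib/python2.7/site-packages"
    if VENV_SITE_PACKAGES not in sys.path:
        # Prepend so these packages take precedence over the agent's embedded ones.
        sys.path.insert(0, VENV_SITE_PACKAGES)

    import requests  # now importable even though the embedded python doesn't ship it

    from checks import AgentCheck  # Agent 5-style base class for custom checks

    class MyCustomCheck(AgentCheck):
        def check(self, instance):
            # Report whether a hypothetical endpoint from the instance config responds.
            url = instance.get("url", "http://localhost:8080/health")
            try:
                up = 1 if requests.get(url, timeout=5).ok else 0
            except requests.RequestException:
                up = 0
            self.gauge("my_custom_check.up", up, tags=["url:{0}".format(url)])

Alternatively, pointing PYTHONPATH at that same site-packages directory in the environment the agent processes run under has the same effect without touching sys.path inside the check.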