Monitoring a process for high memory consumption using Monit

I run Pi-hole on an old PogoPlug E02 with a custom-compiled dnsmasq (or pihole-FTL, as they now call their customised version of it). Lately I had been noticing my DNS queries becoming erratically slow, and upon further investigation it looked like pihole-FTL's memory usage balloons over time: it consumes all of the 256 MB of memory available and starts swapping, bringing everything to a near standstill.

In comes Monit, a highly configurable process supervisor. This is how I set up monitoring for the errant pihole-FTL process: it checks whether the process consumes more than 100 MB of memory for three consecutive cycles, and if it does, it restarts it. This has done away with the manual tinkering I used to do whenever there were complaints about the internet being slow.

check process pihole-FTL with pidfile /run/pihole-FTL.pid
  start program = "/usr/sbin/service pihole-FTL start" with timeout 20 seconds
  stop program = "/usr/sbin/service pihole-FTL stop"
  if totalmem > 100.0 MB for 3 cycles then restart

PS: Monit has nice commands to check the status of the processes, files, directories, etc. that it monitors: monit summary for succinct information, or monit status for more verbose output. Note that you might need to turn on Monit's embedded HTTP interface for these to work.
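
This is roughly what enabling that interface looks like in the Monit configuration (typically /etc/monit/monitrc on Debian); a minimal sketch using the conventional port 2812 and localhost-only access, not necessarily what my setup uses:

set httpd port 2812
    use address localhost  # only listen on the loopback interface
    allow localhost        # and only accept connections from localhost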

soumik@pi-hole:~# monit summary
Monit 5.20.0 uptime: 32m
┌─────────────────────────────────┬────────────────────────────┬───────────────┐
│ Service Name                    │ Status                     │ Type          │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ pi-hole                         │ Running                    │ System        │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ pihole-FTL                      │ Running                    │ Process       │
└─────────────────────────────────┴────────────────────────────┴───────────────┘
soumik@pi-hole:~# monit status
Monit 5.20.0 uptime: 32m
Process 'pihole-FTL'
  status              Running
  monitoring status   Monitored
  monitoring mode     active
  on reboot           start
  pid                 6363
  parent pid          1
  uid                 999
  effective uid       999
  gid                 999
  uptime              22h 51m
  threads             6
  children            0
  cpu                 0.2%
  cpu total           0.2%
  memory              8.6% [20.7 MB]
  memory total        8.6% [20.7 MB]
  data collected      Tue, 26 Feb 2019 18:40:28

System 'pi-hole'
  status              Running
  monitoring status   Monitored
  monitoring mode     active
  on reboot           start
  load average        [0.00] [0.00] [0.07]
  cpu                 0.4%us 0.3%sy 0.3%wa
  memory usage        43.1 MB [17.8%]
  swap usage          8.2 MB [1.6%]
  uptime              1d 20h 37m
  boot time           Sun, 24 Feb 2019 22:03:33
  data collected      Tue, 26 Feb 2019 18:40:28

Using parted’s resizepart non-interactively on a busy partition

I had a situation where I needed to spin up a virtual machine from a template, and if the new virtual machine's disk was larger, I needed to resize the partition and then 'grow' the filesystem. As with anything you need to do more than once, I tried to script it using Ansible and incorporate it into our existing VM provisioning scripts. The first step was to figure out the command we needed to run. Ansible's parted module is bare-bones and anemic, and it didn't support the resizepart command which I wanted, so I had to resort to the shell module. Ensuring idempotency is difficult with bare shell commands, but we'll use what we can get.

This is the interactive parted session which resizes partition 2 on /dev/sda.

root@test-vm-18:~# parted /dev/sda
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) resizepart 2
Warning: Partition /dev/sda2 is being used. Are you sure you want to continue?
Yes/No? Yes
End? [10.7GB]? 100%
(parted) quit
Information: You may need to update /etc/fstab.

The next step is to condense that into a single non-interactive command. After looking at the manual and a bit of trial and error, this is the one which works:

parted /dev/sda resizepart 2 yes 100%

The yes is in there to automatically answer the confirmation prompt and continue despite the warning. This doesn't work from Ansible, though, because parted still expects user interaction for the warning. Note that parted also has a scripted mode (the -s switch) which doesn't expect any interaction from the user, but that turned out to be buggy with our specific combination of command and options.

The eventual solution was to use the undocumented ---pretend-input-tty switch (note the three dashes). Since Ansible runs commands over a TTY-less SSH session and parted's scripted mode didn't work for us, this was the only way out.

parted ---pretend-input-tty /dev/sda resizepart 2 yes 100%
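
For completeness, here is a minimal sketch of how this might be wrapped into Ansible tasks with the shell module; the device, partition number and the resize2fs call that grows the filesystem afterwards are assumptions for illustration, and idempotency is still left to the reader:

# Hypothetical tasks, assuming /dev/sda2 carries an ext4 filesystem
- name: Resize partition 2 on /dev/sda to use the whole disk
  shell: parted ---pretend-input-tty /dev/sda resizepart 2 yes 100%

- name: Grow the filesystem to fill the resized partition
  shell: resize2fs /dev/sda2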

Saving state information between GitLab CI runs

I had a unique scenario where I had to find out if certain files (in a specific directory) had changed between GitLab CI job runs. One of my original ideas was to run jobs only on changes to certain files using only:changes (link), as sketched below. This had two problems. First, a pipeline would be created on every commit regardless of which files were changed or added (even with only:changes, the job would be initiated, it just would not run any tasks), and that's a waste of resources. Second, I needed to find out periodically (more specifically, every Tuesday and Thursday) whether certain files had changed. I thought I would maintain a list of changed files with a job on every commit, and then use that list for my scheduled job runs on Tuesdays and Thursdays. That would mean saving state information somewhere, which GitLab doesn't provide for explicitly.
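
For reference, the rejected only:changes variant would have looked something like this (the job name and path glob are made up for illustration):

notice_changes:
  only:
    changes:
      - Important_files/**/*
  script:
    - echo "Something under Important_files changed in this push"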

In comes the GitLab CI cache, which is meant for caching dependencies between runs, but which you can also use to store arbitrary files. I initially thought I would store a list of changed files in there, but I figured out I could just store the commit ID of the last run and use a clever Git command to find out which files had changed between that commit and HEAD/now.

git diff HEAD $(head -1 $LAST_COMMIT_FILE) --name-only | grep $TARGET_DIR

Combining all of that, I came up with this .gitlab-ci.yml

image: alpine:latest

variables:
  LAST_COMMIT_FILE: .commit_for_last_run
  TARGET_DIR: Important_files # This can be a random literal

send_biweekly_email:
  only:
    - schedules
  cache:
    paths:
      - $LAST_COMMIT_FILE

  before_script:
    - if [ ! -f $LAST_COMMIT_FILE ]; then echo $CI_COMMIT_SHA > $LAST_COMMIT_FILE; fi

  script:
    # Get files changed since the last time this script was run
    - export CHANGED_FILES=$(git diff HEAD $(head -1 $LAST_COMMIT_FILE) --name-only | grep $TARGET_DIR)

    # If such files exist, do things
    - if [ ! -z "$CHANGED_FILES" ]; then #do_things; else echo "No changes between $(head -1 $LAST_COMMIT_FILE) and $CI_COMMIT_SHA."; fi

    # Store current commit ID in the last commit storage file
    - echo $CI_COMMIT_SHA > $LAST_COMMIT_FILE

$CI_COMMIT_SHA is a built-in GitLab CI variable that holds the ID of the commit the job is running against, which is also what HEAD points to in the checked-out repository. Note that $TARGET_DIR can be a random string and not necessarily the name of a directory, since we are merely grepping for it.

Also note that the cache is provided on a best-effort basis: it's usually stored locally on the machine where the runner resides unless you have enabled distributed caching with S3 uploads, so this is theoretically unreliable. That said, I have had no indications of unreliability in more than a month of production use. Just in case the cache isn't retrieved successfully, the before_script writes the current commit ID into $LAST_COMMIT_FILE if that file doesn't exist.

Resolving “‘unknown’: unknown terminal type.” error

The other day, after updating the repositories and installing the updated packages on my Debian Lenny box, I found that I could no longer run top or use nano or vi to open a file. Each of them threw up this nasty error:

#top
'unknown': unknown terminal type.

After a bit of sleuthing, I came to the conclusion that my console's terminal type was defined as 'unknown', which, obviously, isn't correct. To display your current terminal type, use this:

echo $TERM

If it says something other than linux, there is your problem.

To change it to linux, just type in:

export TERM=linux

To make the change permanent:

echo 'export TERM=linux' >> ~/.bash_profile

WordPress permalinks in nginx

WordPress generally works out of the box on nginx. The posts load fine and the dashboard functions work pretty well, until you come to the permalinks. If you are on Apache with mod_rewrite, WordPress automatically adds the required rewrite rules to your .htaccess file for permalinks to work. On nginx, you have to add the rules manually.

Moreover, when WordPress detects that mod_rewrite is not loaded (which is the case with nginx), it falls back to PATHINFO permalinks, which insert an extra 'index.php' in front of the path. This hasn't been much of a problem for me, as I have been using the custom structure option to strip the index.php (for example, a custom structure of /%postname%/ instead of /index.php/%postname%/), and it has worked fine.

Apart from that, you will also need to edit your nginx configuration to make the permalinks work. We will use the try_files directive (available since nginx 0.7.27) to pass request URIs to WordPress's index.php to be handled internally. This also works for 404 requests.

  • If your blog is at the root of the domain (something like http://www.myblog.com), find the “location /” block inside the configuration file, and add the following line to it.
    [bash]try_files $uri $uri/ /index.php?q=$uri&$args;[/bash]

    It should look like this after the edits:

    [bash] location / {
    index index.php index.html index.htm;
    try_files $uri $uri/ /index.php?q=$uri&$args;
    }[/bash]

  • If your blog is in a subfolder (say /blog), you’ll have to add an extra “location /blog/” block to your configuration file:
    [bash] location /blog/ {
    try_files $uri $uri/ /blog/index.php?q=$uri&$args;
    }[/bash]

After you have finished making the changes in the configuration file, reload the nginx configuration by running:

[bash]nginx -s reload[/bash]

WordPress’ pretty permalinks should work fine now.