Battery replacement on a NiteRider Swift 500

1 minute, 24 seconds

A while ago the battery on my NiteRider Swift 500 headlight stopped taking a charge. I looked at NiteRider’s FAQ page and saw no mention of the batteries being user serviceable. Further, when I searched online, I didn’t find any guides or replacement parts for the light. Time to grab a screwdriver and DIY!

I started by removing the strap mount – a single Phillips head on the bottom:

Then I removed the 4 Allen screws around the base of the head:

The lens assembly should come off – be careful as the rubber battery cover will fall free now. Be sure to keep track of all the parts!

Remove the two Phillips head screws at the top of the LED plate:

You should now be able to slide the LED plate out, which is attached to the battery and the main circuit board. You’ll note the battery is both soldered on and aggressively affixed with double sided sticky tape. Peel the battery off, then cut the two wires (one red, one black) halfway between the battery and the circuit board. Rub the tape residue off enough so you can see the specs of the battery:

There’s no direct replacement part for this, but I found this “CaoDuRen Rechargeable 3.7V Li Lipo Lithium” on Amazon was close enough to work. Only $9 at the time – what a deal!

Cut the JST connector off of the new battery, cutting half way between the battery and connector. Solder the black to black and red to red wires, and seal up the solder connection. I used heat-shrink tubing and then affixed it with sticky Velcro:

Reassemble your light by following the steps above in reverse order. Be careful when working with the light as it is quite bright – I had it accidentally turn on while assembling it – yikes!

Now enjoy your light and drop me a line if you have any other tips or succeed in replacing your battery!

Simple Single Page Site with Secure Log Access

3 minutes, 36 seconds

An image of the sticker with a ".xyz" TLD

A friend of mine created some fun stickers for use at the most recent DEF CON. They were sly commentary about how corporate a lot of the stickers are and how maybe we should get back to our DIY roots. But…what’s this? There’s a .xyz in there…is that a TLD…is there a domain I could go to?! IS THIS STICKER AN AD ITSELF?!?!?!?!1!

(Sticker image is marked with CC0 1.0)

It’s all of those things and none of those things – that’s why I love it so much. Best of all, when you go to the website, you get just what you deserve ;)

The website was initially set up on a free hosting provider, but they didn’t provide any logs – something my friend was curious about to see how much traffic the non-ad ad was generating. I have a VERY cheap VPS that already had Ubuntu Server and Caddy on it, and I figured I could help by hosting a wee single-file static web site and be able to easily offer the logs. Let’s see what we can do!

Step 1: One HTML file + Four Caddy config lines = Web server

I frickin’ love Caddy! I made a single index.html file and then added these 4 lines of config:

the-domain-goes-here.xyz {
        root * /var/www/the-domain-goes-here.xyz
        file_server
}

After I restarted Caddy (systemctl restart caddy) – I was all set! As DNS had already been changed to point to the IP of my lil’ server, Caddy auto-provisioned a free Let’s Encrypt cert, redirected all traffic from port 80 -> 443 and the site worked perfectly!
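A quick sanity check from another machine shows both the redirect and the new cert (same placeholder domain as above):

curl -sI http://the-domain-goes-here.xyz | head -n 3
curl -svo /dev/null https://the-domain-goes-here.xyz 2>&1 | grep -i issuer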

By default Caddy has logs turned off – let’s fix that!

Step 2: Turn up the (log) volume

Unsurprisingly, Caddy makes enabling logs very straightforward. Just add these three lines:

  log {
    output file /var/log/caddy/the-domain-goes-here.xyz-access.log
  }

I reloaded the config in Caddy (systemctl reload caddy) and checked for log activity in /var/log/caddy/. It was there! Oh…it was there in full repetitive, verbose JSON…OK, cool, I guess that’s a sane default in our new cloud-init, all-JSON/YAML-all-the-time world. But what about good ol’ Common Log Format?

This was the first time Caddy surprised me! While it was easy enough to do (huge props to “keen” on Stack Overflow), it was a bit more convoluted and verbose than I expected. You have to change the log declaration to be log access-formatted and then specify both a format and a transform. The final full server config looks like this:

the-domain-goes-here.xyz {
	root * /var/www/the-domain-goes-here.xyz
	file_server
	log access-formatted {
		output file /var/log/caddy/the-domain-goes-here.xyz-access.log
		# OMG - thank you!! https://stackoverflow.com/a/76988109
		format transform `{request>remote_ip} - {request>user_id} [{ts}] "{request>method} {request>uri} {request>proto}" {status} {size} "{request>headers>Referer>[0]}" "{request>headers>User-Agent>[0]}" "host:{request>host}"` {
			time_format "02/Jan/2006:15:04:05 -0700"
		}
	}
}
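With that in place, entries land in the log in the familiar format – roughly like this (values invented for illustration):

203.0.113.7 - - [31/Oct/2023:12:34:56 -0700] "GET / HTTP/1.1" 200 1234 "-" "Mozilla/5.0" "host:the-domain-goes-here.xyz"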

Now let’s figure out how to add secure access to download those logs.

Step 3: Rsync via Authorized Keys FTW!

A straightforward way to give access to the logs would be to create a new user (adduser username) and then allow that user to read the files created by the Caddy process by adding them to the right group (usermod -a -G caddy username). This indeed worked well enough, but it also gave the user a full shell account on the web server. While they’re a friend and I trust them, I also wanted to see if there was a more secure way of granting access.

From prior projects, I knew you could force an SSH session to immediately execute a command upon login, and only that command, by prepending this to the entry in the authorized_keys file:

command="SOME_COMMAND",no-port-forwarding,no-user-rc SSH-KEY-HERE

If I had SOME_COMMAND be /usr/bin/rsync then this would be great! The user could easily sync the updates to their access log file at /var/log/caddy/the-domain-goes-here.xyz-access.log. But then I realized they could also rsync off ANY file that they had read access to. That’s not ideal.

The final piece to this Simple Single Page Site with Secure Log Access is rrsync. This is a Python script developed specifically for the use case of allowing users to rsync only specific files via the authorized_keys trick. The full array of security flags now looks like this:

restrict,command="/usr/bin/rrsync -ro /var/log/caddy/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding SSH-KEY-HERE

As there are no other logs in /var/log/caddy – this works great! The user just needs to call:

rsync -axv username@the-domain-goes-here.xyz: .

Because of the magic of rrsync (two rs) on the server forcing them into a specific remote directory, the rsync (one r) on the client is none the wiser and happily syncs away.
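And since the forced command is transparent to the client, the pull can even be automated – e.g. a nightly crontab entry along these lines (destination path invented for illustration):

5 0 * * * rsync -ax username@the-domain-goes-here.xyz: /home/username/caddy-logs/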

Happy web serving and secure log access syncing and Happy Halloween!

Blog Theme Improvements

2 minutes, 38 seconds

Most of the time when I’m reading articles online, I switch my browser to Reader View. This gets rid of all the fluff of a site’s display and theme and, most importantly, fixes the low contrast text trend. Reader View is white text on a black background (but this is configurable) and also adds a “Time to Read” at the top of the article. The latter prevents me from clicking on a “quick read” which is actually 30 min!

I noticed sometimes I visit a site and don’t flip on Reader View because they’ve done it for me already! I know not everyone is like me – some may prefer miles of white space with a nice, thin, light gray font on an off-white background. However, as this is my blog, I’ve converted it to be just like those sites where I don’t turn on Reader View!

Referencing the image above, here are the changes, with the “before” on the right and the “after” on the left:

  1. All code blocks are now numbered. They’re still zebra striped, but they’re higher contrast
  2. Text is now larger and easier to read – straight up white on black
  3. White background is now black background*
  4. Items that are variables or code snippets have a different colored background, like this
  5. The read time is shown at the top of every post (not pictured)
  6. Removed “Share This” icons at the bottom of each post (also not pictured)

* – I actually don’t force the black background! All the changes are live based on the user’s OS preference via the prefers-color-scheme CSS selector. You can pick! Here’s a video showing the two flipping back and forth:

I’m still tweaking a few of the CSS colors and whatnot as I find things not quite right, but please send along any questions or comments. I’d love to hear what y’all think!

Addendum

The “Share This” plugin mentioned above was not only adding some extra clutter of icons I no longer thought too helpful, but was also including an external CSS or JavaScript file, which didn’t feel right given I prefer not to share my HTTP traffic with any other sites.

As well, I removed two extensions (Share This and a code syntax highlighter) and implemented their features in my own wee plugin. Less 3rd party code means less to update means less security concerns for my ol’ blog here. As well, I greatly reduced the feature set and amount of PHP – I think the plugin is about 5 active lines of PHP.

Finally, I’m using the Twenty Twelve theme with this additional CSS (added via the “Appearance” section of prefs):

#site-navigation, .comments-link { 
    display:none;
}
.wp-block-quote {
    border-left: 0.25em solid cyan;
}
body .site {
    margin-top: 0;
    margin-bottom: 0;
}
body, *, .site {
    font-size: 1.15rem;
}
body {
    background-color: white;
    color: black;
}
.wp-block-code ol li:nth-child(odd), code  {
    background: lightcyan;
}
code {
    padding: 4px 6px 4px 6px;
}
@media screen and (prefers-color-scheme: dark) {
     *,.site  {
        background-color: black;
        color: white;
    }
    .widget-area .widget a, .entry-content a, .comment-content a {
        color: #51c9ff;
    }
   .widget-area .widget a:visited,.widget-area .widget a,.entry-content a:visited, .comment-content a:visited {
        color: lightgray;
    }
    .widget-area .widget a:hover ,.widget-area .widget a,.entry-content a:hover, .comment-content a:hover {
        color: white;
    }
    body {
        background-color: #010149;
    }
    .wp-block-code ol li:nth-child(odd) {
        background: #000030;
        color:white;
    }
    code {
        background: #000065;
    }
    .entry-content img, .comment-content img, .widget img, img.header-image, .author-avatar img, img.wp-post-image, .wp-block-embed {
        border-radius: 3px;
        box-shadow: 1px 2px 5px rgba(255, 255, 255, 0.84);
    }
}

With all this you should be able to reproduce these settings on your own blog if you so desire!

Dead Simple Continuous Deployment

3 minutes, 54 seconds

At work we’ve written some monitoring and alerting software. We’ve also automated releases, and we wanted to take it to the next level so that the instant there was a new release, we’d deploy it to production. This automatic deploying of software upon each new release is called Continuous Deployment, or “CD” for short. This post documents the simple yet effective approach we took to achieve CD using just 20 lines of bash.

The 20 lines of bash

Super impatient? I got you. This code will mean a lot more if you read the post below, but I know a lot of engineers want more copypasta and less preachy blog posts:

#!/usr/bin/env bash

# Checks for local version (current) and then remote version on GH (latest)
# and if they're not the same, run update script
#
# uses lastversion: https://github.com/dvershinin/lastversion

current=$(cd /root/cht-monitoring;/usr/bin/git describe --tags)
latest=$(/usr/local/bin/lastversion https://github.com/medic/cht-watchdog)

update(){
	cd /root/cht-monitoring
	git fetch
	git -c advice.detachedHead=false checkout "$latest"
	/root/down-up.sh	
}

announce(){
	/usr/bin/curl -sX POST --data-urlencode "payload={\"channel\": \"#channel-name-here\", \"username\": \"upgrade-bot\", \"text\": \"Watchdog has been updated from "$current" to "$latest". Check it out at https://WATCHDOG-URL-HERE\", \"icon_emoji\": \":dog:\"}" https://hooks.slack.com/services/-SECRET-TOKEN-HERE-
}


if [ ! "$current" = "$latest" ];then
	update
	announce
	echo "New version found, upgraded from $current to $latest"
else
	echo "No new version found, staying on $current."
fi
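To take it for a spin before automating anything, save it somewhere like /root/check-update.sh (the name and path here are my choice, not from the repo) and run it by hand:

chmod +x /root/check-update.sh
/root/check-update.sh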

Why use CD?

There’s been a lot written about CD, like this quote:

Engineering teams can push code changes to the main branch and quickly see it being used in production, often within minutes. This approach to software development [allows] continuous value delivery to end users.

rando GitHub blog post

I’ll add to this and say, as an engineer, it’s also extremely gratifying to have a pull request get merged and then minutes later see a public announcement in Slack that a new release has just automatically gone live. There’s no waiting even to do the release manually yourself, it’s just done for you, every time, all the time! Often when engineers are bogged down by a lengthy release process and slow customer adoption where it can take weeks or months (or years?!) to see their code go live, CD is the antidote to the poison of slowness.

Tools Used

The script uses just one custom binary, but otherwise leans on some pretty bog standard tools:

  • git – Since our app is in a local checked-out git repository, we can trivially find out what version is running locally with git describe --tags
  • curl – We use curl to do the POST to slack using their webhook API. It’s dead simple and requires just a bearer token
  • lastversion – I found this when searching for an easy and reliable way to resolve what the latest release is for our product. It’s on GitHub and it just made my day! It solved the exact problem I had perfectly and it really Does One Thing Well (well, admittedly it also downloads)
  • down-up.sh – this is the cheating part of this solution ;) This is an external bash script that makes it easy to keep track of which docker compose files we’re using. Every time we create a new compose file, we add it to the compose down and compose up calls in the script. This ensures we don’t exclude one by accident. It’s just two lines which are something like:

    docker compose -f docker-compose.yml -f ../docker-compose-extra.yml down
    docker compose -f docker-compose.yml -f ../docker-compose-extra.yml up --remove-orphans -d

  • Inside the repository itself, we’ve hooked up Semantic Release which automatically cuts a new release based on Semantic Versioning (aka “SemVer”)

New release process

With all the tools in place, here’s how we cut a new release, assuming we’re on version 1.10.0:

  1. An engineer will open a pull request (PR) with some code changes.
  2. The PR will be reviewed by another engineer and any feedback will be addressed.
  3. Once approved, the PR is merged to main with an appropriate commit message.
  4. A release is automatically created: A fix commit will release version 1.10.1. A feat (feature) commit will release version 1.11.0. A commit citing BREAKING CHANGE will release version 2.0.0
  5. Every 5 minutes a cronjob runs on the production server to compare the current local version versus the latest version on GitHub (see the crontab sketch below)
  6. If a new version is found, git is run to check out the latest code
  7. The down-up.sh script is called to restart the docker services. We even support updates to compose files citing a newer tag of a docker image, in which case docker pulls the new upstream release. Easy-peasy!
  8. A curl command is run to make a POST to the Slack API so an announcement is made in our team’s public channel:
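As for the cronjob in step 5, it’s a single crontab entry along these lines (same made-up script path as above):

*/5 * * * * /root/check-update.sh >> /var/log/check-update.log 2>&1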

Wrap up

You may become overwhelmed in the face of needing to store SSH private keys as a GitHub secret or having a more complex CD pipeline with a custom GitHub Runner. Don’t be overwhelmed! Sometimes the best solution is just 20 lines of bash you copied off a rando blog post (that’s me!) which you tweaked to work for your deployment. You can always get more complex at a later date as needed, but getting CD up and running today can be incredibly empowering for the engineers working on your code!

Timekpr-nExT Remote

3 minutes, 45 seconds

tl;dr – Timekpr-Next Remote is an easy-to-use web app to add or remove time for users of the Linux login time tracking app, Timekpr-nExT

Recently, one of my kids got a drawing tablet and wanted to use it with Krita. Given they were on a Chromebook, we decided to repurpose an old Intel NUC i3 server with a clean Ubuntu 22.04 Desktop. Not only would this allow Krita to run well, but it would also enable video editing via KDEnlive and other software that’s hard to run on ChromeOS.

For some time now, we’ve been happily running Legba to track computer usage. However, we wanted something with a bit more teeth, so we settled on Timekpr-nExT (henceforth just “Timekpr”, but it’s the “nExT” one, not this out of date one or this waaay out of date one, k?). This is a great app that allows for a finite amount of time to be used per day, and it is relatively easy to add more time. Well, easy if you’re on a desktop. And you have SSH installed. And you know the login and password to each computer you want to control. So, not at all easy if you’re a busy parent juggling kids, school work and cooking dinner, amiright?!

Enter Timekpr-Next Remote! This is a Dockerized Python app that allows you to easily update your kids’ computer time right from the nearest parental phone or desktop device:

As you can see, for any given user (only one sample user, “Muhammad”, is shown here) you can easily add more time (or remove time if you fat fingered the add time). Given how ubiquitous phones are, having a self hosted, non-cloud way to easily control time has been a win for us. Video chat with grandma after you’ve done your homework and used all your time allotment? Add -> 30 min -> Save, it only takes 3 seconds \o/

SSH FTW

Let’s have a look under the covers at how all this works.

I should start out by saying that Timekpr is licensed GNU GPL v3 and they post the code online. I did consider adding an HTTP server to handle REST requests to the core package upstream. Then I realized I’d have to do that securely, and that I’d have to deal with certificates and such. A greenfield approach would be quicker (grok only my code, not someone else’s) and more secure, but not as clean (I used SSH vs REST). With that out of the way…

Timekpr Remote uses SSH to communicate! This is clunky, indeed. But Python has a wonderful SSH library in the form of Fabric (which in turn uses the awesome Paramiko), which took a lot of the really clunky parts out and made them pretty elegant.

Here’s the flow of data when we load the page and want to get the current usage for Muhammad:

This “handwritten” diagram comes compliments of JS Sequence Diagrams – thank you!

All data flows this way, and there are three AJAX endpoints the web client calls via this flow:

  • Get all Users and IPs
  • Get usage for a user and IP pair
  • Add/Remove time for a user and IP pair

This isn’t a perfect REST API, but it’s OK enough. It was my first time writing an app in Flask, so it was fun to figure out how to do different URL handling and JSON returning and such, even if my REST uses some GETs instead of POSTs/PUTs.
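For illustration only, the calls boil down to plain HTTP requests – something like this, though the route names and port here are made up and the real ones live in the Flask app:

curl 'http://localhost:8080/get_usage/muhammad'
curl 'http://localhost:8080/set_time/muhammad/+/1800'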

“s” in Timekpr-next Remote is for “Security”

While we can trust SSH between Timekpr Remote and the server, you may note a lack of authentication between the mobile handset and Timekpr Remote. Indeed, there is none. Here’s what I recommend:

Run something like Traefik or Caddy in another docker container. From there you can bind the timekpr-next remote server to the host docker IP with something like TIMEKPR_IP=172.17.0.1 docker compose up -d. It will no longer be available on the network, only via the reverse proxy you set up.

You can then either use basicauth (e.g. in Caddy) or do what I did and make a host name that is unguessable, like https://user-time-8957446623432192758492038.domain.com. Everyone just bookmarks this. Even if your kids see the URL, they won’t be able to remember it.
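If you go the basicauth route, it’s only a few more lines of Caddy config – roughly this, with the hash coming from caddy hash-password and the upstream port being whatever you bound timekpr-next remote to:

user-time-8957446623432192758492038.domain.com {
	basicauth {
		parent REPLACE-WITH-OUTPUT-OF-caddy-hash-password
	}
	reverse_proxy 172.17.0.1:8080
}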

For those that give SSH the stink eye (smells like an injection attack, eh?), you can harden this too. Ensure the SSH user on the server cannot do anything more than run timekpra by restricting it in the authorized_keys file on each client. This will ensure that if extra variables are passed (though we do explicitly protect against this), they won’t do any more harm.
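One way to wire that up is the same authorized_keys forced-command trick from the log syncing post above, pointed at a tiny wrapper script (the wrapper and its path are my own sketch, not part of Timekpr):

restrict,command="/usr/local/bin/timekpra-only.sh",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding SSH-KEY-HERE

And the wrapper itself just refuses anything that isn’t a timekpra call:

#!/usr/bin/env bash
# allow only commands starting with "timekpra" through; reject the rest
case "$SSH_ORIGINAL_COMMAND" in
	timekpra|timekpra\ *) exec $SSH_ORIGINAL_COMMAND ;;  # unquoted on purpose so the args split
	*) echo "denied" >&2; exit 1 ;;
esac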

Like and Subscribe that video

Here’s a 9 second video demonstrating it in situ!


Joke’s on you though – there’s no like and subscribe because it’s not actually YouTube (though GitHub is pretty close to a social media network…)

Keys-To-The-Tunnel 1.1.0 released

3 minutes, 14 seconds

Hey hey! I’d been meaning to consolidate some of my VMs I’m paying for and semi-accidentally deleted the place where I was hosting my work’s instance of Keys-To-The-Tunnel (KTTT). With this VM deletion, I decided to leverage my recent dabblings with Caddy and do a big refactor of KTTT by replacing Apache with Caddy.

As a reminder, KTTT is an easy way for an organization which both develops web apps and uses GitHub to share their local dev app on the internet via a publicly accessible URL all protected with solid TLS and SSH encryption.

Let’s dive into what the update means and why I did it!

Parenthetical aside about TLS for local Android development

Originally I asked if potential KTTT users:

ever need to test an Android application against a web server such that you need a valid TLS certificate?

Not so good idea on my original KTTT post

This is actually a bad idea if all you need is a valid TLS cert for Android testing. The reason is that you’ll literally be holding your phone in your hand, less than a foot from the computer hosting the app you want to test against. By introducing KTTT into the mix, you send your traffic thousands of miles/kilometers (you pick which) and back just to get a TLS cert. Crazy times!

A much better approach is to use something like local-ip.co. If you want an easy way to keep your traffic entirely local, you can use nginx-local-ip with a one line docker compose call to set up everything you need to run a local-ip.co TLS cert locally. It’s a really sweet set up! (I’m biased because I help author some of it ;)

KTTT is still a great idea if you need to share your local dev environment though!

With that out of the way, back to KTTT updates…

What’s new with KTTT

For an end user of KTTT who just wants to share their app, it’s now WAY easier to figure out which SSH command to run and which URL to share. This is thanks to the handy web app which walks you through three easy questions:

  1. What is your GitHub Username?
  2. What port is your app running on locally?
  3. Is your local app using http or https?

This is MUCH better than wading through a list of random ports and other GitHub usernames when you only cared about your own username and port. Here’s what the updated web app looks like in action (15 second video):

For the administrators of KTTT, you’ll note that KTTT now uses Caddy instead of Apache. While there’s nothing wrong with Apache, Caddy is a simpler take on the needs of an app like KTTT that requires a bunch of small reverse proxies. Caddy is 7 years old and came into being in the world of Docker, containers and micro-services. Whereas Apache is 27 years old and came into being near the birth of the world wide web.

Ironically, a key feature of Caddy, the ability to automatically provision and renew TLS certs, is NOT being used. Instead, the opportunity to use a wildcard TLS cert came up via acme-dns.io and I took it.

That all said, it’s a joy to use Caddy because I can create a simple four-line config to define a reverse proxy:

mrjones-plip.awesome-tunnel.plip.com {
   tls /etc/certs/fullchain.pem /etc/certs/privkey.pem
   reverse_proxy 127.0.0.1:3089
}

Love it!

Putting it all together

The web app above gives a pretty good idea of the improvements, but since I added a demo video of the whole KTTT experience on GitHub, may as well post it here in case you’re curious (44 second video):

Other odds and ends in 1.1.0

There are also a bunch of other fixes and improvements I made while in there. Here are the notes from the 1.1.0 release:

  • Replace Apache with Caddy
  • Add mini web app to help devs figure which URL and SSH command to use
  • Unify multi SNI certs to one wildcard with acme-dns.io (still using Let’s Encrypt though) per #1
  • Don’t overwrite existing user’s ports every time you run setup per #3
  • Don’t regenerate TLS certs every time you run setup per #3
  • Don’t rewrite vhosts in web server every time you run setup per #3
  • Update MOTD on login per #4
  • Try my hand at being an artist and create a KTTT logo

My first from-scratch 3D printed design (that is actually useful)

2 minutes, 24 seconds

I’ve had a 3D printer for about 8 months now. It’s been fun to download STLs from the internet and print them. We’ve done articulated snakes and planetary gear prints. It is really amazing to print something in one go that you can move and spin around. Amazeballs!

The kids have printed up little castles and little knickknacks they designed in TinkerCAD. This has been fun, but I was always bothered that I didn’t know a “real” design program that was also open source. Really that left FreeCAD. I did a tutorial or two online which were pretty great. I finally understood “parametric” and “fully constrained” \o/. But it was time consuming to learn, and I was impatient. I printed up a laptop holder I found online. Time went on.

Recently I got some new mini servers (I’ll save the setup and install story for another blog post) and I wanted to mount them to the wall. I knew it would be pretty trivial to design a custom bracket if I’d learned FreeCAD well enough, but I still hadn’t.

I just decided to go for it and use TinkerCAD, even though it’s proprietary and online. Not ideal, but I got what I wanted! I made left and right brackets that held two mini servers against a plywood board. Perfect!

The point of this blog post is to let other folks know that they should just go for it – don’t let the tools slow you down. Once you see the power of creating something from nothing, and a physical something (not just software), you’ll see how empowering and inspiring it is!

I will say, you won’t get it right on your first try. I already knew this from some earlier prototyping I’d done, so I was ready for the 4 or 5 prints that failed before the final one succeeded. Don’t get flustered when, even after measuring twice before you cut (er, print), it’s still wrong!

So, here’s what I made:

What you see here are two “L” type brackets that each can hold the corners of two mini servers. Note the hole through the cross brace that allows you to screw in the top screw (the first draft totally lacked this and it was nigh impossible to mount). These are of course rotated up in how you’d mount them, and in a terrible position to print.

While these held the servers pretty well in place, there was still a chance they could tip forward off the wall, so I made a toolless top bracket to match. The nice thing is that you can screw this in as one piece, and then slide the bracket off the screw mount. Some very light sanding was needed to ensure the two pieces slide together with just the right amount of friction (showing two of them side by side, but only one was used):

And here they are in situ:

If you have little servers and you want the STL, here’s the file and here’s the TinkerCAD link (which may go away b/c I’m not a fan of cloud services).

I hope if you’re thinking about designing your first part from scratch, you go for it!!

Legba the Net-tracker

4 minutes, 15 seconds

Intro

I’d been meaning to learn how to write an app using something more than CSV files, but less than MariaDB, to store data – I’m thinking SQLite of course! Then along came the desire to have a simple way to track when a computer was on a network as a proxy for kids’ daily screen time. After all, the network is the computer, right?

While there are so very many ways to solve detecting if a computer is online (more on this later), I thought it’d be fun to write a simple app that could correlate multiple IPs to a single person, and then give a histogram of minutes per day per person. Given this is just a proxy for screen time, it’s fine that it doesn’t have alerting, password protection or even a way to prevent going over the allotted time per day. The goal is for any interested parties to see how long a device has been on for the current day. It’s then up to the family to have a discussion about what it means to go over your daily allotment.

Ok, let’s do this! We have a requirement to track computers being online and to write and read the results to a SQLite DB. I’ve been groovin’ on learning Python, so let’s double down and use that. I did some Wikipedia exploring and read about Papa Legba, and thought it made a mighty fine sounding name. Finally, after some nudging from a friend, we’ll package it up in Docker so it’s easy to try out and host in an isolated container.

Ping FTW

The first step to using Legba is to define a list of users and which IPs they’ll be on. Very likely the best way to do this is to either use static IPs on your LAN clients, or have your DHCP server set the same IPs per MAC every time.

Then you’ll create a conf.py file copied from the conf.example.py file and fill it out. Here we see Jon and Habib have one IP each, whereas Mohamed has 2:

trackme = {
    'Jon': ["192.168.1.82"],
    'Habib': ["192.168.1.12"],
    'Mohamed': ["192.168.1.240", "192.168.1.17"]
}

The code to track if a device is online is achieved via the subprocess module in a ping() function, with just two lines that send a single ICMP packet:

import subprocess

# thanks https://stackoverflow.com/a/10402323
def ping(host):
    """ Ping a host on the network. Returns boolean """
    command = ["ping", "-c", "1", "-w1", host]
    return subprocess.run(args=command, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).returncode == 0

Back in the main() function, we then read in the config, loop over each person and try to ping() each of their IPs. If we see them online, we write to the DB via record(). It ended up, just as I’d hoped, that Python’s SQLite libraries are robust and it’s just 6 lines to insert a row:

sql = ''' INSERT INTO status(name,state,date) VALUES(?,?,?) '''
cur = sqlite.cursor()
activity = (name, state, datetime.now())
cur.execute(sql, activity)
sqlite.commit()

return cur.lastrowid

Just before the end of the loop we call probably the most complex function of the lot, output_stats_html(). This function is responsible for reading the day’s active users, getting each user’s activity by hour and the total for the day, and finally outputting static HTML as well as a static JSON file that gets called via AJAX so the stats auto-refresh.

At the end of the loop we sleep for 60 seconds. In theory if you had hundreds (thousands?!) of IPs to track and they were on connections with >500ms latency, it would take way longer than 60 seconds. Legba will not scale to this level. It’s currently been comfortably tested with 5-10 devices on a LAN where each device has ~20ms of latency.

A histogram is worth a 1000 words

After you’ve done a bit of a git clone with a lil pip3 install, fleshed out your own conf.py and done a little systemd love, you’ll have some sweet sweet histograms! (Some keen eyed readers may note this histogram looks familiar ;)

It’s interesting to note that mobile devices, as seen with the “Adnon Cell”, are effectively on all the time. In this sense, Legba is not much use to track a cell phone. Meanwhile, Bobby Table’s desktop, Adnon’s Laptop and Chang’s Nintendo Switch all work as expected (NB – I didn’t actually test with a Switch).

Existing Solutions

I’ve been running this solution for just about 4 months now. It’s been a great way for our family to have an open discussion about what it means to spend too much time on the computer, and it’s been rock solid. Checking ls and running select count(*) from status, I see my DB is 23MB and has 487,069 rows.

Given the simplicity of this app, could this DB and rows be easily stored and retrieved elsewhere? When I wrote the app, I didn’t care – I just wanted to write it for the fun of writing it! However, I was listening to episode 171 of Late Night Linux and they mentioned how utilitarian Telegraf is. It struck me that, indeed, if you had Telegraf, InfluxDB and Grafana (aka “the TIG stack”) already set up, it would be pretty trivial to capture these same stats. I would do this by setting up a centralized instance of Telegraf and either use the built in Ping plugin, or possibly the more extensible [[inputs.exec]] input type. With the latter, you could even re-use parts of Legba to pretty trivially input the data to InfluxDB. Then it would be equally trivial to slice up the ping counts per hour, per user and have a slick dashboard. Just food for thought!
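For instance, the ping input would be just a couple of lines of Telegraf config – a minimal sketch, reusing the IPs from above (check the Telegraf docs for the full options):

[[inputs.ping]]
  urls = ["192.168.1.82", "192.168.1.240"]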

Otherwise, I hope someone other than me gives Legba a try!

DockStat: Docker stats in a simple to use and easy to read Bash script

2 minutes, 38 seconds

Intro

At work I’ve been doing a lot of Docker based projects. I ended up writing a neat little Bash utility which I then recently extended into what I’m calling DockStat. It shows running containers and their related resources. You could use it if you’re repeatedly upping, downing and destroying docker containers over and over like I was. Or maybe you just want a nice little dashboard to see what’s running on your server?

DockStat at work

However, if you have more than say a dozen active containers, this script might not scale nicely (oh, perhaps a monitor in portrait mode might fix this? ;)

Being the good little open source nerd that I am, this is of course available for download with a permissive license in the hopes that someone will find it useful or possibly even offer a PR with some improvements to my nascent Bash coding skills.

Background

With countless primers on how to use Docker out there, I won’t get into what the commands all mean, but the impetus for this script was repeatedly running docker ps to show a list of the active containers. A bit later I remembered you could run endless Bash loops with a one-liner which made the process a bit nicer as it auto-refreshed:

while true; do clear;date;docker ps;sleep 5; done

A bit after that I stumbled upon the glorious watch command! Wow – just when you think you know an OS, they come and show you there’s this awesome command they’ve been hiding from you all these years. Thanks Linux!

watch greatly improved on my Bash one-liner as it was an even shorter one-liner, could trivially be configured to refresh at whatever frequency you wanted, and could show a header or not. The icing on the cake was that it prevents the flash of a redraw upon refresh:

watch -t -n 1 docker ps

About now I got more cozy with the --format feature built into most Docker command line calls. This was handy because I could reduce the output of docker ps down to just the fields I was interested in. Here’s maybe the simplest of them, which shows JUST the container name and how long it has been running:

docker ps --format='{{.Names}} {{.Status}}'

Research continued on how to architect the helper script. I needed to show different data than docker ps had to offer, so I branched out into docker inspect as well as finding other Dockeristas’ one-liners that I shamelessly co-opted (I’d be honored if anyone did the same with my work!!). This allowed me to join ps and inspect, as seen with this fave that shows all the running containers and their internal IPs:

docker ps --format='{{.Names}}'|xargs docker inspect --format='{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
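Putting the watch trick and --format together gets you a poor man’s dashboard in one line – docker’s table format even adds the header back:

watch -t -n 1 "docker ps --format 'table {{.Names}}\t{{.Status}}'"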

I was ready to assemble all the docker data ducats I’d gathered into a nice CLI dashboard so our app developers could see the status of our containers booting. Finding a solution for this enabled the helper script to spring forth and simultaneously created the nascent DockStat. That solution was Bash Simple Curses: a Bash utility that is easy to use, automates flash-less refreshes and introduces basic terminal layout functionality with a near zero learning curve (assuming you know Bash).
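I won’t reproduce DockStat here, but a minimal Bash Simple Curses skeleton looks roughly like this – from memory, so check the project’s README for the exact API:

#!/usr/bin/env bash
source ./simple_curses.sh

main() {
	window "Containers" "green"
	append "$(docker ps --format '{{.Names}} {{.Status}}')"
	endwin
}

main_loop 1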

Thanks

Thanks to James and Russ for reviewing my code and an early draft of this post. I’ve been trying to improve both my posts and code, and this won’t happen without folks’ kind donation of their time and input!

Easy way to play Boggle on a flight

1 minute, 15 seconds

I was taking a flight and wanted to play Boggle while onboard. I looked for a simple “show me a Boggle board” app for my phone, and only found ones of dubious quality, ad-laden ones, or ones that were something like “play a boggle-like game with friends (account required, has ads, is not boggle)”.

Eventually I gave up on apps when I found this great website that generates .png images of Boggle boards. Even better, I could do a bunch of curl calls (with a 5 second sleep in-between, to be nice) to download a BUNCH of boards. Then I could use ImageMagick’s montage to stitch them all together in a 2×4 layout, which printed out nicely – montage -tile 2x4 -mode concatenate *.png output.png to be exact ;) With 4 or 5 pages printed out, I cut out each board, stapled them together and made the perfect Offline Boggle Booklet. Worked great!
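The download loop was nothing fancy – something along these lines, with the generator’s URL swapped in (left out here on purpose):

# grab 8 boards, sleeping 5 seconds between requests to be nice
for i in $(seq 1 8); do
	curl -so "board-$i.png" "https://BOARD-GENERATOR-URL-HERE"
	sleep 5
done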

However, on the flight I was thinking that it’d be pretty easy to write a bit of JavaScript and bang out a board that worked well offline. Further, I realized that CSS supports transformations, like transform: rotate(90deg), that could even mimic the rotated dice like the site I referenced above. Indeed, by the end of the flight I had all the hard parts worked out. This was mostly me working offline without Stack Exchange to remember this or that. Special thanks to xuth.net for posting the source to the perl app, which gave me some good ideas on how to write this!

After landing, and doing some refining, I’m happy to present Offline Boggle Boards. Load up the web page before you take off and you’re good to go! Also, for you coder types, pull requests welcome!