Category Archives: Uncategorized

Blog Theme Improvements

2 minutes, 38 seconds

When I’m reading articles online, I very often switch my browser to Reader View. This gets rid of all the display and theme fluff a site has and, most importantly, fixes the low-contrast text trend. Reader View is white text on a black background (though this is configurable) and also adds a “Time to Read” at the top of the article. The latter prevents me from clicking on a “quick read” which is actually 30 minutes!

I noticed that sometimes I visit a site and don’t flip into Reader View because they’ve done it for me already! I know not everyone is like me; some folks may prefer miles of white space with a nice, thin, light gray font on an off-white background. However, as this is my blog, I’ve converted it to be just like those sites where I don’t need to turn on Reader View!

Referencing the image above, here are the changes, with the “before” on the right and the “after” on the left:

  1. All code blocks are now numbered. They’re still zebra striped, but they’re higher contrast
  2. Text is now larger and easier to read – straight up white on black
  3. White background is now black background*
  4. Items that are variables or code snippets have a different colored background, like this
  5. The read time is shown at the top of every post (not pictured)
  6. Removed “Share This” icons at the bottom of each post (also not pictured)

* – I actually don’t force the black background! The changes apply live based on the user’s OS preference via the prefers-color-scheme CSS media query. You can pick! Here’s a video showing the two flipping back and forth:

I’m still tweaking a few of the CSS colors and what not as I find things not quite right, but please send along any questions or comments. I’d love to hear what y’all think!

Addendum

The “Share This” plugin mentioned above was not only adding icon clutter I no longer found helpful, it was also pulling in an external CSS or JavaScript (or something) file, which didn’t feel right given I prefer not to share my HTTP traffic with any other sites.

As well, I removed two extensions (Share This and the code syntax highlighter) and re-implemented what I needed in my own wee plugin. Less 3rd party code means less to update, which means fewer security concerns for my ol’ blog here. I also greatly reduced the feature set and the amount of PHP – I think the plugin is about 5 active lines of PHP.

Finally, I’m using the Twenty Twelve theme with this additional CSS (added via “appearance” section of prefs):

#site-navigation, .comments-link { 
    display:none;
}
.wp-block-quote {
    border-left: 0.25em solid cyan;
}
body .site {
    margin-top: 0;
    margin-bottom: 0;
}
body, *, .site {
    font-size: 1.15rem;
}
body {
    background-color: white;
    color: black;
}
.wp-block-code ol li:nth-child(odd), code  {
    background: lightcyan;
}
code {
    padding: 4px 6px 4px 6px;
}
@media screen and (prefers-color-scheme: dark) {
     *,.site  {
        background-color: black;
        color: white;
    }
    .widget-area .widget a, .entry-content a, .comment-content a {
        color: #51c9ff;
    }
   .widget-area .widget a:visited,.widget-area .widget a,.entry-content a:visited, .comment-content a:visited {
        color: lightgray;
    }
    .widget-area .widget a:hover ,.widget-area .widget a,.entry-content a:hover, .comment-content a:hover {
        color: white;
    }
    body {
        background-color: #010149;
    }
    .wp-block-code ol li:nth-child(odd) {
        background: #000030;
        color: white;
    }
    code {
        background: #000065;
    }
    .entry-content img, .comment-content img, .widget img, img.header-image, .author-avatar img, img.wp-post-image, .wp-block-embed {
        border-radius: 3px;
        box-shadow: 1px 2px 5px rgba(255, 255, 255, 0.84);
    }
}

With all this you should be able to reproduce these settings on your own blog if you so desire!

Dead Simple Continuous Deployment

3 minutes, 54 seconds

At work we’ve written some monitoring and alerting software. We’ve also automated releases, and we wanted to take it to the next level so that the instant there was a new release, it would be deployed to production. Automatically deploying software as soon as a new release is cut is called Continuous Deployment, or “CD” for short. This post documents the simple yet effective approach we took to achieve CD using just 20 lines of bash.

The 20 lines of bash

Super impatient? I got you. This code will mean a lot more if you read the post below, but I know a lot of engineers want more copypasta and less preachy blog posts:

#!/usr/bin/env bash

# Checks for local version (current) and then remote version on GH (latest)
# and if they're not the same, run update script
#
# uses lastversion: https://github.com/dvershinin/lastversion

current=$(cd /root/cht-monitoring;/usr/bin/git describe --tags)
latest=$(/usr/local/bin/lastversion https://github.com/medic/cht-watchdog)

update(){
	cd /root/cht-monitoring
	git fetch
	git -c advice.detachedHead=false checkout "$latest"
	/root/down-up.sh	
}

announce(){
	/usr/bin/curl -sX POST --data-urlencode "payload={\"channel\": \"#channel-name-here\", \"username\": \"upgrade-bot\", \"text\": \"Watchdog has been updated from "$current" to "$latest". Check it out at https://WATCHDOG-URL-HERE\", \"icon_emoji\": \":dog:\"}" https://hooks.slack.com/services/-SECRET-TOKEN-HERE-
}


if [ "$current" != "$latest" ]; then
	update
	announce
	echo "New version found, upgraded from $current to $latest"
else
	echo "No new version found, staying on $current."
fi

Why use CD?

There’s been a lot written about CD, like this quote:

Engineering teams can push code changes to the main branch and quickly see it being used in production, often within minutes. This approach to software development [allows] continuous value delivery to end users.

rando GitHub blog post

I’ll add to this and say, as an engineer, it’s also extremely gratifying to have a pull request get merged and then minutes later see a public announcement in Slack that a new release has just automatically gone live. There’s no waiting, even to do the release manually yourself; it’s just done for you, every time, all the time! Engineers are often bogged down by a lengthy release process and slow customer adoption, where it can take weeks or months (or years?!) to see their code go live; CD is the antidote to that poison of slowness.

Tools Used

The script uses just one custom binary, but otherwise leans on some pretty bog standard tools:

  • git – Since our app lives in a locally checked out git repository, we can trivially find out what version is running locally with git describe --tags
  • curl – We use curl to do the POST to Slack using their incoming webhook API. It’s dead simple and requires just the secret webhook URL
  • lastversion – I found this when searching for an easy and reliable way to resolve what the latest release is for our product. It’s on GitHub and it just made my day! It solved the exact problem I had perfectly and it really Does One Thing Well (well, admittedly also downloads)
  • down-up.sh – this is the cheating part of this solution ;) This is an external bash script that makes it easy to keep track of which docker compose files we’re using. Every time we create a new compose file, we add it to the compose down and compose up calls in the script. This ensures we don’t exclude one by accident. It’s just two lines which are something like:

    docker compose -f docker-compose.yml -f ../docker-compose-extra.yml down
    docker compose -f docker-compose.yml -f ../docker-compose-extra.yml up --remove-orphans -d

  • Inside the repository itself, we’ve hooked up Semantic Release which automatically cuts a new release based on Semantic Versioning (aka “SemVer”)

New release process

With all the tools in place, here’s how we cut a new release, assuming we’re on version 1.10.0:

  1. An engineer will open a pull request (PR) with some code changes.
  2. The PR will be reviewed by another engineer and any feedback will be addressed.
  3. Once approved, the PR is merged to main with an appropriate commit message.
  4. A release is automatically created: A fix commit will release version 1.10.1. A feat (feature) commit will release version 1.11.0. A commit citing BREAKING CHANGE will release version 2.0.0
  5. Every 5 minutes a cronjob runs on the production server to compare the current local version versus the latest version on GitHub (see the crontab sketch below)
  6. If a new version is found, git is run to check out the latest code
  7. The down-up.sh script is called to restart the docker services. We even support updates to compose files that cite a newer tag of a docker image, in which case docker pulls the new upstream release. Easy-peasy!
  8. A curl command is run to make a POST to the Slack API so an announcement is made in our team’s public channel:
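
For reference, here’s a sketch of what that step 5 cronjob could look like in /etc/cron.d. The script name, log path and exact schedule are assumptions, not the actual production values:

# /etc/cron.d/watchdog-cd - check for a new release every 5 minutes (sketch)
*/5 * * * * root /root/check-and-deploy.sh >> /var/log/watchdog-cd.log 2>&1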

Wrap up

You may become overwhelmed in the face of needing to store SSH private keys as a GitHub secret or having a more complex CD pipeline with a custom GitHub Runner. Don’t be overwhelmed! Sometimes the best solution is just 20 lines of bash you copied off a rando blog post (that’s me!) which you tweaked to work for your deployment. You can always get more complex at a later date as needed, but getting CD up and running today can be incredibly empowering for the engineers working on your code!

My first from-scratch 3D printed design (that is actually useful)

2 minutes, 24 seconds

I’ve had a 3D printer for about 8 months now. It’s been fun to download STLs from the internet and print them. We’ve done articulated snakes and planetary gear prints. It is really amazing to print something in one go that you can move and spin around. Amazeballs!

The kids have printed up little castles and little knickknacks they designed in TinkerCAD. This has been fun, but I was always bothered that I didn’t know a “real” design program that was also open source. Really, that left FreeCAD. I did a tutorial or two online, which were pretty great. I finally understood “parametric” and “fully constrained” \o/. But it was time consuming to learn, and I was impatient. I printed up a laptop holder I found online. Time went on.

Recently I got some new mini servers (I’ll save the setup and install story for another blog post) and I wanted to mount them to the wall. I knew it would be pretty trivial to design a custom bracket if I’d learned FreeCAD well enough, but I still hadn’t.

I just decided to go for it and use TinkerCAD, even though it’s proprietary and online. Not ideal, but I got what I wanted! I made left and right brackets that hold two mini servers against a plywood board. Perfect!

The point of this blog post is to let other folks know that they should just go for it – don’t let the tools slow you down. Once you see the power of creating something from nothing, and a physical something (not just software), you’ll see how empowering and inspiring it is!

I will say, you won’t get it right on your first try. I already knew this from some earlier prototyping I’d done, so I was ready for the 4 or 5 prints that failed before the final one succeeded. Don’t get flustered when, even after measuring twice before you print, it’s still wrong!

So, here’s what I made:

What you see here are two “L” type brackets that can each hold the corners of two mini servers. Note the hole through the cross brace that allows you to screw in the top screw (the first draft totally lacked this and it was nigh impossible to mount). These are of course shown rotated up as you’d mount them, which is a terrible position to print in.

While these held the servers pretty well in place, there was still a chance they could tip forward off the wall, so I made a toolless top bracket to match. The nice thing is that you can screw this in as one piece, and then slide the bracket off the screw mount. Some very light sanding was needed to ensure the two pieces slide together with just the right amount of friction (showing two of them side by side, but only one was used):

And here they are in situ:

If you have little servers and you want the STL, here’s the file and here’s the TinkerCAD link (which may go away b/c I’m not a fan of cloud services).

I hope if you’re thinking about designing your first part from scratch, you go for it!!

Easy way to play Boggle on a flight

1 minute, 15 seconds

I was taking a flight and wanted to play Boggle while onboard. I looked for a simple “show me a Boggle board” app for my phone, and only found ones of dubious quality, ad-laden ones, or ones that were something like “play a Boggle-like game with friends (account required, has ads, is not Boggle)”.

Eventually I gave up on apps when I found this great website that generates .png images of Boggle boards. Even better, I could do a bunch of curl calls (with a 5 second sleep in between, to be nice) to download a BUNCH of boards. Then I could use ImageMagick’s montage to stitch them all together in a 2×4 layout, which printed out nicely: montage -tile 2x4 -mode concatenate *.png output.png, to be exact ;) With 4 or 5 pages printed out, I cut out each board, stapled them together and had the perfect Offline Boggle Booklet. Worked great!
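
For the curious, the download-and-stitch step was roughly the following. This is a sketch: the board generator URL and file names are placeholders, not the actual site I used:

#!/usr/bin/env bash
# Politely download a handful of Boggle board images, then stitch them
# into one printable 2x4 sheet with ImageMagick's montage.
for i in $(seq 1 8); do
    curl -s -o "board-$i.png" "https://BOGGLE-SITE-HERE/board.png"
    sleep 5   # be nice to the server between requests
done
montage -tile 2x4 -mode concatenate board-*.png output.png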

However, on the flight I was thinking that it’d be pretty easy to write a bit of JavaScript and bang out a board that worked well offline. Further, I realized that CSS supports transformations, like transform: rotate(90deg), that could even mimic the rotated dice like the site I referenced above. Indeed, by the end of the flight I had all the hard parts worked out. This was mostly me working offline without Stack Exchange to remember this or that. Special thanks to xuth.net for posting the source to the perl app, which gave me some good ideas on how to write this!

After landing, and doing some refining, I’m happy to present Offline Boggle Boards. Load up the web page before you take off and you’re good to go! Also, for you coder types, pull requests welcome!

Keys-To-The-Tunnel

3 minutes, 39 seconds

tl;dr – An open source bash script which provisions a server to terminate TLS traffic with a valid certificate and reverse proxies the traffic back to a web based development instance via an SSH tunnel. Enables sharing of a dev instance and testing Android apps.

Intro

Are you an ngrok user (or one of its many competitors)? Do you use GitHub (GH) or work in a GH Organization? Is part of your application a web server? Do you ever need to test an Android application against a web server such that you need a valid TLS certificate? Have you ever wished it’d be easy to share your local dev instance with a colleague? If yes, then maybe Keys-To-The-Tunnel is for you!

Keys-To-The-Tunnel converts a newly provisioned, dedicated Ubuntu instance into a multi-user server that both terminates inbound TLS traffic and gives developers easy-to-follow instructions on how to connect the web-based instance of their app to it. By using your users’ easy-to-get public SSH keys, Keys-To-The-Tunnel takes the legwork out of setting up a reverse proxy.

This post discusses the impetus and some of the background ideas that make Keys-To-The-Tunnel possible. If you’re all ready to get started, the GitHub repository and the FAQ have everything you need!

DNS & TLS via Let’s Encrypt

The first cornerstone to Keys-To-The-Tunnel is that Let’s Encrypt offers free TLS certificates. Once the script creates the certificates, they’re automatically renewed because that’s how certbot works. Free certs with free renewals – love it!

By using valid certificates whose root CAs are already deployed in Android, the resulting vhosts in Apache allow you to test your Android apps against the URLs. Further, if you’re working with other members of your organization on the project, they can easily access your local instance via your URL. As well, while desktop browsers allow responsive-mode testing, sometimes you just need to see your app/site in a true mobile browser to properly test.

A critical part of making this all work is that you need both an A record and a wildcard record in your DNS for the host you’re on. This allows Let’s Encrypt to verify your certificate no matter which hostname is used, since hostnames are dynamically generated off GH handles.
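
A quick sanity check that the DNS is set up correctly is to confirm the bare domain and an arbitrary handle-style subdomain both resolve to the server. This is just a sketch; domain.com and the handle are placeholders:

# Both commands should print the tunnel server's public IP
dig +short domain.com
dig +short mrjones-plip.domain.com   # covered by the wildcard record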

Usernames & Keys

The second cornerstone of Keys-To-The-Tunnel is that GH allows anyone to know your public SSH key(s) (e.g. here’s mine). When you couple this with the fact that GH has an API to retrieve all the members of an organization (needs a token), it means that given just the organization name, the script can provision dozens of accounts with just a one liner:

./installTunnelServer.sh domain.com you@domain.com

Any GH users that do not have an SSH key are skipped, but you can simply re-run the script to add a user who has since added an SSH key to their account. In a soon to be released version, you can run the script in a cronjob so it will both add and remove users when your GH organization changes.
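
If you’re curious what those lookups boil down to, GitHub exposes both pieces of data with simple HTTP calls. A sketch, with ORG-HERE, TOKEN-HERE and the handle as placeholders:

# Public SSH keys for any GitHub user (no auth needed)
curl -s https://github.com/mrjones-plip.keys

# Members of an organization (needs a token with read:org scope)
curl -s -H "Authorization: token TOKEN-HERE" https://api.github.com/orgs/ORG-HERE/members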

SSH

SSH is the final of the three cornerstones of the project. By using SSH, specifically an SSH tunnel, something every developer already has installed, it’s easy to quickly set up an ad hoc tunnel. Here’s an example command that your users will be given:

ssh -T -R REMOTEPORT:127.0.0.1:LOCALPORT GH-HANDLE@domain.com

So if you had a web app running on http://localhost:8080 and your GH handle was mrjones-plip and Keys-To-The-Tunnel had assigned you a port of 1234, you would run this SSH command to set up the tunnel:

ssh -T -R 1234:127.0.0.1:8080 mrjones-plip@domain.com

When you run the SSH command, you know you’ve successfully connected when you’re greeted with this text:

Connected to SSH tunnel server
Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-51-generic x86_64)
Press 'ctrl + c' to exit

When Keys-To-The-Tunnel is done running, it will write an index.html to the root of your bare domain instructing users how to use the service:

Caveat Emptor

There are some gotchas to this script which you should be aware of:

  • Unlike ngrok, which generates a random hostname every time you connect, Keys-To-The-Tunnel always uses the same hostnames. While handy for ongoing use, it means the URLs may be discovered and scraped/crawled. As developer instances often have weak or no passwords, care should be taken to tear down the tunnel when not in use (or use strong passwords, or both!).
  • Highly distributed teams may consider deploying multiple instances of Keys-To-The-Tunnel so they’re closer to the team members. If a developer is in Kenya and is connecting to a server in Canada, their traffic might be MUCH slower than if the server was deployed very close to Kenya from a packet’s perspective.
  • While this should Just Work™, this script was only tested on LXD containers and Digital Ocean (referral link) Ubuntu droplets. Please open a ticket if it doesn’t work!

Kids DNS in the Time of Covid

6 minutes, 4 seconds

Like all of you parents lucky enough to still have a job during the COVID19 layoffs, I’ve been struggling to balance time at work, personal time, family time and being the family’s IT person. With school closed, and now all summer camps closed, our kids’ screen time (aka internet time) has gone up from 0.5 hrs/day to 3+ hrs/day. How do we ensure we have a safe computing environment for them?

DoT by Design

Originally, we had a MacOS workstation for the kids with parental controls enabled. This allowed us to do things like set up a 30 min per day limit, create separate accounts for each kid, limit which apps they could use and, most importantly, limit which URLs they could use (deny all, allow some). When coupled with my love of LXD/Pi-Hole/Quad9, that looked like this:

In this scenario the kids’ single shared workstation would get an IP lease from the DHCP server running on the pfSense router. This lease would specify the house-wide Pi-Hole, which sent all of its clear text DNS queries to Stubby, which in turn sent them encrypted to Quad9 via DNS over TLS (DoT). This is really nice, as not only do we get LAN-wide ad blocking, but we get LAN-wide encrypted DNS too. Score!

The kids’ workstation got no special treatment on the network and was a peer of every other DHCP lease on the LAN. However, with them needing to do school work and have fun and learn over the summer, they’ve since each gotten their own workstations. Now we have three workstations! It’s starting to be a hassle to maintain the lockdown on which sites they can browse. As a result, we just told them to “be good” and let them use their new workstations without any filters. This is sub-optimal!

.* Blacklist

How can we improve this situation to make it more tenable and more secure? By adding more instances of Pi-Hole, of course! It’s trivial to add a new instance of Pi-Hole with LXD. Just add a new container with lxc launch ubuntu: pi-hole2 and then install Pi-Hole on the new container with curl -sSL https://install.pi-hole.net | bash. It’s two one-liners that take all of 5 minutes.

For those of you like me who want an easy way to export your existing whitelist from macOS’s parental controls, check out the “Directory Service command line utility”, aka dscl. With this command you can create a file with all the URLs you’ve whitelisted. You can then easily import them into your new Pi-Hole instance (be sure to swap out USERNAME for your user):

dscl . -mcxexport /Users/USERNAME com.apple.familycontrols.contentfilter|grep http|cut -d'>' -f2|cut -d'<' -f 1|sort|uniq
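
To actually get that export into Pi-Hole, one rough sketch is to reduce the URLs to bare hostnames on the Mac and then feed the file to pihole -w on the new container. The file name and the sed cleanup are assumptions, adjust to taste:

# On the Mac: export the parental controls whitelist and reduce URLs to hostnames
dscl . -mcxexport /Users/USERNAME com.apple.familycontrols.contentfilter \
    | grep http | cut -d'>' -f2 | cut -d'<' -f1 | sort | uniq \
    | sed -e 's|https\?://||' -e 's|/.*$||' > allowed-hosts.txt

# On the new Pi-Hole container: whitelist every hostname in the file
xargs pihole -w < allowed-hosts.txt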

Back on the new Pi-Hole instance, if we set the upstream DNS server to be the initial Pi-Hole, the kids’ DNS gets all the benefits of the existing encrypted infrastructure but can add its own layer of blocking. Here I configure their Pi-Hole to just use the existing Pi-Hole as the resolver:

Specifically, if you add .* as a blacklist, EVERY site on the internet will fail to resolve. Then you can incrementally add sites you want to resolve to your whitelist:

Once we hard code each of the three workstations to use the new Kids DNS, we’re good to go! And this indeed works, but the savvy technologist will see the time-suck of a flaw in my plan: if you whitelist example.com, there are 5 or more sites you need to whitelist as well in order for example.com to work. This is because 99% of all sites use 3rd party JavaScript via content delivery networks (CDNs), have integrations with social media and of course often use the ever-present Google Analytics. It gets even more tricky because if you want to keep your kid from searching on Google, you can’t think, “Oh, I’ll just whitelist *.google.com and that’ll save a bunch of time!”. Along with that will come Gmail and who knows what else. I knew this issue would be there going in, so I wasn’t afraid to take the time to get it to work. But caveat emptor!

Teaching Kids to be Smart

Speaking of caveats of a plan – all parents should know that this plan is VERY easy to bypass once your kids start to figure out how the internet and their specific devices work. I’ve literally told my kids what I’m doing (stopping just about every site from working), why I’m doing it (the internet can be a horrible place) and that they can likely figure a way around it (see Troy Hunt’s tweet, as well as his larger write up on parenting online).

Like Troy Hunt, I’ll be super proud when they figure a way around it – and that day will come! But I do want to prevent them from randomly clicking a link and ending up somewhere we don’t want them to be. They can then ask us parents about why they can’t access a site or when it might be allowed.

Being honest with your kids about what you’re doing is the way for them to be aware that this is for their benefit. The end goal is not to lock the entire internet away forever, it’s actually the opposite. The end goal is to prepare them to be trusted with unfettered access to the internet. This will happen soon enough whether we parents want it or not!

Banning 8.8.8.8 et al.

While I was in there tuning up the DNS, I remembered that some clients on my network (I’m looking at you, Roku!) weren’t listening to the DHCP rules about using my preferred, encrypted DNS and were going direct to Google’s DNS (8.8.8.8) or others I didn’t like. After a little research I found I could redirect all outbound TCP and UDP DNS traffic so that all devices use my Pi-Hole/Stubby/Quad9 DNS* whether they think they are or not. For others running pfSense who want to do this, see the steps to “Blocking DNS Queries to External Resolvers” and then “Redirecting all DNS Requests to pfSense” (both thanks to this Reddit thread).

* We shall not speak of how devices will soon speak DNS over HTTPs (DoH), thus ruining this idea.

What about product X?

Some of you may be thinking, “this seems like a lot of work, why don’t you just implement an existing off the shelf solution?” Good question! For one, I like to DIY so I control my data and what’s done with it, instead of letting a 3rd party control it. As well, while there are home-based solutions, I prefer open source solutions. To put my money where my mouth is, I’ve just donated for the 2nd (3rd?) time to Pi-Hole. I encourage you to do the same!

To be clear though, this setup is a pretty crude tool to achieve the end result. It looks like there are some quite polished options out there if you’re OK with closed source, cloud hosted solutions. As well, there are of course other variations on the “Use Pi-Hole For Parental Controls” idea.

Wrapping Up

Now that we have all of this in place, we can trivially support N clients which we want to force to use the kids’ more locked-down DNS setup. This looks exactly like it did before, but we have an extra container on the LXD server (and, somewhat orthogonally, a fancier pfSense DNS blocking setup):

I suspect this setup won’t last for more than a year or two. As more and more sites get added to the whitelist, it will be harder and harder to maintain. Maybe after that I’ll give each kid their own Pi-Hole instance to run on an actual Raspberry Pi and let them do with it as they please ;)

(Of course just after I deployed this, Pi-Hole 5.0 came out, which offers the concept of groups, so you can likely implement the idea above in a single instance instead of multiple. A bummer for me now, but a win for all other Pi-Hole users, including my future use!)

Deploying a PGP SKS server on Ubuntu 18.04

3 minutes, 37 seconds

Intro

While the venerable Matt Rude has a great write up on compiling the latest SKS server for Ubuntu 18.04, using the one that ships with Ubuntu 18.04 is considerably easier. Given I just did this for work, this blog post has already been 99% written – handy!

The prerequisite for this guide is that you have root on an Ubuntu 18.04 server with a static, public IP address.

Hardening

This is out of scope for this guide, but you should secure your server. I recommend only allowing SSH with public keys, no passwords. If possible, only allow SSH from your subnet, or from behind a VPN/firewall. Only open the ports to the internet for the keyserver (11371) and SSH (22) if you must. Keep your software up to date – consider turning on automatic updates for security releases.

Initial install

Given that there’s a native Ubuntu SKS package, this one command gets you a LOT. All your config files are put in place, your user is created, a cronjob is added and there’s no compiling from source. While compiling is nice for knowing exactly what code you’re running, it’s slower and more laborious to get set up.

Run this as root:

apt-get update
apt-get -y install sks gnupg wget apache2 xz-utils

Configure Apache

Apache functions as a reverse HTTP proxy for the SKS HTTP interface, and so binds to the SKS HTTP port (11371) only on the public IP addresses.

Create the file /etc/apache2/sites-available/100-keyserver.conf with the following contents. Be sure to update the SERVER_IP_HERE to be the real IP:

#######################################
# Configuration for SKS reverse proxy #
#######################################

Listen SERVER_IP_HERE:11371
<VirtualHost *:11371>
    <Proxy *>
            Order deny,allow
            Allow from all
    </Proxy>
    ProxyPass / http://127.0.0.1:11371/
    ProxyPassReverse / http://127.0.0.1:11371/
    ProxyVia On
    SetEnv proxy-nokeepalive 1
</VirtualHost>

Run the following commands as root to set up apache with the new vhost and enable the right config. The last command should yield no errors:

a2enmod proxy
a2enmod proxy_http
a2enmod proxy_balancer
a2enmod lbmethod_byrequests
a2ensite 100-keyserver.conf
systemctl restart apache2
systemctl status apache2

Configure SKS

As everything was prepped for you with the apt-get install sks call from above, you only have to define server_contact: and hostname: in the /etc/sks/sksconf file. The contact should be your PGP key ID. Mine is 0xA105C2764BF2C4CB, for example. The hostname is the FQDN for your server.
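
For reference, those two lines in /etc/sks/sksconf end up looking something like this (the hostname is a placeholder; use your server’s FQDN and your own key ID):

hostname: keyserver.example.com
server_contact: 0xA105C2764BF2C4CB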

Initialize the SKS Database

Run the following commands to download a recent dump of the SKS database, decompress it, update permissions and import it. This puts less of a load on other SKS servers as we won’t need to synchronize much when we come online.

When running sks_build.sh, select the “normalbuild” option. This loads all keys into the database, unlike the “fastbuild” option, which uses the key dump files as a basis over which to operate but loads the keys much faster. The bunzip2 and load process will take several hours to complete.

Run this as root:


mkdir -p /var/lib/sks/dump/
cd /var/lib/sks/dump/
wget -e robots=off -r --no-parent https://mirror.cyberbits.eu/sks/dump
mv mirror.cyberbits.eu/sks/dump/*pgp .
cd /usr/lib/sks/
./sks_build.sh
chown -R debian-sks:debian-sks /var/lib/sks
systemctl enable sks

Note: The source of the key dump for our installation is https://mirror.cyberbits.eu/sks/dump/. However, a key dump may be obtained from any up-to-date SKS key server, since each is a mirror of all the others. Key dumps from public key servers are listed on sks-keyserver’s wiki on GitHub. If required, any of the sources listed on that page may be used to obtain the key dump.

If you need to rebuild the database from a fresh download for some reason, be sure to fully delete the DB file with rm -rf /var/lib/sks/DB first.

Membership File

For your keyserver to be known by others and have its keys kept up to date via synchronization, you need to contact the other keyserver operators so that they list you in their membership file. That way, when you list them, the servers will not error out.

The format of /etc/sks/memberships is simply KEYSERVER_URL PORT, one peer per line.
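
A minimal example of that file with made-up peers (use the real hostnames and ports your peers give you; 11370 is the usual SKS recon port):

keyserver.example.org 11370
pgp.example.net 11370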

After you create it, ensure it can be read by SKS:

chown -R debian-sks:debian-sks /etc/sks/memberships

Start and Test

Start your server with systemctl start sks. That’s it! You’re done.

Monitor the log files for any problems during startup and testing: tail -f /var/log/syslog. As well, check the output of systemctl status sks.

To test, open a browser window to http://YOUR_SERVER:11371. Search for a few different keys to verify that key information is being retrieved correctly.

Check the SKS stats page to verify the number of keys loaded: http://YOUR_SERVER:11371/pks/lookup?op=stats

If you have peered with other servers, verify that your server is showing up properly in the pool.

Support the EFF and others in the face of the Tax Cuts and Jobs Act

0 minutes, 28 seconds

I’ll keep this short and sweet: When I read the headline, “EFF sues to kill FOSTA, calling it ‘unconstitutional Internet censorship law'”, I realized how proud I am to support the Electronic Frontier Foundation (EFF) in the form of annual donations. They are an awesome group doing amazing things. Further, my financial support will not wane now that the Tax Cuts and Jobs Act has been passed.

With the direction the current administration is going with human rights and health care, and with the Supreme Court switching up a justice, now, more than ever, is the time to support the causes you believe in. Do what you can!

SYN Shop Class: SSH Keys with free VMs for members

0 minutes, 36 seconds

As part of putting some good hardware to use, I taught a class on how to use SSH Keys the other week. As this was the first time I taught this class, it took a good long while to do the prep for it. I figured it’d take 6 hours (3 sessions @ 2hr), but I think I ended up putting in closer to 15 hours. There’s still room for improvement!

The slides are available (PDF), as is the video (below). I hope to redo this class so that it’s a generic “learn about SSH Keys” which then could be open to the general public. That said, I’m really glad to see we’re up to 16 containers and the load average is only at 0.37 right now ;)

PS – I know, I know, those are not VMs, they’re actually containers!

USB-C accessories I like (and some I don’t)

2 minutes, 52 seconds

After buying a USB-C power adapter that didn’t work out, I’d been on the hunt for a replacement. After getting a Chromebook for video chats and music (and curiosity :), which has USB-C charging, I had an extra need for cheap, portable chargers and other simple USB-C accessories.

All these devices were tested on either my ASUS Chromebook Flip C101PA-DB02 or my Dell XPS 13 9350 or both.

What I really wanted was to plug a single cord into my laptop and have it work with my external webcam, external speakers, external 4k monitor @60Hz, external wireless headset and USB3 gigabit NIC. In the end, for stability, I ended up going with two cables: one for power and DisplayPort, and one USB 3.0 cable for my USB 3.0 hub with everything else. My laptop not crashing makes me totally OK with the two cables ;)


First up is a good, small travel charger with USB-C and USB-A ports. It delivers the full 45W, so it’ll work nicely with your MacBook and Dell XPS 13. It’ll work with your MacBook Pro too, but not at the full rate of the stock charger. It’s the Yojock YKJ USB-C PD Charger with 45W:

After that, if you want a charger that’ll work at home (read: no folding plug), then this charger with attached cable is quite small and cheap. It’s the Nekteck 45W USB-C charger. It only has USB-C and, beware, my XPS 13 9350 says it’s only delivering 40W, but I’ve had no problems with it keeping my laptop charged even when it’s powering an external 4k monitor, running 4 containers, a full IDE and twenty odd tabs in FireFox.

Then, in the non-power-adapter accessories:

These items I don’t recommend, as they didn’t work as advertised with my Ubuntu laptop:

To be clear, I am gathering the referral kickbacks from Amazon on these links, but I do use these products and wasn’t paid for the positive review.

If you have questions, feel free to contact me or ask in the comments!