plip blog (https://blog.plip.com)

My first from-scratch 3D printed design (that is actually useful)
https://blog.plip.com/2022/08/20/my-first-from-scratch-3d-printed-design-that-is-actually-useful/
Sun, 21 Aug 2022

I’ve had a 3D printer for about 8 months now. It’s been fun to download STLs from the internet and print them. We’ve done articulated snakes and planetary gear prints. It is really amazing to print something in one go that you can move and spin around. Amazeballs!

The kids have printed up little castles and little knickknacks they designed in TinkerCAD. This has been fun, but I was always bothered that I didn’t know a “real” design program that was also open source. Really, that left FreeCAD. I did a tutorial or two online which were pretty great. I finally understood “parametric” and “fully constrained” \o/. But it was time-consuming to learn, and I was impatient. I printed up a laptop holder I found online. Time went on.

Recently I got some new mini servers (I’ll save the setup and install story for another blog post) and I wanted to mount them to the wall. I knew it would be pretty trivial to design a custom bracket if I’d learned FreeCAD well enough, but I still hadn’t.

I just decided to go for it and use TinkerCAD, even though it’s proprietary and online. Not ideal, but I got what I wanted! I made a left and a right bracket that held two mini servers against a plywood board. Perfect!

The point of this blog post is to let other folks know that they should just go for it – don’t let the tools slow you down. Once you see the power of creating something from nothing, and a physical something (not just software), you’ll see how empowering and inspiring it is!

I will say, you won’t get it right on your first try. I already knew this from some earlier prototyping I’d done, so I was ready for the 4 or 5 prints that failed before the final one succeeded. Don’t get flustered when, even after measuring twice before you cut (er, print), it’s still wrong!

So, here’s what I made:

What you see here are two “L” type brackets, each of which can hold the corners of two mini servers. Note the hole through the cross brace that allows you to screw in the top screw (the first draft totally lacked this and it was nigh impossible to mount). They’re shown rotated up the way you’d mount them, which is a terrible position to print in.

While these held the servers pretty well in place, there was still a chance they could tip forward off the wall, so I made a toolless top bracket to match. The nice thing is that you can screw this in as one piece, and then slide the bracket off the screw mount. Some very light sanding was needed to ensure the two pieces slide together with just the right amount of friction (two of them are shown side by side, but I only used one):

And here they are in situ:

If you have little servers and you want the STL, here’s the file and here’s the TinkerCAD link (which may go away b/c I’m not a fan of cloud services).

I hope if you’re thinking about designing your first part from scratch, you go for it!!

Legba the Net-tracker
https://blog.plip.com/2022/04/27/legba-the-net-tracker/
Thu, 28 Apr 2022

Intro

I’d been meaning to learn how to write an app using something more than CSV files, but less than MariaDB, to store data – I’m thinking SQLite of course! Then along came the desire to have a simple way to track when a computer was on a network as a proxy for kids’ daily screen time. After all, the network is the computer, right?

While there are very many ways to detect whether a computer is online (more on this later), I thought it’d be fun to write a simple app that could correlate multiple IPs to a single person, and then give a histogram of minutes per day per person. Given this is just a proxy for screen time, it’s fine if it doesn’t have alerting, password protection or even a way to prevent going over the allotted time per day. The goal is for any interested parties to see how long a device has been on for the current day. It’s then up to the family to have a discussion about what it means to go over your daily allotment.

Ok, let’s do this! We have a requirement to track computers being online and to write the results to, and read them from, a SQLite DB. I’ve been groovin’ on learning Python, so let’s double down and use that. I did some Wikipedia exploring, read about Papa Legba, and thought it made a mighty fine sounding name. Finally, after some nudging from a friend, we’ll package it up in Docker so it’s easy to try out and host in an isolated container.

Ping FTW

The first step to using Legba is to define a list of users and which IPs they’ll be on. Very likely the best way to do this is to either use static IPs on your LAN clients, or have your DHCP server hand out the same IP per MAC every time.

Then you’ll create a conf.py file copied from the conf.example.py file and fill it out. Here we see Jon and Habib have one IP each, whereas Mohamed has two:

trackme = {
    'Jon': ["192.168.1.82"],
    'Habib': ["192.168.1.12"],
    'Mohamed': ["192.168.1.240", "192.168.1.17"]
}

The code to track whether a device is online is achieved via the subprocess module, in a ping() function with just two lines that send a single ICMP packet:

import subprocess

# thanks https://stackoverflow.com/a/10402323
def ping(host):
    """ Ping a host on the network. Returns boolean """
    command = ["ping", "-c", "1", "-w1", host]
    return subprocess.run(args=command, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).returncode == 0

Back in the main() function, we then read in the config, loop over each person and try to ping() each of their IPs. If we see them online, we write to the DB via record(). It ended up, just as I’d hoped, that Python’s SQLite library is robust and it’s just 6 lines to insert a row:

sql = ''' INSERT INTO status(name,state,date) VALUES(?,?,?) '''
cur = sqlite.cursor()
activity = (name, state, datetime.now())
cur.execute(sql, activity)
sqlite.commit()

return cur.lastrowid

Just before the end of the loop we call probably the most complex function of the lot, output_stats_html(). This function is responsible for reading the day’s active users, getting each user’s activity by hour and their total for the day, and finally outputting static HTML as well as a static JSON file that gets fetched via AJAX so the stats auto-refresh.
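The real implementation is in the repo, but to make the idea concrete, here’s a minimal sketch of the kind of per-hour query output_stats_html() needs. It assumes one row gets written per minute a device is seen online and that the table and columns match the insert above; the function name and db_path argument are just illustrative:

import sqlite3
from datetime import date

def minutes_by_hour(db_path, name):
    """ Rough count of rows (i.e. online minutes) per hour for `name` today """
    sql = """
        SELECT strftime('%H', date) AS hour, COUNT(*) AS minutes
        FROM status
        WHERE name = ? AND date(date) = ?
        GROUP BY hour
        ORDER BY hour
    """
    with sqlite3.connect(db_path) as conn:
        return dict(conn.execute(sql, (name, date.today().isoformat())).fetchall())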

At the end of the loop we sleep for 60 seconds. In theory, if you had hundreds (thousands?!) of IPs to track and they were on connections with >500ms latency, a single pass would take way longer than 60 seconds. Legba will not scale to this level. It’s currently been comfortably tested with 5-10 devices on a LAN where each device has ~20ms of latency.
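Putting it all together, the flow of the loop is roughly this (a paraphrase of the description above rather than a copy of the real main(); the record() signature in particular is assumed):

import time

def main():
    while True:
        for name, ips in trackme.items():
            # a person counts as online if any one of their IPs answers
            if any(ping(ip) for ip in ips):
                record(name, True)
        output_stats_html()
        time.sleep(60)  # one pass per minute, per the scaling caveat above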

A histogram is worth a 1000 words

After you’ve done a bit of a git clone with a lil pip3 install, fleshed out your own conf.py and done a little systemd love, you’ll have some sweet sweet histograms! (Some keen-eyed readers may note this histogram looks familiar ;)

It’s interesting to note that mobile devices, as seen with “Adnon Cell”, are effectively on all the time. In this sense, Legba is not much use for tracking a cell phone. Meanwhile, Bobby Tables’ desktop, Adnon’s Laptop and Chang’s Nintendo Switch all work as expected (NB – I didn’t actually test with a Switch).

Existing Solutions

I’ve been running this solution for just about 4 months now. It’s been a great way for our family to have an open discussion about what it means to spend too much time on the computer, and it’s been rock solid. Checking with ls and select count(*) from status; I see my DB is 23MB and has 487,069 rows.

Given the simplicity of this app, could this data be easily stored and retrieved elsewhere? When I wrote the app, I didn’t care – I just wanted to write it for the fun of writing it! However, I was listening to episode 171 of Late Night Linux and they mentioned how utilitarian Telegraf is. It struck me that, indeed, if you had Telegraf, InfluxDB and Grafana (aka “the TIG stack”) already set up, it would be pretty trivial to capture these same stats. I would do this by setting up a centralized instance of Telegraf and either use the built-in Ping plugin, or possibly the more extensible [[inputs.exec]] input type. With the latter, you could even re-use parts of Legba to pretty trivially get the data into InfluxDB. Then it would be equally trivial to slice up the ping counts per hour, per user and have a slick dashboard. Just food for thought!
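For example, a centralized Telegraf instance pinging the same IPs would need only a config along these lines (an untested sketch; check the inputs.ping plugin docs for the exact field names before relying on it):

# telegraf.conf sketch: ping each tracked IP once a minute
[agent]
  interval = "60s"

[[inputs.ping]]
  urls = ["192.168.1.82", "192.168.1.12", "192.168.1.240", "192.168.1.17"]
  count = 1

[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "legba"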

Otherwise, I hope someone other than me gives Legba a try!

DockStat: Docker stats in a simple to use and easy to read Bash script
https://blog.plip.com/2021/09/28/dockstat-docker-stats-in-a-simple-to-use-and-easy-to-read-bash-script/
Wed, 29 Sep 2021

Intro

At work I’ve been doing a lot of Docker based projects. I ended up writing a neat little Bash utility which I then recently extended into what I’m calling DockStat. It shows running containers and their related resources. You could use it if you’re repeatedly upping, downing and destroying docker containers over and over like I was. Or maybe you just want a nice little dashboard to see what’s running on your server?

DockStat at work

However, if you have more than say a dozen active containers, this script might not scale nicely (oh, perhaps a monitor in portrait mode might fix this? ;)

Being the good little open source nerd that I am, this is of course available for download with a permissive license in the hopes that someone will find it useful or possibly even offer a PR with some improvements to my nascent Bash coding skills.

Background

With countless primers on how to use Docker out there, I won’t get into what the commands all mean, but the impetus for this script was repeatedly running docker ps to show a list of the active containers. A bit later I remembered you could run endless Bash loops with a one-liner which made the process a bit nicer as it auto-refreshed:

while true; do clear;date;docker ps;sleep 5; done

A bit after that I stumbled upon the glorious watch command! Wow – just when you think you know an OS, it comes along and shows you there’s this awesome command it’s been hiding from you all these years. Thanks Linux!

watch greatly improved on my Bash one-liner: it was an even shorter one-liner, could trivially be configured to refresh at whatever frequency you wanted, and could show a header or not. The icing on the cake was that it prevents the flash of a redraw upon refresh:

watch -t -n 1 docker ps

About now I got more cozy with the --format feature built into most Docker command line calls. This was handy because I could drop the fields in the docker ps output that I wasn’t interested in. Here’s maybe the simplest of them, which shows JUST the container name and how long it has been running:

docker ps --format='{{.Names}} {{.Status}}'

Research continued on how to architect the helper script. I needed to show different data than docker ps had to offer. I branched out into docker inspect as well as finding other Dockeristas’ one-liners that I shamelessly co-opted (I’d be honored if anyone did the same with my work!!). This allowed me to join ps and inspect, as seen with this fave that shows all the running containers and their internal IPs:

docker ps --format='{{.Names}}'|xargs docker inspect --format='{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

I was ready to assemble all the Docker data ducats I’d gathered into a nice CLI dashboard so our app developers could see the status of our containers booting. Finding a solution for this both let the helper script spring forth and simultaneously created the nascent DockStat. What made it possible was a Bash utility that is easy to use, automates flash-less refreshes and introduces basic terminal layout functionality with a near zero learning curve (assuming you know Bash): Bash Simple Curses
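To give a flavor of what that looks like, here’s a toy sketch (not DockStat itself); the window/append/endwin/main_loop helpers are my recollection of the Bash Simple Curses API, so double-check its README before copying this:

#!/usr/bin/env bash
# Toy DockStat-style dashboard using Bash Simple Curses (helper names assumed)
source ./simple_curses.sh

main() {
    window "Containers" "green"
    while read -r line; do
        append "$line"
    done < <(docker ps --format='{{.Names}} {{.Status}}')
    endwin
}

main_loop 1   # redraw every second without the flash of a full clear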

Thanks

Thanks to James and Russ for reviewing my code and an early draft of this post. I’ve been trying to improve both my posts and code, and this won’t happen without folks’ kind donation of their time and input!

Easy way to play Boggle on a flight
https://blog.plip.com/2021/07/04/easy-way-to-play-boggle-on-a-flight/
Mon, 05 Jul 2021

I was taking a flight and wanted to play Boggle while onboard. I looked for a simple “show me a Boggle board” app for my phone, and only found ones of dubious quality, ad-laden ones, or ones that were something like “play a Boggle-like game with friends (account required, has ads, is not Boggle)”.

Eventually I gave up when I found this great website that generates .png images of Boggle boards. Even better, I could do a bunch of curl calls (with a 5 second sleep in between, to be nice) to download a BUNCH of boards. Then I could use ImageMagick’s montage to stitch them all together in a 2×4 layout, which printed out nicely. montage -tile 2x4 -mode concatenate *.png output.png to be exact ;) With 4 or 5 pages printed out, I cut out each board, stapled them together and had the perfect Offline Boggle Booklet. Worked great!
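The whole pipeline is only a few lines of shell. Roughly (BOARD_URL here is a placeholder, not the real site’s address):

#!/usr/bin/env bash
# Sketch: grab a pile of Boggle boards, then tile them 2x4 for printing
BOARD_URL="https://example.com/boggle.png"   # placeholder for the board generator

for i in $(seq 1 8); do
    curl -s -o "board_${i}.png" "$BOARD_URL"
    sleep 5   # be nice to the site
done

montage -tile 2x4 -mode concatenate board_*.png output.png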

However, on the flight I was thinking that it’d be pretty easy to write a bit of JavaScript and bang out a board that worked well offline. Further, I realized that CSS supports transformations, like transform: rotate(90deg), which could even mimic the rotated dice like the site I referenced above. Indeed, by the end of the flight I had all the hard parts worked out. This was mostly me working offline without Stack Exchange to remember this or that. Special thanks to xuth.net for posting the source to their Perl app, which gave me some good ideas on how to write this!

After landing, and doing some refining, I’m happy to present Offline Boggle Boards. Load up the web page before you take off and you’re good to go! Also, for you coder types, pull requests welcome!

Keys-To-The-Tunnel
https://blog.plip.com/2021/03/13/keys-to-the-tunnel/
Sat, 13 Mar 2021

tl;dr – An open source bash script which provisions a server to terminate TLS traffic with a valid certificate and reverse proxies the traffic back to a web based development instance via an SSH tunnel. Enables sharing of a dev instance and testing Android apps.

Intro

Are you an ngrok user (or one of its many competitors)? Do you use GitHub (GH) or work in a GH Organization? Is part of your application a web server? Do you ever need to test an Android application against a web server such that you need a valid TLS certificate? Have you ever wished it’d be easy to share your local dev instances with a colleague? If yes, then maybe Keys-To-The-Tunnel is for you!

Keys-To-The-Tunnel converts a newly provisioned, dedicated Ubuntu instance into a multi-user server that both terminates inbound TLS traffic and gives developers easy to follow instructions on how to connect the web based instance of their app to it. By using the easy to get public SSH keys of your users, Keys-To-The-Tunnel takes the legwork out of setting up a reverse proxy.

This post discusses the impetus and some of the background ideas that make Keys-To-The-Tunnel possible. If you’re all ready to get started, the GitHub repository and the FAQ have everything you need!

DNS & TLS via Let’s Encrypt

The first cornerstone to Keys-To-The-Tunnel is that Let’s Encrypt offers free TLS certificates. Once the script creates the certificates, they’re automatically renewed because that’s how certbot works. Free certs with free renewals – love it!

By using valid certificates with root CAs already deployed in Android, the resulting vhosts in Apache allow you to test your Android apps against the URLs. Further, if you’re working with other members of your organization on the project, they can easily access your local instance via your URL. As well, while desktops allow responsive mode testing, sometimes you just need to see your app/site in a true mobile browser to properly test.
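Under the hood, each GH handle ends up with a vhost along these lines. This is a hand-written approximation rather than the exact template the script writes out; the port and paths are illustrative:

<VirtualHost *:443>
    ServerName mrjones-plip.domain.com
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/domain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/domain.com/privkey.pem

    # send everything to the developer's SSH tunnel listening on their assigned port
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:1234/
    ProxyPassReverse / http://127.0.0.1:1234/
</VirtualHost>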

A critical part of making this all work is that you need both an A record and a wildcard record in your DNS for the host you’re on. This allows Let’s Encrypt to verify your certificate no matter which hostname is requested, as hostnames are dynamically generated from GH handles.
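In zone file terms that’s just two records, something like this (203.0.113.10 standing in for your server’s real IP):

domain.com.    IN  A  203.0.113.10
*.domain.com.  IN  A  203.0.113.10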

Usernames & Keys

The second cornerstone of Keys-To-The-Tunnel is that GH allows anyone to know your public SSH key(s) (e.g. here’s mine). When you couple this with the fact that GH has an API to retrieve all the members of an organization (needs a token), it means that given just the organization name, the script can provision dozens of accounts with just a one-liner:

./installTunnelServer.sh domain.com you@domain.com

Any GH users that do not have an SSH key are skipped, but you can simply re-run the script to add a user who has since added an SSH key to their account. In a soon to be released version, you’ll be able to run the script in a cronjob so it will both add and remove users when your GH organization changes.
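Both GH features it leans on are plain HTTP calls, roughly like this (ORG, TOKEN and the handle are placeholders):

# list an organization's members (needs a token with read:org scope)
curl -s -H "Authorization: token $TOKEN" \
    "https://api.github.com/orgs/$ORG/members" | jq -r '.[].login'

# anyone's public SSH keys are world readable
curl -s "https://github.com/mrjones-plip.keys"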

SSH

SSH is the final of the three cornerstones of the project. By using SSH, specifically an SSH tunnel, something every developer has already installed, it’s easy to quickly set up an ad hoc tunnel. Here’s an example command that your users will be given:

ssh -T -R REMOTEPORT:127.0.0.1:LOCALPORT GH-HANDLE@domain.com

So if you had a web app running on http://localhost:8080, your GH Handle was mrjones-plip and Keys-To-The-Tunnel had assigned you a port of 1234, you would run this SSH command to set up the tunnel:

ssh -T -R 1234:127.0.0.1:8080 mrjones-plip@domain.com

When you run the SSH command, you know you’re successfully connected when you’re greeted with this text:

Connected to SSH tunnel server
Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-51-generic x86_64)
Press 'ctrl + c' to exit

When Keys-To-The-Tunnel is done running, it will write an index.html to the root of your bare domain instructing users how to use the service:

Caveat Emptor

There are some gotchas to this script which you should be aware of:

  • Unlike ngrok which generates a random hostname every time you connect, Keys-To-The-Tunnel always uses the same hostnames. While handy for on-going use, it means the URLs may be discovered and scraped/crawled. As developer instances often have weak or no passwords, care should be used to tear down the tunnel when not in use (or use strong passwords – or both!).
  • Highly distributed teams may consider deploying multiple instances of Keys-To-The-Tunnel so they’re closer to the team members. If a developer is in Kenya and is connecting to a server in Canada, their traffic might be MUCH slower than if the server was deployed very close to Kenya from a packet’s perspective.
  • While this should Just Work™, this script was only tested on LXD containers and Digital Ocean (referral link) Ubuntu droplets. Please open a ticket if it doesn’t work!

Portable, headless Raspberry Pi development
https://blog.plip.com/2020/12/09/portable-headless-raspberry-pi-development/
Thu, 10 Dec 2020

The Problem

Back when I was writing Cattrotar on a Raspberry Pi (or really, during any of the myriad other times I was hacking on a small embedded device), I often faced some sort of problem or another:

  • I thought I set up the network for a device, but can no longer remotely access it over WiFi
  • I’ve just written a new OS to a microSD card, want to further configure the OS but it’s not yet reachable on the network
  • I want to take my set up to the hacker space to be social while I work on an issue
  • I’m waiting in the library for my kids to finish up a class and can’t easily get my Pi on the library WiFi to hack on it from my laptop
  • I don’t want to carry around, or even bother setting up, a keyboard and monitor to get access to my Pi

The Solution

As many of you lovely readers may know, I’m already a fan of small travel routers made by GL.iNet, like the fine GL-MT300N-V2. For a scant $20 (shipped!) you get a DHCP server you can not only bring with you, but which will happily run off a battery. Better yet, because it has a USB port, you can actually power your Pi off the wee router. Daisy chain USB power for the win! The final icing on the cake is that you can make the two Ethernet ports both be LAN (instead of LAN & WAN). While these are great devices, their WiFi stack is a bit flaky, so being able to hardwire your network devices is a great, if slightly cumbersome, workaround.

The net result is that your around town bag can have:

  • A USB Battery
  • Two very short Ethernet cables
  • A USB Ethernet dongle (w/ USBC adapter tah boot)
  • A travel router (GL-MT300N-V2 in this case)
  • Two micro USB cables
  • Pi (In the picture below I have an Orange Pi Zero with a temp sensor and a screen threaded through one side of a case)

When you put it all together, you get a nice tidy mess and it works just great, solving all of the above problems. Your laptop can be on two networks, so it has local access to the Pi and to the internet too. Further, if the WiFi you’re on isn’t too hostile with its captive portals and such, you can actually have the GL.iNet router act as a WiFi repeater and backhaul that bandwidth to your Pi. Run your apt update away!

This set up is not only small and lightweight, but lets me work un-tethered on my Pi setup and Python code. Now if this frickin pandemic would just be solved, I’d actually be able to take this out of the house instead of just co-working with my partner in our office in our house!

Happy Pi Hacking!

Replacing two iPods with a Bash script
https://blog.plip.com/2020/08/30/replacing-two-ipods-with-a-bash-script/
Sun, 30 Aug 2020

13 years ago we got an iPod to actually listen to music on the go. It was awesome! Some time later we had our first kid, and some time after that smartphones became prevalent, so our iPod fell out of use. But it was about 9 years ago that we started to use the old iPod as an easy way to play the same bedtime playlist for our kids (oh yeah, we had a 2nd kid at some point too ;). Soon, the kids split into their own bedrooms, so we picked up a used iPod on eBay and loaded the same bedtime playlist onto it. These two iPods dutifully played the same songs day in and day out every night for years. They were even more utilitarian than designed, but worked well at their finite task (Photo by Nicnicoleleeolee):

However, over time things started to not work. First, one of the fancy docks we used as a speaker stopped charging whichever iPod was in it. This meant every week or so we’d have to swap it over to the good dock to charge up. But then one started to lock up and had to be rebooted. We feared we’d have to replace them soon.

It was around this time that I started using the Cast All The Things software (aka catt) in my quest to both (finally) learn Python and to easily control the volume on a Chromecast. Check out my cattmate and cattrotar projects! catt is a command line script and Python library that allows you to easily control and play videos/music on Chromecasts. It was also around the time that Chromecast Audios stopped being made, so I’d stocked up on them. We have about 4 or 5 plugged in here and there about the house, including one in each kid’s room. They work really well!

By now you can likely see how this post is going to resolve itself, but let’s find out, shall we?

Yes, that’s right, I wrote a full featured website that you could pull up and easily play music in whichever room needed to hear the playlist. It was simple, yet fancy ;) On the front end there was a pair of buttons, one for each room. When you pressed a button, it sent an AJAX call to the single endpoint on the back end. The back end then made an exec call to a Bash script. This in turn used catt to play an MP3 on the specified Chromecast.

This worked great for a day or so. But then a bug reared its ugly head. Let me tell you, the cost of a software bug in your web app which involves waking your child up at 10.30pm is EXTREMELY high. Like, unacceptably high. Sleeping children are gold. Parents get grown-up time, the kids get much needed rest and we’re all better for it the next day. Don’t fuck with a kid’s sleep.

After some unfruitful debugging, I got lazy and realized I’d already installed the wonderful termux on my phone, along with a $2 add-on to have a Bash script launcher widget on my phone’s desktop. So, after a dozen or so minutes of coding and a little ssh-keygen -t ed25519 for good measure, I had this on my phone:

Here’s what happens when you press one of those four links:

  1. Call a local script on the phone with the same name shown above
  2. Each script has the same contents but just calls the remote server with a different argument: ssh napserver controlmusic.sh CHILD PLAY_STOP. So for the first one above, that’d be ssh napserver controlmusic.sh e play
  3. The remote server (hardened, of course, to only allow one command to be run for that SSH key) runs a catt command inside controlmusic.sh: /usr/bin/catt -d DEVICE COMMAND OPTIONAL_COMMAND. Again, for the first button that’d look like: /usr/bin/catt -d "E's Chromecast" cast songs.mp3 (a rough sketch of such a script is just below)
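For the curious, a stripped-down controlmusic.sh could look something like this. Note this is a sketch: the kid letters, device names and MP3 path are placeholders, and the real hardening lives in ~/.ssh/authorized_keys via a command="..." prefix so the key can only ever run this script:

#!/usr/bin/env bash
# controlmusic.sh KID PLAY_STOP   e.g. "controlmusic.sh e play"
set -euo pipefail

KID="$1"
ACTION="$2"

case "$KID" in
    e) DEVICE="E's Chromecast" ;;   # placeholder device names
    v) DEVICE="V's Chromecast" ;;
    *) echo "unknown kid: $KID" >&2; exit 1 ;;
esac

if [ "$ACTION" = "play" ]; then
    /usr/bin/catt -d "$DEVICE" cast /home/media/songs.mp3   # placeholder path
else
    /usr/bin/catt -d "$DEVICE" stop
fi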

So, while not nearly as fancy as the web app I initially wrote, it works every time, has saved me the time of debugging the web app (aka let me be lazy ;) and, most importantly, does not wake the kids up after they’ve gone to sleep! Icing on the cake is that I continued my lazy streak and bought the app for my partner’s phone so they could activate the playlist while I’m out of town (instead of me VPNing in and activating it remotely on request ;).

How to get the Dell Windows 10 OS Recovery to boot on XPS 13″ 9350
https://blog.plip.com/2020/08/07/how-to-get-dell-os-recovery-to-boot-xps-13-9350/
Sat, 08 Aug 2020

Remember that awesome Dell XPS 13 I got back in 2016? The one that came with Windows 10 but that I then wiped clean with Ubuntu 16.04? Well, it’s still goin’ strong! So strong that it’s time to sell it to another happy user now that work got me an upgrade. In that post I just linked you can read about how I upgraded it to have a better wireless card. Since then I’ve also upgraded to Ubuntu 18.04 with zero hardware compatibility issues. Further, I put in a faster NVMe drive and replaced the battery with an OEM Dell one to give it a bit more running time (battery health in the BIOS showed as bad).

My buyer wanted to run the stock Windows 10 OS, so it was up to me to get it back to its roots to close the deal. Dell, it turns out, makes it REALLY easy to do a clean re-install of Windows 10 on your XPS laptop. They have this great tool called the Dell OS Recovery Tool. First you go to their site and punch in your Dell Service Tag. Then you download a Windows executable. When you run that, you again punch in your Service Tag. Then the magic happens: the software builds a bootable Windows 10 USB image with all the drivers needed for your specific laptop. This is totally awesome and saves a TON of time. Thanks Dell!

If you’re not on a Dell then that center panel isn’t available, still works though!

Then you wait a while (10 min?) while the program runs its course and you see the final screen saying it’s done. Oddly, the other steps are not clickable to find out more information; they just show you’re on step two of five:

Now you should just need to reboot your laptop and press “F12” to get the one-time boot prompt so you can specify the USB drive to boot off of (screenshot courtesy of jasoncoltrin.com):

However, no matter what I did, my USB drive never showed up under “UEFI BOOT” there. My Ubuntu 18.04 install drive? It showed up. Ubuntu 20.04 server install drive? Yup, no problems. Ok, maybe it’s the brand of USB drive? I reflashed a different brand of USB drive and it had the same problem. Maybe BIOS settings are tweaked? I reset the BIOS back to defaults, still no option to boot. After a couple hours, I walked away and slept on it.

The next day I was researching more and someone mentioned something about an NTFS partition:

One thing folks may not realize is the Flash Drive has to be formatted as FAT 32 in order to boot as UEFI..

Dell Forum Post

This was a bit silly though – the Dell Recovery Tool completely formats the USB drive so it’s pristine and nothing is left on it but the FAT boot partition. Wait, is it silly? Let’s look at the USB drive in question in the Ubuntu 20.04 Disks utility:

Yup, see, just like I said, FAT boot par….hey!! What’s that other partition there!?

What? What’s this possibly BIOS confusing NTFS partition doing there? Let’s click that minus icon:

Yes I’m sure, delete that thing!!
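(For the GUI-averse, the same fix works from a terminal; sdX below is a stand-in for whatever device your USB stick actually is, so triple-check before writing anything:)

lsblk                          # find the USB stick, e.g. sdb1 (FAT) and sdb2 (NTFS)
sudo parted /dev/sdX print     # confirm which partition number is the stray NTFS one
sudo parted /dev/sdX rm 2      # delete it (2 is just an example number)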

Now let’s see what the BIOS thinks when I reboot with a USB drive with just the one FAT partition that DELL originally wrote. It thinks life is wonderful and is happy to proceed with re-installing Windows 10. This, by the way, takes a good number of hours. Be patient.

So, tl;dr – if you can’t get your Dell USB Windows 10 Restore image to boot on your Dell XPS 13 9350, and likely a lot of other Dell models, consider deleting this extra partition on the USB drive. It worked magic for me.

Kids DNS in the Time of Covid
https://blog.plip.com/2020/06/21/kids-dns-in-the-time-of-covid/
Mon, 22 Jun 2020

Like all of you parents lucky enough to still have a job during the COVID19 layoffs, I’ve been struggling to balance time at work, personal time, family time and being the family’s IT person. With school closed, and now all summer camps closed, our kids’ screen time (aka internet time) has gone up from 0.5hrs/day to 3hrs+/day. How do we ensure we have a safe computing environment for them?

DoT by Design

Originally, we had a MacOS workstation for the kids with parental controls enabled. This allowed us to do things like set up a 30 min per day limit, create separate accounts for each kid, limit which apps they could use and, most importantly, limit which URLs they could use (deny all, allow some). When coupled with my love of LXD/Pi-Hole/Quad9, that looked like this:

In this scenario the kids’ single shared workstation would get an IP lease from the DHCP server running on the pfSense router. This lease would specify the house-wide Pi-Hole, which sent all its clear text DNS queries to Stubby, which in turn sent them encrypted to Quad9 via DNS over TLS (DoT). This is really nice, as not only do we get LAN-wide ad blocking, but we get LAN-wide encrypted DNS too. Score!

The kids’ workstation gets no special treatment on the network and is a peer of every other DHCP lease on the LAN. However, with them needing to do school work and have fun and learn over the summer, they’ve since each gotten their own workstations. Now we have three workstations! It was starting to be a hassle to maintain the lockdown on which sites they could browse. As a result, we just told them “be good” and let them use their new workstations without any filters. This is sub-optimal!

.* Blacklist

How can we improve this situation to make it more tenable and more secure? By adding more instances of Pi-Hole, of course! It’s trivial to add a new instance of Pi-Hole with LXD. Just add a new container with lxc launch ubuntu: pi-hole2 and then install Pi-Hole on the new container with curl -sSL https://install.pi-hole.net | bash. It’s two one-liners that take all of 5 minutes.

For those of you like me who want an easy way to export your existing whitelist from MacOS’s parental controls, check out the “Directory Service command line utility” aka dscl. With this command you can create a file with all the URLs you’ve whitelisted. You can then easily import them into your new Pi-Hole instance (be sure to swap out USERNAME for your user):

dscl . -mcxexport /Users/USERNAME com.apple.familycontrols.contentfilter|grep http|cut -d'>' -f2|cut -d'<' -f 1|sort|uniq
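To get that list into Pi-Hole, something along these lines works; pihole -w is Pi-Hole’s whitelist command, the sed just strips the scheme and path, and allowed.txt is whatever you named the dscl output:

sed -e 's|^https\?://||' -e 's|/.*$||' allowed.txt | sort -u | while read -r domain; do
    pihole -w "$domain"
done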

Back on the new Pi-Hole instance, if we set the upstream DNS server to be the initial Pi-Hole, the kids’ DNS gets all the benefits of the existing encrypted infrastructure but can add its own layer of blocking. Here I configure their Pi-Hole to just use the existing Pi-Hole as the resolver:

Specifically, if you add .* as a regex blacklist entry, EVERY site on the internet will fail to resolve. Then you can incrementally add sites you want to resolve to your whitelist:

Once we hard-code each of the three workstations to use the new Kids DNS, we’re good to go! And this indeed works, but the savvy technologist will see the time-sucking flaw in my plan: if you whitelist example.com, there are 5 or more sites you need to whitelist as well in order for example.com to work. This is because 99% of all sites use 3rd party JavaScript via content delivery networks (CDNs), have integrations with social media and of course often use the ever-present Google Analytics. It gets even more tricky: if you want to keep your kid from searching on Google, you can’t think, “Oh, I’ll just whitelist *.google.com and that’ll save a bunch of time!”. Along with that will come Gmail and who knows what else. I knew this issue would be there going in, so I wasn’t afraid to take the time to get it to work. But caveat emptor!

Teaching Kids to be Smart

Speaking of caveats of a plan – all parents should know that this plan is VERY easy to bypass once your kids start to figure out how the internet and their specific devices work. I’ve literally told my kids what I’m doing (stopping just about every site from working), why I’m doing it (the internet can be a horrible place) and that they can likely figure a way around it (see Troy Hunt’s tweet – as well as his larger write up on parenting online).

Like Troy Hunt, I’ll be super proud when they figure a way around it – and that day will come! But I do want to prevent them from randomly clicking a link and ending up somewhere we don’t want them to be. They can then ask us parents about why they can’t access a site or when it might be allowed.

Being honest with your kids about what you’re doing is the way for them to be aware that this is for their benefit. The end goal is not to lock the entire internet away forever, it’s actually the opposite. The end goal is to prepare them to be trusted with unfettered access to the internet. This will happen soon enough whether we parents want it or not!

Banning 8.8.8.8 et al.

While I was in there tuning up the DNS, I remembered that some clients on my network (I’m looking at you, Roku!) weren’t listening to the DHCP rules about using my preferred, encrypted DNS and were going direct to Google’s DNS (8.8.8.8) or others I didn’t like. After a little research I found I could redirect all outbound TCP and UDP DNS traffic so that all devices use my Pi-Hole/Stubby/Quad9 DNS* whether they thought they were or not. For others running pfSense who want to do this, see the steps to “Blocking DNS Queries to External Resolvers” and then “Redirecting all DNS Requests to pfSense” (both thanks to this Reddit thread).

* We shall not speak of how devices will soon speak DNS over HTTPS (DoH), thus ruining this idea.

What about product X?

Some of you may be thinking, “this seems like a lot of work, why don’t you just implement an existing off-the-shelf solution?” Good question! For one, I like to DIY so I control my data and what’s done with it, instead of letting a 3rd party control it. As well, while there are home-based solutions, I prefer open source solutions. To put my money where my mouth is, I’ve just donated for the 2nd (3rd?) time to Pi-Hole. I encourage you to do the same!

To be clear though, this setup is a pretty crude tool to achieve the end result. It looks like there are some quite polished solutions out there if you’re OK with closed source, cloud-hosted products. As well, there are of course other variations on the “Use Pi-Hole For Parental Controls” idea.

Wrapping Up

Now that we have all of this in place, we can trivially support N clients which we want to force to use the kids’ more locked-down DNS setup. This looks exactly like it did before, but we have an extra container on the LXD server (and, somewhat orthogonally, a fancier pfSense DNS blocking setup):

I suspect this setup won’t last for more than a year or two. As more and more sites get added to the whitelist, it will be harder and harder to maintain. Maybe after that I’ll give each kid their own Pi-Hole instance to run on an actual Raspberry Pi and let them do with it as they please ;)

(Of course, just after I deployed this, Pi-Hole 5.0 came out, which offers the concept of groups, so you can likely do the idea above in a single instance instead of multiple. A bummer for me now, but a win for all other Pi-Hole users, including my future use!)

Punk Rock Band Names April 2020
https://blog.plip.com/2020/04/20/punk-rock-band-names-april-2020/
Mon, 20 Apr 2020

I’ve had a few of these queued up for a while now, so time to release them into the wild:

  • Antagonizing the Soup – On the beach with friends, it came up in conversation that one of us was not being as nice as they could be to their kid’s Superintendent
  • Placental Revival – from this awesome Radio Lab
  • Shawarma on the Brain – Ordering Mediterranean with a bunch of friends and the person behind the counter kept on mistakenly hearing “shawarma”
  • Asprin Death – ??? I can’t remember. Maybe related to this older This American Life?

For those new readers wondering what this is all about, see the first post on this and the rest of the series.
