For some time I’ve had all my home lab systems running on LXD. For me at least, LXD predated Docker and so I’ve stuck with it. The containers are a bit more pet-like and less cattle-like – but that’s OK. They’re there for me to learn with – plus I can totes do docker in LXD inception if I wanna! (I’ll figure my way over to Incus some other month)
So years and years ago I found out bout FreshRSS as a great way to self host an RSS reader. This meant I didn’t have to install any apps on my phone, my feeds could stay synced and since I already had a VPN setup, it was trivial to access while on the road. Lovin’ it! I’d set this up around late 2018.
Fast forward to yesterday: I flippantly upgraded FreshRSS to the latest, as I occasionally do, and the site started having a fatal error, as it often does after I upgrade ;) I pulled up the error logs on the FreshRSS LXD container running Apache and MariaDB and immediately saw some sort of unexpected { character in function blah blah on line 1-oh-smang-thirty error. Huh, what’s that about?
Turns out in the latest dev release of FreshRSS, they’d formally removed support for PHP 7.x and now require PHP 8.0+. Specifically, this is because they’re using Union Types in their function declarations. This is cool stuff! You can see the int|string values in the sample function here:
// Declaring a function with Union Type
function getMixedValue(int|string $value): int|string {
    return $value;
}
This is no problem! I’ll just update PH….P…. oh yeah – I’m on hella old Ubuntu 18… so then I’ll just find some third party apt repo and add that and then…. Hrrmm… might be more trouble than it’s worth. That’s cool! I’ll just deploy a new Ubuntu container on 24.04. Oh, well, that’s gonna take a chunk of disk space – I’ve been using a bit of Alpine to build small Docker images at work, so what about a pet Alpine in LXD? They have an image – why not!
Before we sudo rm -rf / on the old box (er, container), let’s get our data outa there. We need to first make a dump of the database. We’re root so we can just zing right through any permissions with a one liner. Next up we can zip up our old files into one big ol’ honkin zip file. That looks like this:
mysqldump freshrss > ~/freshrss.sql
cd /var/www/localhost/htdocs/
zip -r ~/freshrss.zip .
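If you’re paranoid (I am), a quick optional sanity check that the dump and zip actually have your stuff in them before anything gets deleted – and, if unzip is handy on the old box, a peek at the archive listing:

ls -lh ~/freshrss.sql ~/freshrss.zip
unzip -l ~/freshrss.zip | tail -n 5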
Finally, we can generate an SSH key for this root user to easily copy to the new container – I knowingly didn’t add a password because I’m about to delete the container and we’re all friends here:
ssh-keygen -t ed25519
cat /root/.ssh/id_ed25519.pub
Ok – I’ll hold on to that pub key for the “One last trip to the old digs” section below.
Here’s how to bloop out an Alpine container in LXD:
lxc launch images:alpine/3.20 freshrss
lxc shell freshrss
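Before going further, it’s worth a quick lxc list back on the LXD host to confirm the container is running and picked up an IP:

lxc list freshrss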
That’s it! You’re now sitting as root on the new instance. Let’s install the base packages, including Apache, MariaDB, OpenSSH and PHP with all its libraries:
apk add \
mariadb mariadb-client openssh \
apache2 apache2-http2 php83 \
php83-cli php83-apache2 php83-session php83-curl \
php83-gmp php83-intl php83-mbstring php83-sqlite3 \
php83-xml php83-zip php83-ctype php83-fileinfo \
php83-dom php83-pdo
Now let’s ensure the three services start at boot and then we can start Apache and OpenSSH (MariaDB will have to wait):
rc-update add apache2
rc-update add sshd
rc-update add mariadb
rc-service apache2 start
rc-service sshd start
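If you want to double check what OpenRC thinks is going on, rc-status will show which services are in the runlevel and their current state:

rc-status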
As well, that pub key you got from the old server? Let’s add that in on the new server:
mkdir ~/.ssh
echo "ssh-ed25519 AAAAC3Nz-SNIP-NplQ3 root@freshrss-old" > ~/.ssh/authorized_keys
chmod 700 ~/.ssh/
chmod 600 ~/.ssh/authorized_keys
As a final hurrah at the old server, now that our SSH key is on the new server, let’s copy over the zip archive and the SQL dump. Be sure to replace 192.168.68.217 with your real IP!
scp ~/freshrss.* 192.168.68.217:
Now that we have our server with all the software installed and all the data copied over, we just need to pull together all the correct configs. First, let’s run setup for MariaDB and then harden it. Note that it’s called mysql…, but that’s just for backwards compatibility:
rc-service mariadb setup
rc-service mariadb start
mysql_secure_installation
That last command will ask questions – default answers are all good! And the initial password is empty, so you can just hit return when prompted for the current password. Maybe check out passphraseme if you need a password generation tool? Let’s add the database, user and perms now. Be sure to not use password as your password though!
echo "CREATE USER 'freshrss'@'localhost' IDENTIFIED BY 'password';" | mysql
echo "GRANT ALL PRIVILEGES ON *.* TO 'freshrss'@'localhost' WITH GRANT OPTIO" | mysql
echo "GRANT ALL PRIVILEGES ON *.* TO 'freshrss'@'localhost' WITH GRANT OPTION;" | mysql
Now we can load up the SQL and move all the PHP files to their correct home. Again, we’re root so no SQL password, and again, this is actually MariaDB, not MySQL:
mysql freshrss < freshrss.sql
unzip freshrss.zip
mv * /var/www/localhost/htdocs/.
mv .* /var/www/localhost/htdocs/.
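One hedged note: Alpine’s Apache runs as the apache user, and FreshRSS wants its data directory writable, so depending on how ownership survived the zip trip you may need something like:

chown -R apache:apache /var/www/localhost/htdocs/data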
Apache just needs four updates – easy! Edit /etc/apache2/httpd.conf with your favorite editor. Find these three lines and uncomment them – they won’t be next to each other:
#LoadModule rewrite_module modules/mod_rewrite.so
#LoadModule session_module modules/mod_session.so
#LoadModule remoteip_module modules/mod_remoteip.so
Now find the one line where DocumentRoot is set and change it to the value below, then add two more lines: one to allow encoded slashes and one to set the server name. Be sure to use the IP address or FQDN of your server – don’t use rss.plip.com!
DocumentRoot "/var/www/localhost/htdocs/p"
AllowEncodedSlashes On
ServerName rss.plip.com:80
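Before restarting, Apache can check its own config syntax – cheap insurance against a typo:

httpd -t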
Now that Apache has been configured, let’s restart it so all the settings are loaded:
rc-service apache2 restart
The old FreshRSS install should now be running on your new Alpine-based container – congrats! This has been a fun adventure in appreciating how Alpine works as compared to Ubuntu. It really came down to two main differences:

systemd vs OpenRC – Ubuntu has used systemd for some time now and the primary interface to it is systemctl. Alpine, on the other hand, uses OpenRC, which you drive with rc-update and rc-service. Alpine picked this up from when it split off from Gentoo. (There’s a quick cheat sheet below.)

apt vs apk – Package management is slightly different! I found this to be an inconsequential change.

There’s plenty of guides out there that do the same as this one. Heck, you’re likely better off just using a pre-built Docker image (though the top results were pinned to PHP 7)! However, I wanted to document this for myself, and hopefully I’ll save someone a bunch of little trips off to this wiki or that FAQ to understand how to migrate off of Ubuntu to Alpine.
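And since I promised a cheat sheet, here’s the same everyday service wrangling on each side:

# Ubuntu (systemd)
systemctl enable apache2
systemctl restart apache2
systemctl status apache2

# Alpine (OpenRC)
rc-update add apache2
rc-service apache2 restart
rc-service apache2 status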
Cheers!
]]>Last year I wanted to be a “Hacker” and code up a solution to show near by access points and nearby phones. I failed. However, I did a good job of brushing up on what I needed to do over the past year and so this year I was a hacker for reals. Here I am in the final get up:
Let’s break it down! Here’s the hardware list (affiliate links to Amazon):
My final build out looked like this:
A quick write-up of the software is:
- Install howmanypeoplearearound as the pi user so it ends up at /home/pi/.local/bin/howmanypeoplearearound.
- Set the Pi up as a kiosk pointed at http://127.0.0.1 by following this awesome guide on pimylife.com. Note that you’ll only use the one URL and have no while loop in the kiosk bash script.
- Install the web stack: sudo apt install apache2 php
- In /var/www/html/ put all of the files I just published on this gist. Basically it’s a small web app to show the data we’re collecting as well as some bash scripts that get run in cron.
- Set up the cron jobs to run as the pi user. They will use wlan0 (built in) to look for nearby access points using the venerable iw command, and wlan1 (USB adapter) to look for phones and such in monitor mode using howmanypeoplearearound. Finally, they’ll get the temp and humidity using the python script from YANPIWS. You may need to make /var/www/html writable by the pi user to make this work.

It’s not my finest code, but if everything worked correctly, the Pi will boot up every time and show something like this:
As you can see it got cold tonight on our walk – by the time we got home at 8pm it was 45. Happy Hacker Halloween!
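P.S. – the real scripts live in that gist, but to give a flavor of the cron side, here’s a purely hypothetical sketch (file names and flags are illustrative, not the gist’s actual contents):

# run as the pi user – assumes passwordless sudo for iw; output paths are made up
*/5 * * * * /home/pi/.local/bin/howmanypeoplearearound -a wlan1 -s 60 --number > /var/www/html/phones.txt
*/5 * * * * sudo iw dev wlan0 scan | grep -c "SSID:" > /var/www/html/aps.txt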
]]>At work recently I was charged with rebuilding a bare metal host. Beyond needing to follow our security best practices and be well documented, it was left up to me how to do it. I had my own needs for test VMs and there was a pending request for a VM* for semi-production instance. This meant some VMs* would be fine in a traditional NATed environment, where they had no publicly accessible interfaces, and others would need full fledged public IPs. (* – I’m using “VM” liberally in this post. These are technically LXD containers which use the host kernel.)
Given my penchant for LXD, I’m guessing you can see where this is going ;) If you don’t know my penchant, check out these posts, specifically, “From zero to LXD: Installing a private compute cloud on a Cisco C220 M4SFF“.
I won’t go into as much nitty-gritty detail on the hardware setup (this time an older C220 M3 LFF instead of the new M4 SFF), but I set up the system very similarly. I was forced to use a RAID10 set up on 4 drives – no fancy ZFS set up this time. I’ll see some performance and features lost as LXD was configured to just use the filesystem (/var/lib/lxd), but given I have bare metal in a colo with as many VMs as I want, I’m happy ;)
After installing Ubuntu 18.04, giving it a static IP and running our Ansible hardening roles against it, I was ready to configure LXD. The nice thing about LXD is that you can have as many container profiles as you want. This means I can zip through the default lxd init process to have VMs which are behind NAT, and then trivially add a new profile that allows hosts to have a public IP after that.
The initial config of LXD looks like this:
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=15GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
After that, and HUGE thanks to this concise post by Simos Xenitellis, we can now configure a new profile with Macvlan for VMs that need a public IP. Simos’ post really covers this nicely (I even use their same code snippets ;). By copying the default profile (lxc profile copy default lanprofile), then setting the nictype (lxc profile device set lanprofile eth0 nictype macvlan) and the parent (lxc profile device set lanprofile eth0 parent enp5s12) on the new profile, we’re ready to go. Note that this assumes your bare metal’s NIC is enp5s12 and your LXD VMs use eth0 (the default).
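Pulled together, that’s just three command line calls:

lxc profile copy default lanprofile
lxc profile device set lanprofile eth0 nictype macvlan
lxc profile device set lanprofile eth0 parent enp5s12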
But wait, what is Macvlan? In short, it gives each container its own MAC address directly on the host’s physical NIC, so the container shows up on your LAN like any other machine. And, just so we’re all clear, how does it differ from the default NAT set up or the fancy bridged set up in my earlier post? Let’s break it down:

NAT is what you get by accepting the defaults in lxd init. This is super handy for testing and development! As well, we can use it to our advantage with a reverse HTTP proxy in production – more on this below.

Now that you know what the three setups are, and how easy it was to set up NAT (just accept LXD defaults) and how easy it is to set up Macvlan (3 command line calls) – let’s see what we can do with them!
Again per Simos’ post, we can easily create a new NATed VM and then a Macvlan VM like so:
lxc launch ubuntu: natVM
lxc launch -p lanprofile ubuntu: lanVM
To set a static IP on either host, assuming you’re running Ubuntu 18.04 like me, you’d just edit /etc/netplan/50-cloud-init.yaml. So let’s say I wanted to give natVM the .10 IP in the 10.x.x.x range that LXD gave me and use Quad9 for DNS. I’d edit 50-cloud-init.yaml to look like this:
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses: [10.0.0.10/24]
      gateway4: 10.0.0.1
      nameservers:
        addresses: [9.9.9.9]
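Then apply it inside the VM (or just reboot it):

netplan apply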
This ends the part of the post where we talk about NAT and Macvlan both easily co-existing on LXD. Now on to what you might do with that set up! Specifically, how you might use Apache to forward on HTTP requests on a public IP to a NATed VM.
If you wanted to run lots of VMs, none of which needed a public IP, but a few needed to run a public service, you might wonder how to best do this? In my case, I had a small number of public IPs, so burning one for every VM was a big waste. A better way is to just selectively forward some HTTP traffic from the bare-metal host’s public IP to a NATed VM’s IP. I’m an Apache kinda person, but this could be done with your web server of choice. It goes without saying, but this trick will only work with HTTP traffic. I’ll speak to being able to SSH “directly” to any NATed hosts below!
Let’s get started by installing apache2 on the Ubuntu bare-metal host and enable some key modules:
apt install apache2
a2enmod ssl rewrite proxy proxy_http
Now edit /etc/apache2/ports.conf so that it’s listening on any ports you need – in our example that’s 3000 (Grafana) and 8086 (InfluxDB), so we’ll add just two lines:
<IfModule ssl_module>
Listen 443
Listen 3000
Listen 8086
</IfModule>
Assuming you want to run a service on 8086 (InfluxDB) and a service on 3000 (Grafana) on the VM we configured above on .10, you’d create a vhost file called /etc/apache2/sites-available/influxdb-int.conf and it would look like this:
<VirtualHost *:3000> ServerName grafana-int.example.com LogLevel warn SSLEngine on SSLCertificateFile /etc/httpd/ssl.crt/your.crt SSLCertificateKeyFile /etc/httpd/ssl.key/your.key ProxyRequests Off<Proxy *>
Require all granted
</Proxy>
ProxyPass / https://10.0.0.10:3000/ ProxyPassReverse / https://10.0.0.10:3000/ </VirtualHost> <VirtualHost *:8086> ServerName influxdb-int.example.com LogLevel warn SSLEngine on SSLCertificateFile /etc/httpd/ssl.crt/your.crt SSLCertificateKeyFile /etc/httpd/ssl.key/your.key ProxyRequests Off<Proxy *>
Require all granted
</Proxy>
ProxyPass / http://10.0.0.10:8086/ ProxyPassReverse / http://10.0.0.10:8086/ /VirtualHost>
Note that this assumes you’re running everything over TLS (you should!!). As well, it assumes that your cert (SSLCertificateFile) and key (SSLCertificateKeyFile) are in /etc/httpd/ssl.crt and /etc/httpd/ssl.key respectively. Change these according to your specific set up.
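On Ubuntu, remember the vhost also has to be enabled and Apache reloaded before any of this takes effect:

a2ensite influxdb-int
systemctl reload apache2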
From here, you would set up your apps and ensure they’re working locally on .10, and they should then work on the public IP of your bare metal. Of course these all need to be configured to use TLS over the default HTTP. Huh – sounds like a whole “How to harden your TIG deployment” post might be in order! (Of course, store any passwords encrypted when automating your deployments.)
A final note on this set up is how to securely SSH to LXD hosts. Of course you can just SSH to your bare metal host and then bash in (eg lxc exec natVM bash), but how do you run your Ansible roles, or another automation tool, against these NATed VMs? SSH config files to the rescue!
Let’s assume the public IP of your bare metal is 1.2.3.4 and you want to SSH to the 10.0.0.10 IP we just set up above. All you need to do is create a file in your .ssh folder called “config” with 3 lines like this:
Host natVM
    Hostname 10.0.0.10
    ProxyCommand ssh -W %h:%p 1.2.3.4
With this set up, you can run ssh natVM and your SSH config will automatically proxy the connection through the 1.2.3.4 host to your internal-only .10 host. This works especially well when you have SSH keys set up with SSH agents.
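As an aside, newer OpenSSH (7.3 and up) has a shorthand for this exact pattern that’s functionally the same:

Host natVM
    Hostname 10.0.0.10
    ProxyJump 1.2.3.4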
Drop me a note if you have any questions!
]]>Recently I was tasked at work to get an instance of Sympa set up. Their docs are a bit scattered, but I found a promising post on debian.org which suggested I could get away with an apt-get install instead of needing to compile from source. Well, it turns out I did get it working, but only after a lot of trial and error. Given that some one else might be trying to do this, and because I had to document the exact steps for work, here’s a handy dandy blog post which I hope will help some one trying to do the same thing.
Good news for those looking to do this with Sympa 6.2 (the latest at time of publishing): I have a post on how to do this exact thing on the soon to be released Ubuntu 17.04 with Sympa 6.2. Stay tuned!
This post assumes you have root on your box. It assumes you have Apache2 installed. It assumes you’re running a stock Ubuntu 16.04 install. It assumes you want to run Sympa on your server. It also assumes you’ll be using Postfix as the lists’ MTA. It assumes you have a DNS entry (A record) for the server. As well, it assumes you have an MX record pointing to the A record, or no MX record so the MX defaults to the A record. If this doesn’t apply to you, caveat emptor!
To recap, that’s:
- root access
- Apache2
- stock Ubuntu 16.04
- Sympa
- Postfix as the MTA
- a DNS A record (plus an MX pointing at it, or no MX at all)
I also was using this server solely to serve Sympa mail and web traffic so if you have a multi-tenant/multi-use server, it may be more complicated.
These steps assume you’re going to install Sympa on list.example.com. There’s no reason you couldn’t use example.com instead.
apt-get install -y sympa

Next, per that debian.org post, there’s a one-line fix in the wwsympa FastCGI script – the cookie check should read:

if ($cookie and $cookie =~ /^\d\{,16}$/) {

Then make sure FastCGI is enabled in wwsympa.conf:

use_fast_cgi 1

If you don’t do this step, you’ll see full HTML pages show up in /var/log/syslog and only 500 errors in the browser :(
lists.example.com should show the sympa UI, w00t!
update-rc.d sympa defaults
update-rc.d sympa enable
myhostname = lists.example.com
smtpd_tls_cert_file=/full/path/to/cert/apache/uses.pem
smtpd_tls_key_file=/full/path/to/key/apache/uses.pem
alias_maps = hash:/etc/aliases,hash:/etc/mail/sympa/aliases
alias_database = hash:/etc/aliases,hash:/etc/mail/sympa/aliases
mydestination = $myhostname, lists.example.com, localhost
relay_domains = $mydestination, lists.example.com
listmaster email1@example.com,other_here@domain.com
domain lists.example.com
wwsympa_url https://lists.example.com/wws
default_home lists
create_list intranet
The “intranet” value will prevent someone from signing up and requesting a list without any approval.
## main sympa aliases
sympa: "| /usr/lib/sympa/bin/queue sympa@lists.example.com"
listmaster: "| /usr/lib/sympa/bin/queue sympa@lists.example.com"
bounce+*: "| /usr/lib/sympa/bin/bouncequeue sympa@lists.example.com"
sympa-request: email1@example.com
sympa-owner: email1@example.com
newaliases
reboot
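If you want to double check Postfix can actually resolve the Sympa aliases before testing with real mail, postalias can query the alias database directly – I believe this lookup works against the hash map we configured:

postalias -q sympa hash:/etc/mail/sympa/aliases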
Sympa should now be up and running at lists.example.com! All mail in and out should work, so you can run your own list server. Please report any problems so I can keep this post updated and accurate – thanks!
]]>I’ve been brushing up on my web security best practices recently. OWASP is a great resource for this! One of their recommended best practices is to use HTTP Strict Transport Security (HSTS). This involves redirecting traffic from unencrypted HTTP to HTTPS. However to ensure that no future Man in the Middle attacks happen with the redirect, it’s best to tell the browser to always go directly to HTTPS regardless of the protocol. This, in a nutshell is the HSTS solution.
I’ve updated plip.com and blog.plip.com to be served exclusively over HTTPS. This is thanks to a *.plip.com wildcard certificate from GlobalSign. After setting up Apache to use the certs on the SSL vhosts, I then needed to redirect all traffic away from HTTP. For plip.com, this was a simple Apache rule in the HTTP vhost:
# send everything to HTTPS
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
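If you want the redirect to be explicit and permanent (without flags I believe Apache issues a 302 here), you can tack on the R and L flags:

RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]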
And then for the blog.plip.com, iThemes had this codex entry about a simple plugin to rewrite HTTP to HTTPS, following the second option on their page. They do caution that this plugin might have performance drawbacks as you’re parsing every post on the fly. You can fix this if you’re running a caching system, like W3 Total Cache, which I am! W3TC recommends you fix slow HTTPS calls by enabling caching of HTTPS: Go to Performance -> Page Cache and check “Cache SSL (https) requests.” Easy peasy!
Now to add the HSTS to the HTTP header. For plip.com this is easy as I have a single PHP header file for the entire site. I just added this line:
header('Strict-Transport-Security: max-age=31536000');
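One hedged note: if every subdomain is also HTTPS-only, the header can be strengthened with includeSubDomains – just be sure before you do, since browsers will hold you to it for the full max-age:

header('Strict-Transport-Security: max-age=31536000; includeSubDomains');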
For the blog, I extended the simple iThemes plugin by adding these lines:
add_action( 'send_headers', 'add_header_hsts' );
function add_header_hsts() {
    header('Strict-Transport-Security: max-age=31536000');
}
Special thanks to the WordPress Codex on how to set headers as well as a random post over at Hakre on WordPress on how to format the HTTP header in PHP for HSTS.
Plip.com has absolutely zero effect on the big players, and the EFF would never care about giving me a report, but I’m scoring 4 out of 5 on the EFF’s Encrypt the Web report:
Looking at what it takes to set up my ciphers, I’m still gonna shoot for getting a perfect 5 of 5!
]]>This post is a short parable told in three lessons:
Lesson 1: The web is not as temporal as you might think!
Recently a co-worker was travelling and was unable to access her work based email. Instead, she directed folks to email her at her personal email. Being a curious fellow, I clicked over to her personal site to see what she had to say. All I found was “Site in progress, check back later” and link to a very outdated resume. Well, that’s just no fun! Enter the wayback machine! Using this fine site, I was able to see all the text, photos and links she had long since redacted. The wayback machine never forgets, so don’t you forget that.
Lesson 2: Robots.txt can pull Jedi mind tricks.
A natural response to seeing the archive of other sites is to see what dirt folks might find out about me via the same method. Sure enough, there’s some good stuff! However, the more interesting fact I learned is that my robots.txt of today redacted the archive.org copy of yesterday! This is cool! A while ago I took down my resume and some older, more personal content, and as well took a sec to make some broad strokes of what search engines shouldn’t index. It was these actions that archive.org took note of. With a wave of my robots.txt hand, indeed these are not the pages you’re looking for.
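For flavor, those “broad strokes” amount to a few lines like these – the paths here are hypothetical, not my actual robots.txt:

User-agent: *
Disallow: /resume/
Disallow: /personal/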
Lesson 3: The wayback machine is way cool.
Ok, this parable kinda peters out right about here, but still, the wayback machine is way cool. Check out the rad looks plip.com has had over the years! Hrm, maybe that should be “rad”. You decide.
]]>I still have what most would call an unfounded fear of privacy when it comes to Google. They may receive a copy of every email I send to my friends who use gmail, they may place every call to me via Google Voice, they may server every ad from Double Click (which I then block) and I sure as heck never stray from their bad-ass search on google.com, but I don’t host anything with them directly.
I’ve run my share of web analyzer tools, but sometimes I wanna know, right now, “how many people subscribe to my blog feed?”. Now, I probably should be using FeedBurner (No shit – I did not know, ’til just this second, that they too are now owned by Google. Oh, the irony!), but my site, despite its claims, is still a bit of the cobbler’s child when it comes to analytics. Heck, I still don’t have mod_usertrack on!
Enter tail, cut, sort, uniq and wc!
tail -10000 access_log|grep /blog|cut -d" " -f 1|sort|uniq|wc
In layman’s terms that’s “get the last 10000 lines of my access log, keep only the lines for the blog, cut each line into fields separated by the space character, grab the first field (the IP address in this case), sort the resulting lines of now just an IP address per line, remove the duplicates and count the resulting lines (or IP addresses)”. Presto! 388 of you out there, including all the bots, spiders, crawlers, trolls and goblins. Thanks for the interest!
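If you just want the bare number (wc alone prints lines, words and bytes), the same pipeline tightens up nicely – sort -u being shorthand for sort | uniq:

tail -10000 access_log | grep /blog | cut -d" " -f 1 | sort -u | wc -l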
]]>