Category Archives: Linux

NAT and Macvlan on production LXD (plus reverse proxy & SSH Config)

7 minutes, 41 seconds

Intro and LXD install

At work recently I was charged with rebuilding a bare metal host. Beyond needing to follow our security best practices and be well documented, it was left up to me how to do it. I had my own needs for test VMs and there was a pending request for a VM* for a semi-production instance. This meant some VMs* would be fine in a traditional NATed environment, where they had no publicly accessible interfaces, and others would need full-fledged public IPs. (* – I’m using “VM” liberally in this post. These are technically LXD containers which use the host kernel.)

Given my penchant for LXD, I’m guessing you can see where this is going ;) If you don’t know my penchant, check out these posts, specifically, “From zero to LXD: Installing a private compute cloud on a Cisco C220 M4SFF“.

I won’t go into as much nitty-gritty detail on the hardware setup (this time an older C220 M3 LFF instead of the newer M4 SFF). I set up the system very similarly, but was forced to use a RAID10 set up on 4 drives – no fancy ZFS set up this time. I’ll see some performance and features lost as LXD was configured to just use the filesystem (/var/lib/lxd), but given I have bare metal in a colo with as many VMs as I want, I’m happy ;)

After installing Ubuntu 18.04, giving it a static IP and running our Ansible hardening roles against it, I was ready to configure LXD. The nice thing about LXD is that you can have as many container profiles as you want. This means I can zip through the default lxd init process to have VMs which are behind NAT and then trivially add a new profile that allows hosts to have a public IP after that.

The initial config of LXD looks like this:

Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=15GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

After that, and HUGE thanks to this concise post by Simos Xenitellis, we can now configure a new profile with Macvlan for VMs that need a public IP. Simos’ post really covers this nicely (I even use their same code snippets ;): by copying the default profile (lxc profile copy default lanprofile) and then setting the nictype (lxc profile device set lanprofile eth0 nictype macvlan) and the parent (lxc profile device set lanprofile eth0 parent enp5s12) on the new profile, we’re ready to go. Note that this assumes your bare metal’s NIC is enp5s12 and your LXD VMs use eth0 (the default).
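Putting those three calls together (again assuming your NIC is enp5s12):

lxc profile copy default lanprofile
lxc profile device set lanprofile eth0 nictype macvlan
lxc profile device set lanprofile eth0 parent enp5s12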

Network types: NAT, Bridge & Macvlan

But wait, what is Macvlan? And, just so we’re all clear, how does it differ from the default NAT set up or the fancy bridged set up in my earlier post? Let’s break it down:

  • Network Address Translation (NAT) – You’re very likely using this right now to read this post ;) NAT is what enables us to easily share a connection to the Internet without everyone having a public IP. Have you seen IPs that start with 10.x.x.x, 172.16.x.x or 192.168.x.x? While not exclusive to NAT, they’re the most common IP ranges used in conjunction with it (See RFC 1918 for TMI). NAT allows a gateway to hand out these IPs, which can then send traffic out to the Internet and, by modifying the ports used, have the responses sent back to the NATed host that originally made the request.

    NAT is what LXD uses when you accept all the defaults in lxd init. This is super handy for testing and development! As well, we can use it to our advantage with a reverse HTTP proxy in production – more on this below.

  • Bridge – Bridges are a layer 2 connection that makes it appear as if all devices are on the same network. This is convenient when you want all devices to work with the same IP range, either with public IPs or in your NATed network. This is how I set up LXD in the prior article. Any time a VM is created in LXD, it can see all hosts, but it does take a slightly more complex network set up on the bare metal.

  • Macvlan – I’ll quote this great write up on hicu.be to describe Macvlan: “[it] allows you to configure multiple Layer 2 (i.e. Ethernet MAC) addresses on a single physical interface. Macvlan allows you to configure sub-interfaces (also termed slave devices) of a parent, physical Ethernet interface (also termed upper device), each with its own unique (randomly generated) MAC address, and consequently its own IP address.” This achieves the same result as bridges with one major caveat: the host and its VMs cannot talk to each other. That is, your VMs won’t be able to talk to your bare metal LXD host and vice versa – caveat emptor!

Now that you know what the three setups are, and how easy it was to set up NAT (just accept LXD defaults) and how easy it is to set up Macvlan (3 command line calls) – let’s see what we can do with them!

Again per Simos’ post, we can easily create a new NATed VM and then a Macvlan VM like so:

lxc launch ubuntu: natVM
lxc launch -p lanprofile ubuntu: lanVM

To set a static IP on either host, assuming you’re running Ubuntu 18.04 like me, you’d just edit /etc/netplan/50-cloud-init.yaml. So let’s say I wanted to give the natVM IP .10 in the 10.x.x.x range that LXD gave me and use Quad9 for DNS. I’d edit 50-cloud-init.yaml to look like this:

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses: [10.0.0.10/24]
      gateway4: 10.0.0.1
      nameservers:
        addresses: [9.9.9.9]
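Once the file is saved, apply the change from inside the VM (this can momentarily drop the VM’s networking):

sudo netplan apply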

This ends the part of the post where we talk about NAT and Macvlan both easily co-existing on LXD. Now on to what you might do with that set up! Specifically, how you might use Apache to forward on HTTP requests on a public IP to a NATed VM.

Apache reverse proxy

If you wanted to run lots of VMs, none of which needed a public IP, but a few needed to run a public service, you might wonder how best to do this. In my case, I had a small number of public IPs, so burning one for every VM was a big waste. A better way is to just selectively forward some HTTP traffic from the bare-metal host’s public IP to a NATed VM’s IP. I’m an Apache kinda person, but this could be done with your web server of choice. It goes without saying, but this trick will only work with HTTP traffic. I’ll speak to being able to SSH “directly” to any NATed hosts below!

Let’s get started by installing apache2 on the Ubuntu bare-metal host and enable some key modules:

apt install apache2
a2enmod ssl rewrite proxy proxy_http

Now edit /etc/apache2/ports.conf  so that it’s listening on any ports you need – in our example it’s 3000 (Grafana) and 8086 (InfluxDB) so we’ll add just two lines:

<IfModule ssl_module>
   Listen 443
   Listen 3000
   Listen 8086
</IfModule>

Assuming you want to run a service on 8086 (InfluxDB) and a service on 3000 (Grafana) on the VM we configured above on .10, you’d create a vhost file called /etc/apache2/sites-available/influxdb-int.conf and it would look like this:

<VirtualHost *:3000>
        ServerName grafana-int.example.com
        LogLevel warn
        SSLEngine on
        SSLCertificateFile /etc/httpd/ssl.crt/your.crt
        SSLCertificateKeyFile /etc/httpd/ssl.key/your.key
        # Needed since we proxy to an https:// backend below
        SSLProxyEngine on
        ProxyRequests Off
        <Proxy *>
            Require all granted
        </Proxy>
        ProxyPass / https://10.0.0.10:3000/
        ProxyPassReverse / https://10.0.0.10:3000/
</VirtualHost>
<VirtualHost *:8086>
        ServerName influxdb-int.example.com
        LogLevel warn
        SSLEngine on
        SSLCertificateFile /etc/httpd/ssl.crt/your.crt
        SSLCertificateKeyFile /etc/httpd/ssl.key/your.key
        ProxyRequests Off
        <Proxy *>
            Require all granted
        </Proxy>
        ProxyPass / http://10.0.0.10:8086/
        ProxyPassReverse / http://10.0.0.10:8086/
</VirtualHost>

Note that this assumes you’re running everything over TLS (you should!!). As well, it assumes that your cert (SSLCertificateFile) and key (SSLCertificateKeyFile) are in /etc/httpd/ssl.crt and /etc/httpd/ssl.key respectively. Change these according to your specific set up.
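To put the new vhost live, enable it and reload Apache:

a2ensite influxdb-int
systemctl reload apache2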

From here, you would set up your apps and ensure they’re working locally on .10; then they should work on the public IP of your bare metal. Of course these all need to be configured to use TLS instead of the default HTTP. Huh – sounds like a whole “How to harden your TIG deployment” post might be in order! (Of course, store any passwords encrypted when automating your deployments.)

Secure SSH to NATed LXD hosts

A final note on this set up is how to securely SSH to LXD hosts. Of course you can just SSH to your bare metal host and then bash in (eg lxc exec natVM bash), but how do you run your Ansible roles or other automation tools against these NATed VMs? SSH config files to the rescue!

Let’s assume the public IP of your bare metal is 1.2.3.4 and you want to SSH to the 10.0.0.10 IP we just set up above. All you need to do is create a file in your .ssh folder called “config” with 3 lines like this:

Host natVM
   Hostname 10.0.0.10
   ProxyCommand ssh -W %h:%p 1.2.3.4

With this set up, you can run ssh natVM and SSH will automatically use the configuration to securely proxy the connection through the 1.2.3.4 host to your internal-only .10 host. This works especially well when you have SSH keys set up with SSH agents.
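Since Ansible connects over plain OpenSSH, it honors this same ~/.ssh/config, so your roles can target NATed VMs with no extra connection settings. A minimal sketch (the inventory file and group name here are hypothetical):

# inventory.ini – the host alias matches the Host entry in ~/.ssh/config
[natvms]
natVM

ansible -i inventory.ini natvms -m ping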

Drop me a note if you have any questions!

Easy Pi-Hole and Stubby on Orange Pi Zero & Raspberry Pi 3

4 minutes, 8 seconds

Skip to the install guide if you just want to know how to set up your Pi easily ;) Otherwise, read on for a little background.

Introduction


I’ve been deep in DNS land of late. At work I’m working on DNS Stats and helping QA/release/document a packet capture tool for DNS stats. I even, just today, automated a complete Pi-Hole install to have a reliable dev environment for DNS stats. At the shop, I’ve set up the same encrypted DNS + Pi-Hole + LXD + Quad9 as I have at home. It’s all DNS all the time here is what I’m trying to say ;)

I’ve yet to find the magic sauce to compile Stubby on the Orange Pi Zero board though. It’s so cheap ($20 shipped), has built-in Ethernet, and is just so dang cute! So, I was looking around at stubby posts and Linux posts and Ubuntu posts and found this great write up on Ubuntu 18.04 and stubby and it said,

Stubby is in Ubuntu 18.04 repository

-linuxbabe.com

This was awesome! This means my previous trickery of having to compile stubby from source on Ubuntu wasn’t needed! However, the revelations about easy DNS set up and encryption were only just getting started.

The next one I found was that the 4.0 release of Pi-Hole from early August had a new feature: custom ports can be used for upstream servers. Wham! This is double awesome! Now, in the GUI of Pi-Hole, you could safely add the IP of stubby and specify a random port to use! But we’re not done yet, no sir, two more revelations to go.  Hold on.

The penultimate revelation was that BOTH the Orange Pi Zero AND the Raspberry Pi 3 B had a release of Ubuntu 18.04 for them. This means that you not only don’t have to compile stubby for your x86 LXD environment, but you don’t have to do it for ARM SoC setups either! Yay!

The final revelation dates back to a long, LONG time ago, and I’m just late to the party.  I’m talking proto-internet long time ago.  The legend Jon Postel decided that not only would IPv4 have a reserved IP address of 127.0.0.1 for localhost, but in RFC 790 in 1981, he said it would actually be a /8, so you get just over 16 million localhost IPs just for your bad ass self. This means you don’t actually need the new port specifying feature of Pi-Hole – you can just set up Stubby on port 53 on 127.1.1.1 and Pi-Hole on port 53 on 127.0.0.1. Ugh – this makes it so much easier – if only I was more of a network guy!
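Don’t take my (or Jon’s) word for it – any address in 127.0.0.0/8 answers locally with zero configuration:

ping -c 1 127.1.1.1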

Now that my rambling background on my recent revelations is done, let’s get to the technical write up.  Though, honestly, this part will be pretty short and sweet.

Installation Guide

This guide assumes you’ve downloaded and installed Ubuntu 18.04 for either your Orange Pi Zero or Raspberry Pi 3 B. Note that the official download pages of both Orange Pi and the Raspberry Pi Foundation do not list 18.04 options. It also assumes you’re running everything as root. The instructions are identical for both boards:

  1. Ensure you’re up to date:
    apt-get update && apt-get upgrade
  2. Install Stubby: apt-get install stubby
  3. Edit /etc/stubby/stubby.yml so that it’s listening on 127.1.1.1:
    listen_addresses:
     - 127.1.1.1
  4. Restart Stubby: systemctl restart stubby
  5. Install Pi-Hole. Use what ever upstream DNS server you want when prompted, we’re going to override it with Stubby:
    curl -sSL https://install.pi-hole.net | bash

    IMPORTANT
    – See Troubleshooting below if you get stuck on “Time until retry:” or “DNS resolution is not available” when installing Pi-Hole

Pi-Hole DNS settings
  6. Log into your new Pi-Hole at YOUR_PI_IP/admin and go to Settings -> DNS. Uncheck any DNS servers and enter a Custom 1 (IPv4) of 127.1.1.1.

Coming full circle, the Reddit thread I cited in my original write up, now has a comment that Ubuntu 18.04 has a Stubby package.

Quad9

If you want to use Quad9 (and I think you should ;), in step 3, while you’re in stubby.yml, comment out all the other servers in upstream_recursive_servers: and un-comment Quad9 so it looks like this:

upstream_recursive_servers:
  - address_data: 9.9.9.9
    tls_auth_name: "dns.quad9.net"
  - address_data: 2620:fe::fe
    tls_auth_name: "dns.quad9.net"

Full disclosure, I work for PCH which sponsors Quad9.

Troubleshooting

A few things I found while researching this post that might help you:

  • The login on the Raspberry Pi is ubuntu with password ubuntu. The login on the Orange Pi Zero is root with password 1234. Check out my SSH Bootstrap trick as well.
  • The Orange Pi Zero didn’t get an IP via DHCP the first boot. A reboot solved that.
  • The Pi-Hole script gave me a headache when installing. Near the end of the install it said, “Starting DNS service” and then sat waiting to retry. I found a post on the Pi-Hole boards that solved it perfectly. To work around this, edit /etc/init.d/pihole-FTL so that this line:

    su -s /bin/sh -c "/usr/bin/pihole-FTL" "$FTLUSER"

    is replaced by this line:

    /usr/bin/pihole-FTL


    After that, be sure to reload your init script with:

    systemctl daemon-reload

    Finally you should be able to complete your install just by restarting Pi-Hole:

    systemctl restart pihole-FTL
     

  • Even though I followed step 4, during one of my tests stubby was still holding port 53 on 127.0.0.1. If that happens, restart stubby:

    systemctl restart stubby
     

  • At any point you can test that stubby or pi-hole is working. These are good to intersperse with each install and configuration change:

    dig @127.1.1.1 plip.com +short # stubby
    dig @127.0.0.1 plip.com +short # pi-hole

Bootstrap SSH on Ubuntu Core without Ubuntu SSO credentials

1 minute, 23 seconds

I was playing around with Snaps and I wanted to try out Ubuntu Core as well. I still had some of those Orange Pi Zero boards laying around, and when you go to the Core download page there’s an option for Orange Pi right there – sweet! I downloaded the .img file, wrote it to my microSD card with dd, slapped it in my Orange Pi Zero, found the new IP in my DHCP server and off I went to SSH in.

Then I saw this step on install docs:

you will be asked to enter your Ubuntu SSO credentials

– Core Install Docs

Whhhaaat? Oh, I see, Core’s whole shtick is that it’s secure by default. They say, “Secure by default – Automatic updates ensure that critical security issues are addressed in the field, even if a device is unattended.” Cool, I can get behind that. IoT needs some thought leaders in security. However…I still just want to SSH in and poke around a bit – I don’t want to have to set up an account at Ubuntu.

Then I thought, “What if I just create a .ssh directory in /root/ and put my public key in the authorized_keys file?” Assuming you’re on an Ubuntu system, logged in as mrjones and have just stuck in your microSD card with the Core image on it, that’d look like this:

cd /media/mrjones/writable/system-data/root/
mkdir .ssh
chmod 700 .ssh/
vim .ssh/authorized_keys   # paste in your public key, then save and quit
chmod 600 .ssh/authorized_keys

After unmounting the card, inserting it into my Pi and rebooting, I SSHed in as root and it just worked – that’s awesome! Now you know how to do it as well!  In fact, this likely will work with the Raspbian, Armbian and Ubuntu images for all kinds of Pi boards as well.

Stay tuned for my next post where I’ll massively simplify my stubby and pi-hole how to!

Teaching Kids About Busy Signals with a PBX on LXD for $15

2 minutes, 40 seconds

Linksys RTP300 Circa 2005

The other day the kids and I were at our local, awesome Goodwill store. I guess they’re all awesome, eh? I was looking over the pile of small electronics while the kids picked out a shirt (“any shirt you want!!”), and stumbled across a device with Ethernet jacks (RJ45) and phone jacks (RJ11). It’s a router with two ATA ports – cool! It was only $5!! A quick internet search suggested that it may be locked to a provider like Vonage, but that there’s lots of folks unlocking it. Given this was Goodwill, there was a plethora of POTS devices handy – two more phones at just $5/ea – no problem!

Now that I had the whole kit home, it was time to see what the ATA was up to. It’s a Linksys RTP300 – and the voice section was definitely locked down :( But that means it’s time to start hacking! I dug up the site I initially found before buying it – luckily all the downloads still work! The post is 8 years old, forever in ‘net time, so I was impressed.

I’m pretty pleased with myself because I fully understood everything and was able to improve on the process. Mainly:

Using inspector tool to make the ping input field longer
  • Being a web developer I already had a web server running. This is much easier to use instead of setting up a new TFTP server as they suggested.
  • Being a web developer (still) I was kinda in awe that I’d already forgotten how amazeballs Firebug was, and how I take for granted that modern browsers have such good developer tools built in.
  • Knowing all the Linux commands used (wget, chmod, cd & dd), I was able to explore the device a bit
  • I figured out that the guide wanted you to download into /var/tmp, which didn’t exist on my device, but /var worked
  • They have you download some *.img, but running it through strings and checking the size, it appeared to match the 3.1.24 version. Completing the install confirmed this. However, md5sum outputs don’t match.
That, sir, is an odd response to a ping ;)

After all this I had a generic, if a bit dated, ATA that was ready to talk to a VoIP provider. Time to install Asterisk! This is where LXD comes in. I already had my home server running Stubby and Pi Hole – time to add another. After spinning up an Ubuntu 16.04 box, hitting a snag, then spinning up an 18.04 box, a quick apt-get install asterisk and I was ready to go!

I created 4 extensions, one for each of us, by following some simple guides I’d found (edit /etc/asterisk/sip.conf, /etc/asterisk/extensions.conf and a reload). I plunked in the LXD Asterisk IP, extension and password and BAM! both lines registered easy peasy on the ATA.
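For the curious, those guides boil down to something like this – a minimal sketch from memory rather than my exact config, so the extension numbers, secrets and context name are all made up:

; /etc/asterisk/sip.conf – one block per extension
[6001]
type=friend
host=dynamic
secret=SuperSecret6001
context=internal

[6002]
type=friend
host=dynamic
secret=SuperSecret6002
context=internal

; /etc/asterisk/extensions.conf – let the extensions dial each other
[internal]
exten => 6001,1,Dial(SIP/6001,20)
exten => 6002,1,Dial(SIP/6002,20)

A quick asterisk -rx "core reload" at the shell picks up the changes.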

With two hard phones set up (just plug ’em in to the ATA), I set up me and the wife with the built-in SIP client on our mobile phones. Now we all had a phone and could call each other. See the gallery below for the visual story!

On a personal note, this was pleasing for two reasons:

  1. I saved 3 small electronic devices from the landfill.  More and more I’m trying to be conscientious about buying less or buying used.
  2. My kids got to look up at me and say, “it’s making a weird sound!?” I explained to them that it was a “busy signal”.  See, cell phones, office phones and 99% of home phones don’t do this any more. They have either call waiting or voice mail, or both ;)

From zero to LXD: Installing a private compute cloud on a Cisco C220 M4SFF

17 minutes, 42 seconds

Introduction

Having some nice server grade hardware on hand, I knew I needed to put it to good use.  I’m talking about the Cisco C220 M4SFF:

This lil’ guy comes with some nice specs. The one I have is configured thusly:

  • 1U
  • 64GB RAM
  • 8 x 240GB SSD
  • 2 x 10-core 2.4GHz E5-2640 v4 Xeon (40 threads)
  • 2 x PSU
  • Cisco 12G SAS RAID controller (more on this later)
  • 2 x 1Gbit on-board NIC

While it has only 2 of its 24 RAM slots full, we can make do with this – besides, filling it up to 1.5TB of RAM would cost $14,400 – ouch! Specifically, I think we can make this into a nice host of lots and lots of virtual machines (VMs) via a modern day hypervisor.

Just 4 steps!!

Looking about at what might be a good solution for this, I cast my eyes upon LXD. I’m already a huge fan of Ubuntu, the OS LXD works best with. As well, with 2.0, Canonical is really pumping up LXD as a viable, production-worthy solution for running your own compute cloud. Given that 2.0 now ships with Ubuntu 16.04, I figured I’d give it a spin.

I’m guessing a techie at Canonical got a hold of a marketing person at Canonical; they realized that with LXD baked into Ubuntu, anyone comfy on the command line could run 4 steps to get an instant container.  This meant that while there are a lot more complexities to getting LXD running in a production environment (LAN bridge, ZFS pools, ZFS caches, JBOD on your RAID card, static IPs dynamically assigned, all covered below), it was indeed intensely satisfying to have these 4 steps work exactly as advertised.  Sign me up!

But what about the other options?  Did I look at more holistic approaches like OpenStack?  Or even the old, free as in beer vSphere? What about the up and coming SmartOS (not to be confused with the unrelated TrueOS or PureOS)? Given the timing of the demise of SmartOS’s roots in Solaris, maybe it’s time to pay my respects? Finally, there is of course KVM.

While I will still take up my friend’s offer to run through an OpenStack install at a later date, I think I’m good for now.

This write up covers my learning how to provision my particular bare metal, discovering the nuances of ZFS and finally deploying VMs inside LXD.  Be sure to see my conclusion for next steps after getting LXD set up.

Terms and prerequisites

Since this post will cover a lot of ground, let’s get some of the terms out of the way. For experts, maybe skip this part:

  • Ubuntu – the host OS we’ll install directly onto the C220, specifically 16.04 LTS
  • LXD – say “Lex-dee” (and not this one) – the daemon  that controls LXC. LXD  ships with 16.04 LTS. All commands are lxc
  • LXC –  Linux Containers – the actual hypervisor under LXD. Yes, it’s confusing – even the man says so ;)
  • container – interchangeably used with “VM”, but it’s a secure place to run a whole instance of an OS
  • ZFS – the file system that allows for snapshots, disk limits, self healing arrays, speed and more.
  • JBOD/RAID – Two different disk configurations we’ll be using.
  • CIMC – Cisco Integrated Management Controller, an out-of-band management interface that runs on Cisco server hardware. Allows remote console over SSL/HTML.
  • Stéphane Graber – Project leader of LXC, LXD and LXCFS at Canonical as well as an Ubuntu core developer and technical board member. I call him out because he’s prolific in his documentation and you should note if you’re reading his work or someone else’s.
  • KVM – “Keyboard/Video/Mouse” – The remote console you can access via the CIMC via a browser. Great for doing whole OS installs or for when you bungle your network config and can no longer SSH in.
  • KVM – The other KVM ;)  This is the Kernel Virtual Machine hypervisor, a peer of LXC. I only mention it here for clarification.  Henceforth I’ll always be referencing the Keyboard/Video/Mouse KVM, not the hypervisor.

Now that we have the terms out of the way, let’s give you some homework.  If you have any questions, specifically about ZFS and LXD, you should spend a couple hours on these two sites before going further:

  • Stéphane Graber’s The LXD 2.0: Blog post series – These are the docs that I dream of when I find a new technology: concise, from the horse’s mouth, easy to follow and, most of all, written for the n00b but helpful for even the most expert
  • Aaron Toponce’s ZFS on Debian GNU/Linux – while originally authored back in 2012, this series of posts on ZFS is canon and totally stands the test of time.  Every other blog post or write up I found on ZFS that was worth its salt referenced Aaron’s posts as evidence that they knew they were right.  Be right too, read his stuff.

Beyond that, of all the other links I collected while doing my research, I will call out that Jason Bayton’s LXD, ZFS and bridged networking on Ubuntu 16.04 LTS+ gave me an initial burst of inspiration that I was on the right track.


Prep your hardware

Before starting out with this project, I’d heard a lot of bad things about getting Cisco hardware up and running.  While there may indeed be hard-to-use proprietary jank in there somewhere, I actually found the C220 M4 quite easy to use.  The worst part was finding the BIOS and CIMC updates, as you need a Cisco account to download them.  Fortunately I know people who have accounts.

After you download the .iso, scrounge up a CD-R drive and some blank media, and burn the .iso. Then plug a keyboard, mouse and CD drive into your server, boot from the freshly burned disk, and upgrade your C220. Reboot.

Then you should plug the first NIC into a LAN that your workstation is on that has DHCP.  What will happen is that the CIMC will grab an address and show it to you on the very first boot screen.

You see that 10.0.40.51 IP address in there?  That means you can now control the BIOS via a KVM over the network.  This is super handy. Once you know this IP, you can point your browser to the C220 and never have to use a monitor, keyboard or mouse directly connected to it.  Handy times!  To log into the CIMC the first time, the default login is admin/password (of course ;).

The upgraded CIMC looks like this:

I’ve highlighted two items in red: the first, in the upper left, is the secret, harder-to-find-than-it-should-be menu of all the items in the CIMC.  The second is how to get to the KVM.

For now, I keep the Ubuntu 16.04 install USB drive plugged into the C220.  Coupled with the remote access to the CIMC and KVM, this allows me to easily re-install on the bare metal, should anything go really bad.  So handy!

While you’re in here, you should change the password and set the CIMC IP to be static so it doesn’t change under DHCP.

Prep your Disks

Now it’s time to set up the RAID card to have two disks be RAID1 for my Ubuntu boot drive, and the rest show up as JBOD for ZFS use.  When accessing the boot process, wait until you see the RAID card prompt you and hit ctrl+v.  Then configure 2 of your 8 drives as a RAID1 boot drive:

And then expose the rest of your disks as JBOD.  The final result should look like this:

Really, it’s too bad that this server has a RAID card.  ZFS really wants to talk to the devices directly and manage them as closely as possible.  Having the RAID card expose them as JBOD isn’t ideal.  The ideal set up would be to have a host bus adapter (HBA) instead of the RAID adapter; that’d be a Cisco 9300-8i 12G SAS HBA for my particular hardware. So, while I can get it to work OK, and you often see folks set up their RAID cards as JBOD just like I did, it’s sub-optimal.

Install Ubuntu 16.04 LTS + Software

F6 to select boot drive

As there’s plenty of guides on how to install Ubuntu server at the command line, I won’t cover this in too much detail.  You’ll need to:

  1. Download 16.04 and write it to a USB drive
  2. At boot of the C220, select F6 to select your USB drive as your boot device (see right)
  3. When prompted in the install process, select the RAID1 partition you created above.  For me this was /dev/sdg, but it will likely be different for you. I used LVM with otherwise default partitions.
  4. Set your system to “Install security updates automatically” when prompted with “Configuring tasksel”
  5. For ease of setup, I suggest selecting “OpenSSH Server” when prompted for “Software select”. This way we won’t have to install and enable it later.
  6. Finally, when you system has rebooted, login as your new user and you can install the core software we’ll need for the next steps:
sudo apt-get install lxd zfsutils-linux bridge-utils
sudo apt install -t xenial-backports lxd lxd-client

ZFS

ZFS is not for the faint of heart.  While LXD can indeed be run on just about any Ubuntu 16.04 box with the default settings just working – I now run it on my laptop regularly – getting a tuned ZFS install was the hardest part for me.  This may be because I have more of a WebDev background and less of a SysAdmin background.  However, if you’ve read up on Aaron Toponce’s ZFS on Debian GNU/Linux posts, and you read my guide here, you’ll be fine ;)

After reading and hemming and hawing about which config was best, I decided that getting the most space possible was most important.  This means I’ll allocate the remaining 6 disks to the ZFS pool and not have a hot spare.  Why no hot spare?  Three reasons:

  1. This isn’t a true production system and I can get to it easily (as opposed to an arduous trip to a colo)
  2. The chance of more than 2 disks failing at the same time seems very low (though best practice says I should have different manufacturer’s batches of drives – which I haven’t checked)
  3. If I do indeed have a failure where I’m nervous about either the RAID1 or RAIDZ pool, I can always yank a drive from one pool to another.

Now that I have my 6 disks chosen (and 2 already running Ubuntu), I reviewed the RAID levels ZFS offers and chose RAIDZ-2 which is, according to Aaron, “similar to RAID-6 in that there is a dual parity bit distributed across all the disks in the array. The stripe width is variable, and could cover the exact width of disks in the array, fewer disks, or more disks, as evident in the image above. This still allows for two disk failures to maintain data. Three disk failures would result in data loss. A minimum of 4 disks should be used in a RAIDZ-2. The capacity of your storage will be the number of disks in your array times the storage of the smallest disk, minus two disks for parity storage.”

In order to have a log and cache drive (which can be the same physical disk), I’ll split it so 5 drives store data and 1 drive with two partitions stores my log and cache. Read up on those log and cache links to see how they greatly improve ZFS performance, especially if your data drives are slow and you have a fast cache drive (SSD or better).

To find the devices which we’ll use in our pool, let’s first see a list of all devices from dmesg:

sudo dmesg|grep \\[sd|grep disk
[ 17.490156] sd 0:0:0:0: [sdh] Attached SCSI removable disk
[ 17.497528] sd 1:0:9:0: [sdb] Attached SCSI disk
[ 17.497645] sd 1:0:8:0: [sda] Attached SCSI disk
[ 17.499144] sd 1:0:10:0: [sdc] Attached SCSI disk
[ 17.499421] sd 1:0:12:0: [sde] Attached SCSI disk
[ 17.499487] sd 1:0:11:0: [sdd] Attached SCSI disk
[ 17.499575] sd 1:0:13:0: [sdf] Attached SCSI disk
[ 17.503667] sd 1:2:0:0: [sdg] Attached SCSI disk

This shows us that sdh is our USB drive (“removable”) and that the rest are the physical drives.  But which is our boot drive comprised of a RAID1 volume?  It’s the one mounted with an ext4 file system:

sudo dmesg|grep mount|grep ext4
[ 21.450146] EXT4-fs (sdg1): mounting ext2 file system using the ext4 subsystem

So now we know that sdb, sda, sdc, sde, sdd and sdf are up for grabs. The hard part with ZFS is just understanding the full scope and impact of how to set up your disks.  Once you decide that, things become pleasantly simple.  Thanks ZFS!  To set up our RAIDZ-2 pool of 5 drives and enable compression on it takes just these two commands:

sudo zpool create -f -o ashift=12 lxd-data raidz2 sda sdb sdc sdd sde
sudo zfs set compression=lz4 lxd-data

Note that there’s no need to edit the fstab file; ZFS does all this for you. Now we need to create our log and cache partitions on the remaining sdf disk.  First find out how many blocks there are on your drive, 468862128 in my case, using parted:

parted /dev/sdf 'unit s print'|grep Disk|grep sdf
 Disk /dev/sdf: 468862128s

Aaron’s guide suggests we can make both partitions in one go with this command, which will make a 4GB log partition and use the remainder to make a ~230GB cache partition:

parted /dev/sdf unit s mklabel gpt mkpart log zfs 2048 4G mkpart cache zfs 4G 468862120

However, this didn’t work for me. I had to create the partitions using parted’s interactive mode instead, starting from the same sdf device:

parted /dev/sdf unit s
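
I forgot to record the interactive session when I formatted my partitions, but it went roughly like this – a reconstruction using the same start and end sectors as the one-shot command above (creating the cache partition first would explain the partition numbering below):

(parted) mklabel gpt
(parted) mkpart cache zfs 4GB 468862120s
(parted) mkpart log zfs 2048s 4GB
(parted) quit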

When you’re done, you should end up with this:

sudo parted /dev/sdf print
Model: ATA SAMSUNG MZ7LM240 (scsi)
Disk /dev/sdf: 240GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start  End    Size   File system Name Flags
 2     1049kB 3999MB 3998MB zfs         log
 1     4000MB 240GB  236GB  zfs         cache

OK – almost there!  Now, because Linux can mount these partitions with different device letters (eg sda vs sdb), we need to use IDs instead in ZFS.  First, however, we need to find the label-to-device map:

sudo ls -l /dev/disk/by-partlabel/
 total 0
 lrwxrwxrwx 1 root root 10 Sep 15 11:20 cache -> ../../sdf1
 lrwxrwxrwx 1 root root 10 Sep 15 11:20 log -> ../../sdf2

Ok, so we’ll use sdf2 as log and sdf1 as cache.  Now what are their corresponding IDs?

sudo ls -l /dev/disk/by-id/|grep wwn|grep sdf|grep part
 lrwxrwxrwx 1 root root 10 Sep 15 11:20 wwn-0x5002538c404ff808-part1 -> ../../sdf1
 lrwxrwxrwx 1 root root 10 Sep 15 11:20 wwn-0x5002538c404ff808-part2 -> ../../sdf2

Great, now we know that wwn-0x5002538c404ff808-part2 will be our log and wwn-0x5002538c404ff808-part1 will be our cache.  Again, ZFS’s commands are simple now that we know what we’re calling.  Here’s how we add our log and cache:

sudo zpool add lxd-data log /dev/disk/by-id/wwn-0x5002538c404ff808-part2
sudo zpool add lxd-data cache /dev/disk/by-id/wwn-0x5002538c404ff808-part1

Now we can confirm our full ZFS status:

sudo zpool iostat -v
                                   capacity     operations    bandwidth
pool                            alloc   free   read  write   read  write
------------------------------  -----  -----  -----  -----  -----  -----
lxd-data                        1.37G  1.08T      0     16  3.26K  56.8K
  raidz2                        1.37G  1.08T      0     16  3.25K  56.5K
    sda                             -      -      0      4  1.04K  36.5K
    sdb                             -      -      0      4  1.03K  36.5K
    sdc                             -      -      0      4  1.01K  36.5K
    sdd                             -      -      0      4  1.02K  36.5K
    sde                             -      -      0      4  1.03K  36.5K
logs                                -      -      -      -      -      -
  wwn-0x5002538c404ff808-part2   128K  3.72G      0      0      4    313
cache                               -      -      -      -      -      -
  wwn-0x5002538c404ff808-part1   353M   219G      0      0      0  20.4K
------------------------------  -----  -----  -----  -----  -----  -----

Looking good!  Finally, we need to keep our ZFS pool healthy by scrubbing it regularly.  This is the part where ZFS self-heals and avoids bit rot. Let’s do this with a once-per-week cron job:

0 2 * * 0 /sbin/zpool scrub lxd-data
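You can check on a scrub’s progress and results at any time with:

sudo zpool status lxd-data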

Networking

Now, much thanks to Jason Bayton’s excellent guide, we know how to have our VMs get IPs on the LAN instead of being NATed.  Right now my NIC is enp1s0f0 and getting an IP via DHCP.  Looking in /etc/network/interfaces I see:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto enp1s0f0
iface enp1s0f0 inet dhcp

But to use a bridge (br0) and give my host the static IP of 10.0.40.50, we’ll make that file look like this:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br0 
iface br0 inet static
       address 10.0.40.50
       network 10.0.40.0
       netmask 255.255.255.0
       gateway 10.0.40.1
       broadcast 10.0.40.255
       dns-nameservers 8.8.8.8 8.8.4.4
       bridge_ports enp1s0f0
iface enp1s0f0 inet manual

This will allow the VMs to use br0 to natively bridge up to enp1s0f0 and either get a DHCP IP from that LAN or be assigned a static IP. In order for this change to take effect, reboot the host machine.  Jason’s guide suggests running ifdown and ifup, but I found I just lost connectivity and only a reboot would work.

When you next login, be sure you use the new, static IP.

 

Set file limits

While you can use your system as is to run containers, you’ll really want to update it per the LXD recommendations.  Specifically, you’ll want to allow for a lot more headroom when it comes to file handling.  Edit sysctl.conf:

sudo vim /etc/sysctl.conf

Now add the following lines, as recommended by the LXD project:

fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576

As well, edit limits.conf

sudo vim /etc/security/limits.conf

Now add the following lines. 100K should be enough:

* soft nofile 100000
* hard nofile 100000
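The limits.conf changes take effect at your next login, but you can load the new sysctl values immediately:

sudo sysctl -p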

LXD

Now that we have our bare metal provisioned, our storage configured, file limits increased and our network bridge in place, we can actually get to the virtual machine part, the LXD part, of this post. Heavily leveraging again Jason Bayton’s still excellent guide, we’ll initialize LXD by running lxd init. This goes through the first-time LXD setup and asks a number of questions about how you want to run LXD.  This is specific to 2.0 (>2.1 asks different questions).  You’ll see below, but the gist of it is that we want to use our ZFS pool and our network bridge:

lxd init
 -----
 Name of the storage backend to use (dir or zfs) [default=zfs]:
 Create a new ZFS pool (yes/no) [default=yes]? no
 Name of the existing ZFS pool or dataset: lxd-data
 Would you like LXD to be available over the network (yes/no) [default=no]? yes
 Address to bind LXD to (not including port) [default=all]:
 Port to bind LXD to [default=8443]:
 Trust password for new clients:
 Again:
 Do you want to configure the LXD bridge (yes/no) [default=yes]?

Note that we don’t accept the ZFS default and instead specify our own ZFS pool, lxd-data. When you say “yes” to configuring the bridge, you’ll then be prompted with two questions; specify br0 when asked about an existing bridge. Again, refer to Jason Bayton’s guide for thorough screen shots:

Would you like to setup a network bridge for LXD containers now? NO
Do you want to use an existing bridge? YES
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
 LXD has been successfully configured

Success!  LXD is all set up now.

Hello world container

Whew!  Now that all the hard parts are done, we can finally create our first container. In this example, we’ll be creating a container called nexus and put some basic limitations on it. Remember, do not create these as root! One of the massive strengths of LXD is that it’s explicitly designed to be secure, and running containers as root would remove a lot of this security. We’ll call lxc init and then pass some raw commands to set the IP to be .52 in our existing /24 subnet on the bridge. If you run lxc list, our new container shows up. That all looks like this:

lxc init ubuntu: nexus
echo -e "lxc.network.0.ipv4 = 10.0.40.52/24\nlxc.network.0.ipv4.gateway = 10.0.40.1\n" | lxc config set nexus raw.lxc -
lxc start nexus

lxc list
+-------+---------+-------------------+------+------------+-----------+
| NAME  | STATE   | IPV4              | IPV6 | TYPE       | SNAPSHOTS |
+-------+---------+-------------------+------+------------+-----------+
| nexus | RUNNING | 10.0.40.52 (eth0) |      | PERSISTENT | 0         |
+-------+---------+-------------------+------+------------+-----------+

Finally, we want to limit this container to have 4 CPUs, have 4GB of RAM and 20GB of disk.  Like before, these commands are not run as root. They are imposed on the container in real time and do not require a restart.  Go LXD, go!

lxc config set nexus limits.cpu 4
lxc config set nexus limits.memory 4GB
lxc config device set nexus root size 20GB
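To confirm the limits stuck, inspect the container’s config:

lxc config show nexus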

You’re all done!  From here you can trivially create tons more containers.  You can let them have an ephemeral IP via DHCP on the bridge or easily set a static IP. If you have a resource-intensive need and don’t set the limits above, the container will have access to the full host resources.  That’d be 40 cores, 64GB RAM and 1TB of disk. If you create more VMs without resource limiting, LXC will do the balancing of resources so that each container gets its fair share.  There’s a LOT more configuration options available.  As well, even the way I declared a static IP can likely be done via dnsmasq (see this cyberciti.biz article), but I had trouble getting that to work on my setup, hence the raw calls.

Next steps

Now that you’ve bootstrapped your bare metal, dialed in your storage back end and deployed your LAN bridge, you should be all set, right?  Not so fast! To make this more of a production deployment, you really need to know (and practice!) your backup and restore procedures.  This likely will involve the snapshot command (see post 3 of 12) and then backing those snapshots off of the ZFS pool. Speaking of the ZFS pool, our set up as is doesn’t have any alerting if a disk goes bad.  There’s many solutions out there for this – I’m looking at this bash/cron combo.

Having a more automated system to provision containers integrated with a more DevOps-y set up makes sense here. For me this might mean using Ansible, but for you it might be something else!  I’ve heard good things about cloud-init and Juju charms. As well, you’d need a system to monitor and alert on resource consumption.  Finally, a complete production deployment would need at least a pair, if not more, of servers so that you can run LXD in a more highly available setup.

Given this is my first venture into running LXD, I’d love any feedback from you!  Corrections and input are welcome, and praise too of course if I’ve earned it ;) Thanks for reading this far on what may be my longest post ever!

Howto: Sympa 6.2 on Ubuntu 17.04

2 minutes, 55 seconds

This post is a continuation of my last post on the topic, “Howto: Sympa 6.1 on Ubuntu 16.04“.  It should come as no surprise that this is about installing Sympa on the most recent version of Ubuntu to get the most recent version of Sympa (at the time of this writing).  That’d be Sympa 6.2.16 and Ubuntu 17.04. The steps only vary a little between the two, but here’s all of them for completeness.

Assumptions/Prerequisites

Like the last post, this one assumes you have root on your box.  It assumes you have Apache2 installed.  It assumes you’re running a stock Ubuntu 17.04 install.  It assumes you want to run Sympa on your server.  It also assumes you’ll be using Postfix as the lists’ MTA. It assumes you have a DNS entry (A record) for the server.  As well, it assumes you either have an MX record pointing to the A record, or no MX record so the MX defaults to the A record.  If this doesn’t apply to you, caveat emptor!

To recap, that’s:

  • Apache 2 installed and working
  • Postfix as MTA
  • Ubuntu 17.04 server
  • Existing DNS entry
  • Run all commands as root

I also was using this server solely to serve Sympa mail and web traffic so if you have a multi-tenant/multi-use server, it may be more complicated.

Steps

These steps assume you’re going to install Sympa on list.example.com.  There’s no reason you couldn’t use example.com instead. Your zeroth step should be sudo apt-get update && sudo apt-get upgrade.

  1. First step is to install Sympa. As well, we’ll install some packages that are used to fix a bug in 6.2’s GUI where the drop down nav menus don’t work:
    apt-get install -y sympa javascript-common libjs-jquery-migrate-1
  2. When prompted during this install:
    1. Choose a good mysql root password and enter it when prompted
    2. Please select the mail server configuration type that best meets your needs: Internet Site
    3. System mail name: list.example.com
    4. Which Web Server(s) are you running?: apache 2
    5. Configure database for sympa with dbconfig-common?: Yes
    6. Database type to be used by sympa: mysql
    7. MySQL application password for sympa: <blank> (will assign random one)
  3. Now it’s time to fix that GUI bug in 6.2 with a copy:
    cp /usr/share/sympa/default/web_tt2/head_javascript.tt2 /etc/sympa/web_tt2

    And then edit /etc/sympa/web_tt2/head_javascript.tt2 and add the jquery-migrate-1.js file after jquery.js:

    <script src="[% static_content_url %]/external/jquery.js"></script>
    <script src="/javascript/jquery-migrate-1.js"></script>
  4. Edit /etc/sympa/sympa/sympa.conf to match the following values:
    listmaster listmaster_here@domain.com,other_here@domain.com
    domain list.example.com
    wwsympa_url https://list.example.com/wws
    default_home home
    create_list intranet

    The “intranet” value will prevent someone from signing up and requesting a list without any approval.

    list.example.com should show the Sympa UI, w00t!

  5. Ensure Sympa starts at boot:
    update-rc.d sympa defaults
    update-rc.d sympa enable
  6. Ensure Postfix is updated: in /etc/postfix/main.cf edit these values to match:
    myhostname = list.example.com
    alias_maps = hash:/etc/aliases,hash:/etc/mail/sympa/aliases
    alias_database = hash:/etc/aliases,hash:/etc/mail/sympa/aliases
    mydestination = $myhostname, example.com, list.example.com, localhost.example.com, localhost
    relay_domains = $mydestination, list.example.com
    local_recipient_maps =
  7. Add default aliases for Sympa at the top of /etc/mail/sympa/aliases:
    ## main sympa aliases
    sympa: "| /usr/lib/sympa/bin/queue sympa@list.example.com"
    listmaster: "| /usr/lib/sympa/bin/queue sympa@list.example.com"
    bounce+*: "| /usr/lib/sympa/bin/bouncequeue sympa@list.example.com"
    sympa-request: email1@example.com
    sympa-owner: email1@example.com
  8. Rebuild aliases and reboot:
    newaliases
    reboot
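After the reboot, a quick sanity check never hurts (hedged: service and log names can vary between setups):

systemctl status sympa
tail -n 50 /var/log/mail.log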

Sympa should now be up and running at list.example.com!  All mail in and out should work so you can run your own list server. Please report any problems so I can keep this post updated and accurate – thanks!

Howto: Sympa 6.1 on Ubuntu 16.04

3 minutes, 6 seconds

Recently I was tasked at work to get an instance of Sympa set up. Their docs are a bit scattered, but I found a promising post on debian.org which suggested I could get away with an apt-get install instead of needing to compile from source. Well, it turns out I did get it working, but only after a lot of trial and error. Given that someone else might be trying to do this, and because I had to document the exact steps for work, here’s a handy dandy blog post which I hope will help someone trying to do the same thing.

Good news for those looking to do this for Sympa 6.2 (latest at time of publishing): I have a post on how to do this exact thing on the soon-to-be-released Ubuntu 17.04 with Sympa 6.2.  Stay tuned!

Assumptions/Prerequisites

This post assumes you have root on your box.  It assumes you have Apache2 installed.  It assumes you’re running a stock Ubuntu 16.04 install.  It assumes you want to run Sympa on your server.  It also assumes you’ll be using Postfix as the lists’ MTA. It assumes you have a DNS entry (A record) for the server.  As well, it assumes you either have an MX record pointing to the A record, or no MX record so the MX defaults to the A record.  If this doesn’t apply to you, caveat emptor!

To recap, that’s:

  • Apache 2 installed and working
  • Postfix as MTA
  • Ubuntu 16.04 server
  • Existing DNS entry
  • Run all commands as root

I also was using this server solely to serve Sympa mail and web traffic so if you have a multi-tenant/multi-use server, it may be more complicated.

Steps

These steps assume you’re going to install Sympa on lists.example.com.  There’s no reason you couldn’t use example.com instead.

  1. Install sympa:
    apt-get install -y sympa
  2. When prompted during this install:
    1. Choose a good mysql root password and enter it when prompted
    2. Please select the mail server configuration type that best meets your needs: Internet Site
    3. System mail name: lists.example.com
    4. Which Web Server(s) are you running?: apache 2
    5. Database type to be used by sympa: mysql
    6. MySQL application password for sympa: <blank> (will assign random one)
  3. Sympa 6.1 has a present! It ships with a bug.  Fix the regex on line 126 of /usr/share/sympa/lib/SympaSession.pm so it looks like this (note “{” is now “\{” ):
    if ($cookie and $cookie =~ /^\d\{,16}$/) {
  4. Edit /etc/sympa/wwsympa.conf to change line 81 to 1 instead of 0:
     use_fast_cgi 1

    If you don’t do this step, you’ll see full HTML pages show up in /var/log/syslog and only 500 errors in the browser :(

    lists.example.com should show the Sympa UI, w00t!

  5. Ensure Sympa starts at boot:
    update-rc.d sympa defaults
    update-rc.d sympa enable
  6. Ensure Postfix is updated: in /etc/postfix/main.cf edit these values to match:
    myhostname = lists.example.com
    smtpd_tls_cert_file=/full/path/to/cert/apache/uses.pem
    smtpd_tls_key_file=/full/path/to/key/apache/uses.pem
    alias_maps = hash:/etc/aliases,hash:/etc/mail/sympa/aliases
    alias_database = hash:/etc/aliases,hash:/etc/mail/sympa/aliases
    mydestination = $myhostname, lists.example.com, localhost
    relay_domains = $mydestination, lists.example.com
  7. Update /etc/sympa/sympa.conf so that these values match:
    listmaster email1@example.com,other_here@domain.com
    domain lists.example.com
    wwsympa_url https://lists.example.com/wws

     

  8. update /etc/sympa/wwsympa.conf so that these values match:
    default_home lists
    create_list intranet

    The “intranet” value will prevent someone from signing up and requesting a list without any approval.

  9. Add default aliases for Sympa at the top of /etc/mail/sympa/aliases:
    ## main sympa aliases
    sympa: "| /usr/lib/sympa/bin/queue sympa@lists.example.com"
    listmaster: "| /usr/lib/sympa/bin/queue sympa@lists.example.com"
    bounce+*: "| /usr/lib/sympa/bin/bouncequeue sympa@lists.example.com"
    sympa-request: email1@example.com
    sympa-owner: email1@example.com
  10. Rebuild aliases and reboot:
    newaliases
    reboot

Sympa should now be up and running at lists.example.com!  All mail in and out should work so you can run your own list server. Please report any problems so I can keep this post updated and accurate – thanks!

Scanning multiple subnets for SSH servers that accept passwords

1 minute, 25 seconds

At work I was tasked to see if any of our servers are running SSH configured to allow passwords instead of strictly only allowing SSH keys.  You can tell a server allows passwords when you get a password prompt like this:

$ ssh user@example.com 
user@example.com's password:

Of course we’ll use nmap to scan for open SSH ports. I suspect I should have used nmap’s NSE scripting, but we’ll plod ahead without it.  Here’s the call I used to scan each subnet for open SSH ports and append the results to open.raw.txt. Run this for each of your subnets:

nmap -PN -p 22 --open -oG - 1.2.3.0/24 >> open.raw.txt

Here’s some example output from open.raw.txt:

Host: 1.2.3.1 ()	Status: Up
Host: 1.2.3.1 ()	Ports: 22/open/tcp//ssh///

To get that all formatted nicely for the next phase, we’ll just cut out the dupes and grab just the IPs:

grep 'Up' open.raw.txt |cut -d' ' -f2 > open.ips.txt

Finally, taking much inspiration from this script on StackOverflow, I wrote an expect script to check for servers with a password prompt on SSH, called ssh.test.sh:

#!/usr/bin/expect
# Returns 'YES' if the host presents a password prompt, 'NO' if it doesn't, 'TIMEOUT' if it never answers
proc isHostAlive {host} {
    set timeout 5
    spawn ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=QUIET -o PasswordAuthentication=yes ssh-testing@$host
    expect {
        timeout {puts "Timeout happened"; return 'TIMEOUT'}
        eof {return 'NO'}
        -nocase "password:" {send "exit\r"; return 'YES'}
    }
}

# The list of hosts to check
set serverList {1.2.3.1 1.2.3.2 1.2.3.3 1.2.3.4 1.2.3.5}

# Loop through all the servers and report each one's status
foreach server $serverList {
    puts "\n$server : [isHostAlive $server]\n"
}

To execute and log the results, make the script executable (chmod +x ssh.test.sh) and call:

./ssh.test.sh > password.accepted.raw.txt

And finally, to clean up those results into a file with “YES”, “NO” or “TIMEOUT” for each IP, just use this final grep:

egrep  'YES|NO|TIMEOUT' password.accepted.raw.txt > password.accepted.txt

The final results will look like this:

1.2.3.1 : 'YES'
1.2.3.2 : 'NO'
1.2.3.3 : 'YES'
1.2.3.4 : 'YES'
1.2.3.5 : 'TIMEOUT'

Happy SSH testing!

Ubuntu 16.04 on Dell XPS 13 9350 (Updated 11/24/16)

4 minutes, 52 seconds

Why upgrade?

After watching my shopmate get one, then seeing The Wirecutter recommend it (as of May 2016) and then seeing Dell upgrade it, I was very tempted to get a Dell XPS 13 and run Linux on it.

The straw that broke the camel’s back was seeing factory refurbished ones with 16GB of RAM, 512GB of disk, an i7, a Dell 1 year warranty and that gorgeous QHD+ resolution (3200×1800) for less than $1,300 on eBay (again as of May 2016). This is about $700 off of retail for a comparably equipped machine (or a lot more). Though this ships with Windows 10, I figured I could install Ubuntu on it.

Ubuntu 16.04 install

Before pulling the trigger on eBay, I did a lot of reading on the forums about how to install over Windows. Specifically, posts like this were key. I knew in the BIOS that I needed to turn off Secure Boot and switch disk mode to AHCI. Further reading suggested that it might be a real uphill battle to get everything, hardware wise, working.

I’m happy to report that just about everything works under 16.04! I installed it after making a USB boot drive from the .iso. The touch screen doesn’t work after suspend. I was hopeful that the xinput enable/disable trick might fix it, but no such luck just yet. Given that you can’t do single finger scroll and pinch to zoom when touch is working, I’m not too bummed about this.

I was very pleased to see that this cheap USB-C to DisplayPort adapter works great. Though it can’t quite do 4k (the resolution shows in “Displays”, but my Dell won’t show an image), it will do all resolutions below that. It works on boot and after repeated hot swaps. The cheap USB to Ethernet adapter worked flawlessly as well. Handy times!

The other pleasant surprise was that the Broadcom WiFi/Bluetooth card also seems to just work under 16.04. Admittedly, I’ve just been using the WiFi with no problems and haven’t tested the Bluetooth.

The trackpad works as well as my MacBook Air trackpad did under 14.04. It looks like there’s some PS/2 vs Native twiddling that might improve palm detection, but I’m happy with it as is given it’s status quo for me.

Software and eBay

There’s a little hiccup with OwnCloud resetting after suspend or WiFi hopping, but this is nothing to do with the XPS 13, and everything to do with 16.04. As well, the version of KeepassX that ships with 16.04 no longer supports the old version 1 flavor of the password safe. It was high time I upgraded to version 2 anyway!

Finally, anyone want to buy an 11″ Air formerly running Ubuntu 14.04, but now with a clean install of El Capitan?

6/2/2016 Update

After having this laptop for a bit now and having made a few tweaks, I wanted to update this post. The first, and most important, change is that I found the Broadcom chip I mentioned above did not “just work”. Though strictly anecdotal, I found that the card had intermittent high packet loss on WiFi and was actually unusable when I went to use my Bluetooth mouse for a few hours. Swapping out the card it shipped with for the Intel 7265 802.11ac one instantly solved all these problems. The Intel card may have caused WiFi to stop working just once after a bunch of suspends and resumes, but I can’t remember so it must not be that annoying ;)

Two notes on replacing the card: The first is that while the Amazon link cited for this is certainly the right price at $20.99 (as of Jun 6 ’16), it is a bit slow in shipping. The second is that you should totally use iFixit’s XPS 13 tear down guide as a how to for opening this laptop. As well, they cite the online Dell repair manual (pdf), if you want to reference that. And, yes, when iFixit says the bottom of the laptop takes more force to open than you think it should, they’re totally correct!

The other big change is that I installed TLP in hopes of improving battery life. Using the default config out of the box seems to yield good results. However, this is just anecdotal evidence again, nothing scientific.

Otherwise, the general update is that this laptop continues to rock. I use it about 3-10 hours a week, including some intense 4 hour work sessions at the cafe. For a full dev environment I run two Vagrant VMs on VirtualBox along with Chromium, Firefox and PHP Storm as my IDE. I’m pretty sure I could get 6-8 hours of battery life running these apps if I was conservative with the screen brightness. Sometimes when I resume it, I’ll see 14 hours remaining in the battery life. It almost always drops down, but still neat to see.

It looks like you can still get this bad boy on eBay for just under $1,400!

8/13/2016 Update

I’m still totally loving this laptop! Seeing my wife get more comfy with her laptop and its touch screen, I wondered if I could find a fix for mine under 16.04. After some searching around I found a fix that is…so simple it’s silly. Quickly close and open the lid:

close the lid just to enter suspend state, but then reopen it quickly, so that it stays in suspend state even when the lid is open. Then press power button to wake it up. Then the touchscreen works again!
xps9350: touchscreen stops working after sleep

After testing this over the last 4 days, I can confirm it works. Coupled with the fact that the OS and Chrome/Chromium handle single finger scroll and pinch to zoom, I’m a happy camper! The icing on the cake was finding the Grab and Drag plugin for Firefox. Though pinch to zoom doesn’t work, single finger scroll does. Handy times!

11/24/2016 Update

One of my fine readers pointed out that the touch screen under 16.04 has been more properly fixed. This is awesome! This gist on GitHub has all the info you need. Be sure you read the part about using uname -r to get your kernel version and make sure it matches the line that the script is hard coded against. For me it was 4.4.0-45, but for the original author it was 4.4.0-47. As well, it didn’t fully start working for me until a restart.

5/17/2018 Update

Almost exactly two years in and this laptop is still my daily driver! It’s been *super* stable and I’ve had no problems with it. I’m still on 16.04, despite 18.04 having just been released. Also, check out my latest post about USB-C accessories I’m using!

Let’s Encrypt TLS & A+ on SSL Labs

1 minute, 23 seconds

As you may recall, around here we like our TLS, and we like it rare!  I have no idea what “rare” means in the context of web encryption, but you know, that’s the way we like it!  We also like it cheap!  Even better than cheap (free) is open (libre)!  So, put it all together, and you get Let’s Encrypt TLS certs on plip.com and related properties.  Yay!

I saw the EFF/Let’s Encrypt crew speak at DEF CON this past summer, and it was super inspiring.  I’d already heard about the project, but it was that much more exciting to hear their talk.  I was dead set on cutting over from my old CA to the new, definitively open, one.
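For context, fetching and installing a cert with the beta-era client looked roughly like this – I’m going from memory here, and the flags varied from release to release:

./letsencrypt-auto --apache -d plip.com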

While I was in my server, I went and visited my fave site for secure web server configs, cipherli.st. The only option I can’t run is SSLSessionTickets, as it requires Apache 2.4.11 or later. We’ll get there later with an Apache upgrade.

Put this all together?  You get a big fat A+ on SSL Labs.  w00t!

Some caveats are that Let’s Encrypt is just in public beta, so there’s still some kinks to work out. For example, I mistakenly tried to get a cert for a wildcard vhost (eg “*.foo.bar.com”).  I got back an error:

An unexpected error occurred:
The request message was malformed :: Error creating new authz :: Invalid character in DNS name
Please see the logfiles in /var/log/letsencrypt for more details.

So, while I shouldn’t have selected that domain to get a cert (Let’s Encrypt doesn’t do wildcard certs), the error was a bit cryptic.  Fortunately the fine folks in their IRC channel (web IRC client – #letsencrypt @ Freenode to DIY) quickly pointed out not to use wildcards and that there was issue 1683 already open.

Awesome sauce, thanks Let’s Encrypt!