The Ubuntu Home Server

The following is a script for my presentation at LinuxFest Northwest in Bellingham, WA, on April 28th, 2018, although I didn't necessarily stick to it because the room was packed and I wanted to make it feel more natural. View the recording here.


Welcome to The Ubuntu Home Server: Convenience and Security. I am Michael DeMers, a teacher, educational technologist, and technical trainer. I work for the Clark County School District in Las Vegas, Nevada, where I consult with teachers as a digital coach and instruct students on digital citizenship and Common Core technology standards. In this session I will be talking about my experience setting up a home server, what it is doing for me right now, most of the things that I learned along the way, what I really like about my setup, and why I recommend that other Linux users try it.
If you’ve never attempted something like this before, I hope you can learn from my mistakes. If you’re a pro sysadmin, try not to let your eyes roll across the floor; I learned a lot of things the hard way. If you’re somewhere in between, you’re in good company. Stick with me and I’m sure we’ll all pick up something enjoyable, and maybe even informative, from the experience.

Introduction

I’ve been using Linux for a while. I have tried out many different flavors, failed gloriously a few times at installing Arch, succeeded once, and have spent a good amount of time exploring what the Linux desktop has to offer. I still use Linux as the primary OS on my work computer and laptop, and for a lot of what I do at home. I've even been able to convert some friends and family into Linux users.

I knew that Linux had a lot more that it could offer. I just needed to take the leap and work on a server. At the time I didn’t want to use a cloud service, I didn’t want to build a rack server, and I didn’t want to have to worry about a desktop that was always on. I needed something small that I could play around with, something I wouldn’t mind accidentally bricking or having compromised. Overall, I wanted to be able to experiment and see what I could gain without too much at stake.

I wanted to have a central file system. I wanted to have something to run a couple of dedicated game servers. I wanted to have something that allowed me to access my private network from anywhere. So, I needed something to toy with to try and achieve some of those goals...and I had to read up a lot.

What I already had to play around with was an Acer laptop with an AMD E-350 APU. If you're unfamiliar with APU chips, they are Accelerated Processing Units that combine a GPU and a CPU on a single die (integrated graphics). I had failed a few times to turn this into a Linux desktop, as I couldn’t find a decent graphics driver for the APU and the open-source driver wouldn't run it correctly. It would never set the monitor to its native resolution, and that bugged me way too much (an improperly configured display is one of my pet peeves). I dug it out of the box in the garage, cleared off a USB drive, and got to work.

First Installation

My first instinct was to try to really “server-it-up” and use Red Hat or CentOS. I wasn't setting up a shared host, though, or any virtual machines. At the time, two of my devices were running Ubuntu MATE and I was really leaning that way. Don’t get me wrong, Red Hat and CentOS are fantastic options for hosting any of the services I was looking to use. I just opted for Ubuntu because...well, just because. No sense in getting too far into distribution choices. The title of this presentation is "The Ubuntu Home Server", after all.

Predictably, Ubuntu was an easy, breezy, beautiful install. It's one of the few distributions that has always been easy to get into. A wonderful blinking cursor was there, just waiting for me to log in. Blinking on the 17” laptop screen that was shining like a beacon in the night, eating up electricity and begging for attention. This was not what I had imagined at all. I needed it dark, and I needed to be able to close the lid and log in remotely. I ran a Cat5e cable from my router, plugged it in, and got to work. I'd later realize that I should have configured my network options before connecting to the network.

I logged in to my account on the device and used my phone to Google how to turn off the power options on the lid switch. Easily enough, I added HandleLidSwitch=ignore to the /etc/systemd/logind.conf file. Now I could keep the lid open or shut and it wouldn't affect the system at all. I quickly ran ifconfig to check what local IP I had been assigned so I could log in remotely and not be sitting in the corner typing on the laptop I didn't like.
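In case it helps, here's roughly what that change looks like (the docked variant is optional, and restarting logind picks up the change without a reboot):

# /etc/systemd/logind.conf
# Ignore the lid switch so closing the laptop doesn't suspend the server
HandleLidSwitch=ignore
HandleLidSwitchDocked=ignore

sudo systemctl restart systemd-logind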

Monitor

Something that I wanted to take care of right away was the screen. At this point, I could shut the lid but the screen remained lit. I wanted the server to be low power and low heat, especially since it would be tucked away behind an end-table in my living room. A cursory search brought me to a few forum posts that directed me to use the vbetool package. It allowed me to use the Display Power Management Signalling (DPMS) feature to force the display to power off.

I added the DPMS command (sudo vbetool dpms off) to my /etc/rc.local file so I could automate the screen toggle at every startup. While I was in there I figured I'd add sudo apt update && sudo apt upgrade -y so I wouldn't have to manually run updates.
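The resulting /etc/rc.local was short; something like this (rc.local already runs as root at boot, so the sudo prefixes aren't strictly needed, and the file has to stay executable and end with exit 0):

#!/bin/sh -e
# /etc/rc.local -- executed once at the end of boot
vbetool dpms off               # blank the laptop panel
apt update && apt upgrade -y   # grab updates on every boot
exit 0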

Static IP

I knew from my studies that this was definitely a use-case for a static IP address. Sure, some of the functions could be handled through port triggering, but I wanted a single spot I could direct my local devices to. I logged into my wonderful ISP-provided router and entered the MAC address for my new server. I clicked "save" and ... nothing. No feedback from the UI at all. In fact, it logged me out of the interface. I tried again, and again, and read the documentation again. After struggling plenty with my consumer-grade router and its crummy web interface that didn't want to actually assign static IPs (get it together, Comtrend), I decided to do it the harder way.

More fun times in config files, yay! Luckily it never gets old. I checked out /etc/network/interfaces (use cat or your favorite text editor) and located the interface name I had seen earlier when I ran ifconfig to find my IP address. I figured I'd just change the IP address to one low in the DHCP scope.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto enp6s0
iface enp6s0 inet dhcp       #change 'dhcp' to 'static'
address 192.168.1.10         #change to something easy, like 192.168.1.2
netmask 255.255.255.0
gateway 192.168.1.1
dns-nameservers 8.8.8.8 208.76.94.200
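
After saving the file, something along these lines applies the change (assuming enp6s0 is your interface; if you're already connected over SSH, bouncing the interface will drop the session):

sudo ifdown enp6s0 && sudo ifup enp6s0   # or: sudo systemctl restart networking
ip addr show enp6s0                      # confirm the new static address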

Then all I had to do was take that IP address out of the DHCP scope in my router. For me, that was in the LAN settings. In there I had the option to enable or disable the DHCP server, set the start and end addresses for DHCP, and (supposedly) assign static IPs (NFG).

Ports

While I was in there I figured I'd go into the NAT settings and set up port forwarding for shell access. Easy enough to do: just open 22, or whatever port you're going to use to connect. I'll skip over the details of the big mistake I made by putting my server into the DMZ of my router...you can read about it on my blog, or just buy me an IPA and I'll tell you about my embarrassing learning experience and cringey conversation with my ISP. Don't be ashamed of your mistakes; learn from them and teach others from them.

Don't fall into the trap of thinking that you'll be secure through obscurity by using some random port for SSH. It's not that hard for someone to poke around and find open ports. If you want really secure access, you should use RSA keys for SSH and add a package that automatically blocks repeated failed connection attempts. SSHGuard and Fail2Ban came in handy when I was trying to troubleshoot my newbie DMZ mistake from earlier. Really clear instructions are laid out on Unixmen here, but below are their descriptions of the two tools (I'll sketch the key-based SSH setup right after them):

  • SSHGuard is a fast and lightweight monitoring tool written in C. It monitors and protects servers from brute-force attacks using their logging activity. If someone continuously tries to access your server via SSH with several (maybe four) unsuccessful attempts, SSHGuard will block them for a while by putting their IP address in iptables, then release the lock automatically after some time. It protects not only SSH but almost all services, such as sendmail, exim, dovecot, vsftpd, proftpd, and many more.

  • Fail2Ban is an open-source intrusion prevention system that can be used to prevent brute-force and other suspicious, malicious attacks. It scans log files (e.g. /var/log/apache/error_log) and bans IPs that show malicious signs such as too many password failures or probing for exploits. Fail2Ban is generally used to update firewall rules to reject the offending IP addresses for a specified amount of time, although any other arbitrary action (e.g. sending an email, or ejecting the CD-ROM tray) can also be configured. Out of the box, Fail2Ban comes with pre-configured filters for various services (Apache, Courier, SSH, etc.).

source
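
And, as promised, the rough shape of key-based SSH hardening (usernames and addresses here are placeholders):

# On your client machine: make a key pair and copy the public half over
ssh-keygen -t rsa -b 4096
ssh-copy-id user@192.168.1.2

# On the server, in /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   PermitRootLogin no
sudo systemctl restart ssh

# Then add one of the brute-force blockers
sudo apt install fail2ban    # or: sudo apt install sshguard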

And I was off and running...or, at least I could get started. I still needed some applications. I’ll go through the setup of most packages I use later, but I started out with OpenSSH Server, Samba, and Docker (even though I didn’t really have a plan for which containers I would even use). After a brief bout with UFW and folder permissions, I had the Samba server running and quickly used SCP to copy over my music library.
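
For reference, the opening moves looked something like this (the share path and addresses are just examples):

sudo apt install openssh-server samba docker.io

# Let SSH and Samba through the firewall, then turn it on
sudo ufw allow OpenSSH
sudo ufw allow Samba
sudo ufw enable

# From my desktop: copy the music library up to the new share
scp -r ~/Music user@192.168.1.2:/srv/share/music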

I did it! I was only a few hours in and I had a server up and running, and I could finally access the music library that I’ve been toting around since 2001 without keeping a local copy on every device I owned. A very welcome change from streaming services and from having to store the files locally everywhere. Samba Network Music Player on Android and MusicBee on Windows are great applications for accessing network-hosted music files, by the way.

Experimentation and...

I played around a lot more with web servers on Apache and Nginx (Ghost blogs, WordPress sites, static HTML), trying to build locally-hosted Rails apps, even just downloading little games from CodePen and throwing them up on the server. It was a lot of fun, and I got pretty good at shifting between sites on both Apache and Nginx. But it was messy. I lost track of what packages I had installed, ended up with lots of orphaned config files, conflicts between web services, fights over ports...it got pretty ugly. I didn't want to spend my weekend picking through packages and hoping that apt autoremove would clean it all up for me. I realized that I needed a cleaner solution, and I knew I had a lot more documentation reading to do.

...do it again

I nuked it, reinstalled Ubuntu Server 16.04, got my network and Samba server setup, installed Docker, then installed Cockpit. And there was light. For most things. A few of the applications that I wanted to containerize later needed setup from the terminal to create config files or RSA keys, so I could use Cockpit to maintain them but not to set them up. It still makes starting, restarting, and updating containers quite easy.

Overall, my first setup was a pretty good success. I could use sshuttle or ki4a to forward my web traffic much like a VPN, I had access to my media files from anywhere, and I could easily start and stop a Minecraft server to play around with my son. It stayed like this for a little while, because the server was working and I really didn’t feel like I needed more.
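sshuttle in particular is a one-liner from the client side once key-based SSH is working (0/0 means "forward everything", and --dns pushes name lookups through the tunnel too):

sshuttle --dns -r user@myserver.example.com 0/0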

Functions

I really had to get a handle on the network functions of this server for it to be able to do more. Since it only has a single IP, I had to make some choices about what services I could run. I knew that I really needed to get all traffic to my server encrypted and I wanted to get a new container that required encryption, so SSL/TLS encryption was first on the table.

I highly recommend purchasing a domain name if you do not already have control over one you can use for personal projects. If you have a domain, you can just pick a sub-domain to forward to the static IP that you got from your ISP and you’re all set. Google will now register domains at domains.google.com, or you can check out some cheap domains at NameCheap. I have domains on both. I prefer Google if you're doing fun things like custom resource records, but if you just want a cheap name so you don't have to memorize your public IP then go with NameCheap.

SSL

An SSL certificate was super easy to set up with Let's Encrypt and Certbot. I really do commend them on their free and open approach to encryption and security, and their contributions to the fight for net neutrality. All I had to do was follow the tutorial on their website, which instructed me to install the software-properties-common package, then add their PPA and grab the python-certbot-nginx package. I set it up to automatically forward all traffic to the HTTPS address and to auto-renew the certificate, just so I wouldn’t have to worry about it.
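On 16.04, those steps boiled down to roughly the following (swap in python-certbot-apache if you're serving with Apache; the domain is a placeholder):

sudo apt install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt update
sudo apt install python-certbot-nginx

# Request the certificate and let Certbot handle the HTTPS redirect
sudo certbot --nginx -d cloud.example.com

# Renewal runs on a timer; this just confirms it works
sudo certbot renew --dry-run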

NextCloud/OwnCloud

Once SSL was set up, a few UFW allows and port forwards later and I was good to go for anything needing encryption. The first thing I wanted to try was Nextcloud. I really liked the idea of a cloud storage service that wasn’t Google or Dropbox. I don’t fully trust Google, and Dropbox has been a pain in my neck in the past. Even though I’ve deleted items from Dropbox and tried to clear things out, it still claims that I’m over their free tier limit. Plus it’s just offloading files to an S3 bucket on AWS, and I can do that fine on my own.

Anyways, Nextcloud goes smoothly if you download the container in Cockpit and follow the directions on DockerHub (Nextcloud official). I chose the persistent install because I wanted to keep my files on my server, even if I blew away the container.
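
If you'd rather pull it from the command line than through Cockpit, the persistent setup from the DockerHub page is roughly this (the host port and volume name are up to you):

# A named volume keeps your files even if the container is removed
docker run -d --name nextcloud \
  -p 8080:80 \
  -v nextcloud:/var/www/html \
  --restart unless-stopped \
  nextcloud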

One thing to note: Nextcloud requires SSL encryption, which is what prompted me to get the certificate to begin with.
Also, write down your MySQL password in case you need to recover things.

Ghost

Next came Ghost. I wanted to run a blog of my own. I’ve set up similar sites in the past for authors and non-profits that I’ve been referred to by friends and family, but I hadn’t had my own. I tried the Docker container, but I wasn’t quite sold on it. I found it a pain in the neck to have to bash into a container on a server just to change assets and update CSS. After a few rounds with different themes, I decided to scrap the container and install it directly on the server. This meant I had to ditch Nextcloud, however, or choose a different port for it. I opted to just ditch it since the files were already on the server, and I could just grab them through SSH.

With a domain and SSL encryption already taken care of, Ghost goes onto bare metal with just a bit of tutorial following. You’ll need to set up a new user for Ghost to run as, install full MySQL, install Node.js, then install Ghost. Using the Ghost CLI has made this a much easier process than it used to be.
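The rough shape of the install, as I remember it from the Ghost docs of the time (the directory is a placeholder, and you'll want whichever Node.js version Ghost currently supports rather than the 8.x I used):

sudo apt install nginx mysql-server
curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
sudo apt install nodejs

sudo npm install -g ghost-cli
sudo mkdir -p /var/www/ghost && sudo chown $USER:$USER /var/www/ghost
cd /var/www/ghost
ghost install    # walks you through MySQL, Nginx, SSL, and the systemd unit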

If you’re going to be using Ghost, though, you’re going to want to go through the painstaking steps of setting up MailChimp and properly configuring the production JSON file (config.production.json). This means setting up forwarding to and from your domain for MailChimp (lots of synthetic DNS records). After lots of reading and lots of changes to configuration files, you’ll have a wonderful, elegant, and free blogging platform ready to go. The best part is, if you forget your administrator password for Ghost you won’t have to destroy a container and start all over again (another beer-time story).
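
For what it's worth, the mail block of config.production.json ends up looking something like this (the host, port, and credentials are placeholders for whatever SMTP service you point it at):

"mail": {
  "transport": "SMTP",
  "options": {
    "host": "smtp.example.com",
    "port": 587,
    "auth": {
      "user": "postmaster@example.com",
      "pass": "your-smtp-password"
    }
  }
}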

OpenVPN

OpenVPN is another application that I chose to run in a container. It’s easy to turn on and off as a container, and you could run multiple containers for multiple users in a nice, controlled manner. I've set up Kyle Manna's OpenVPN container a couple of times, and it's a breeze. There isn’t an official container, but this one has fantastic documentation that’s easy to follow. Using Kyle’s guide and his container you can quickly set up a TLS-secured VPN with no password, opting for a 2048-bit RSA key instead.
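The quick start from his README is only a handful of commands; roughly this (the domain and client name are placeholders):

# Create a data volume, generate the server config, and build the PKI
OVPN_DATA="ovpn-data"
docker volume create --name $OVPN_DATA
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig -u udp://vpn.example.com
docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn ovpn_initpki

# Run the server, then generate and export a client profile
docker run -v $OVPN_DATA:/etc/openvpn -d -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
docker run -v $OVPN_DATA:/etc/openvpn --rm -it kylemanna/openvpn easyrsa build-client-full phone nopass
docker run -v $OVPN_DATA:/etc/openvpn --rm kylemanna/openvpn ovpn_getclient phone > phone.ovpn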

I use OpenVPN along with the OpenVPN for Android app, mostly. I do have it set up to use the same key so I can use the VPN from my laptop when I’m travelling, but the majority of the communication and browsing I do when away is done on mobile devices. And since I forward DNS requests through the tunnel too, I can make use of...

Pi-Hole

Pi-Hole is an application that really pushed me to use my server for more. It’s a DNS server that applies filters to DNS requests, blocking advertisements and general internet garbage before they ever reach your device. Of all of the applications and services that my home server is running, this is the one that I value the most. I really do not know how I put up with the large number of ads that made it to my screens; past AdBlock, past the Ghostery script blocker, past everything else that’s browser-based.

Pi-Hole is also easy to set up. Just curl the installer as directed on pi-hole.net and pipe it to bash. You can review the code and install it yourself if you’re skeptical or extra security-conscious. Pi-Hole provides a nice ncurses prompt in the terminal for the setup of your new best friend.
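That one-liner, for reference (reviewing the script before piping anything into bash is never a bad instinct):

curl -sSL https://install.pi-hole.net | bash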

I had two issues when installing Pi-Hole, however. The first was that the web interface wasn’t working. Pi-Hole automatically installs a web interface to update its “gravitational pull” (the list of sites for the filter) and to view statistics about the server. The issue I was having was that it wasn’t serving the PHP properly; when I went to the address for the local web interface, the browser just downloaded the PHP file instead of rendering the page. After a lot of forum browsing I found it to be an issue with the Unix sockets on the server not connecting correctly for PHP. I had to install PHP-FPM and mess with too many config files for my liking.

The second issue happened after the first: the web interface was working. That meant anything else I wanted to use ports 80/443 for would not work (like a blog). Pi-Hole doesn’t need the web interface, though; you can do everything from the terminal. Sure, the graphs are nice to look at, but there's a neat little chronometer (pihole -c) that you could fit onto a small LCD on a Raspberry Pi, as it was probably intended. I just keep the chronometer open in a terminal on my desktop at home so I can happily glance over and see how many DNS requests are getting blocked.

After more research and reading, it appears that there is an option during the initial install to choose whether to enable this web service. I must have overlooked it. It uses lighttpd to serve the pages, though, so it’s easy enough to use systemctl to disable the service so it’s not claiming ports that belong to other services: sudo systemctl disable lighttpd.service. You can also rerun the ncurses setup with pihole -r.

Minecraft

Right now I am working in an elementary school, and I'm famous around there for using Minecraft in education. I won't go into exactly how right now because that could be a talk on its own, but I wanted to have a Minecraft server so that I could prepare maps on the weekend and be able to access them from school without having to copy and paste save files. I also have a six-year-old son who loves video games, and building worlds with him in Minecraft is a really fun experience. Anyways, I don't think I really need to sell you further on the usefulness of a Minecraft server.

You probably know what I'm going to say..."setting up Minecraft is [blank]" Well, it is. Most of this stuff isn't that tough. You just have to be comfortable editing config files, sending docker commands through a terminal, and following through a tutorial. And, as the library teacher at my school says, "Read, read, read, read, read!"

I used the itzg/minecraft-server container image. It pulls the latest Minecraft release and gets it ready to go. It's not a one-click solution, though, so don't expect to be able to download the image in Cockpit and jump right in. You have to set up your Minecraft server parameters and ports through the terminal, so follow the instructions on the DockerHub page. If you try to run it straight from Cockpit without setting it up first, it'll just get stuck in a loop because, at the very least, you have to accept the EULA in the server config.
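The minimal run from the image's documentation looks something like this (the container name and data path are up to you; the EULA variable is the piece that stops the restart loop):

docker run -d --name mc \
  -p 25565:25565 \
  -e EULA=TRUE \
  -v /srv/minecraft:/data \
  --restart unless-stopped \
  itzg/minecraft-server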

While I'm pushing the reading thing...

Docker

One thing I learned from setting up all of these containers over and over again is that you can't expect the "clouds" to align (get it?) and make docker containers work straight from Cockpit...or even straight from Docker. While pulling new container images is easy with either of these two, you really have to read the documentation and have a moderate knowledge of what is going on to be able to use a lot of the tools available there.

If you want to try out a new container, go to the dockerhub page for it. Read the README file. Go to the linked Git repository and check out the comments and issues. Spend some time reading about it and you'll have a much better success rate, and an overall better experience. Now, I will typically have a few windows open when researching a new container: Cockpit on my server, the Dockerhub page, and the Github repository. For me, no amount of tinkering and toying has even come close to the amount of knowledge that I've gotten from reading README files and Github pages.

SteamCMD

SteamCMD is Steam's command-line client. It accesses the Steam network much like the GUI client does, but it's mainly used for running dedicated servers. I needed to create a separate user to run steamcmd, forward ports, and then send CLI commands to download whatever game server I wanted using an application number that I had to look up. It's a cumbersome and messy process. Since more and more games are moving to developer-managed servers only, I wonder how long SteamCMD will remain useful.
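As an example, pulling down the Team Fortress 2 dedicated server looks roughly like this, run as the dedicated steam user (232250 is the TF2 dedicated server app ID; the install directory is whatever you like):

./steamcmd.sh +login anonymous \
  +force_install_dir /home/steam/tf2 \
  +app_update 232250 validate \
  +quit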

There are containers for SteamCMD, but I have yet to try them. I really didn't go too deep into exploration for steam dedicated servers. I did setup a Team Fortress 2 server with bots for myself and some friends to play on, but the auto-generated AI paths really leave you wanting more. There is a way to install a mod to make the AI better, but that was one step beyond my curiosity for this project.

I still have SteamCMD installed, but it's not something that I regularly use. If I took the time to properly install a better AI for TF2 or Counter-Strike maybe I would use it more, but lately I've been spending more time playing other video games.

Things I Learned

Obviously I learned a lot through this process. But of the things I experienced, a few important lessons stick out. The first is that if you are not 100% certain about how to set up an application or service, or if you're not sure you will keep it, pull a container for it. Read the documentation for it. It is much easier to learn in the controlled environment of Docker and then install it properly than it is to mess things up and have to reinstall the whole OS or spend hours fixing things.

The second thing that I learned is that best-practice security is a must. Even if nothing terrible comes from it, you don't want your server bogged down rejecting bogus login attempts. Use UFW. Use RSA keys for logging in. Do not use common user names. Only forward the ports that you are using, do not use the DMZ for your server like I did (briefly).

And the third, biggliest thing that I learned is to automate as much as possible. You don't want a fun home project to become a chore. Use cron to run update jobs. Use systemd to launch the services you add at boot. Set your containers' restart policies to something that makes sense for how you use them. Get a domain so it's easy for you or your client applications to access your server from off-site.
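A couple of small examples of what that automation can look like (the schedule and container name here are arbitrary):

# /etc/cron.d/auto-update -- unattended updates every night at 3am
0 3 * * * root apt update && apt upgrade -y

# Bring a container back up after reboots or crashes unless you stopped it yourself
docker update --restart unless-stopped mc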

My Favorites

My favorite uses for Acerserv are for convenience and security. I use my VPN whenever possible, or I use ki4a or sshuttle to forward all of my data whenever I can't use the VPN. The VPN or tunnel also forwards all of my DNS requests, so ads get blocked through Pi-Hole. This saves me data on my mobile devices and makes me feel at ease checking my bank account or making purchases while at Starbucks or In-N-Out or wherever. Knowing where my data is going, and that it's at least encrypted on that leg of the journey, is a little reassuring.

I find myself listening to my music on my Samba server much more than I ever did when it was just on my desktop. I can easily play it on my phone, laptop, tablet, my wife's computer, it doesn't even matter where I am. I can access it from anywhere if I use the VPN or tunnel, too. And the best part is I don't have to deal with streaming services or advertisements.

Even though I don't use this feature as much as the others, it is really nice to be able to have a handful of Minecraft servers ready to go through different containers that I've saved. You could even have different images with different releases, or modded versions right at the ready. I can quickly launch a server with a map that I've designed for my students, or one of the many that my son and I have built. I don't have to have it running constantly, and I don't have to pay someone else to host it. It's mine and I can do what I want with it.

Should You Do It?

Yes. Absolutely. The convenience and security that you get from controlling your own data and being able to securely access your files cannot be overstated. If you are considering repurposing an old device, or even building a new one, to be a Linux home server, I would highly recommend it. It is a fantastic learning opportunity, you will get more utility out of your home network, and you gain more control over the path of your digital information (at least for one leg of its journey). It won't stop Google or Facebook from spying on you, but it might stop somebody else. Above all, it's fun, and I'm using Linux in a way that I never would have before.
