Goals

My primary goal for this device was to hold and automate my backups. When I made my hardware choice I was mostly looking for something to replace my aging Synology DS414j, which couldn’t easily run Docker or random scripts.

However, since this would be idle 95% of the time I also wanted the ability to run Docker or some other containerization software so the idle cycles could be used for something else.

So my ideal device would:

  1. Be low power (since this would run 24/7)
  2. Support at least a single 2 disk mirror
  3. Have greater than gigabit ethernet

Hardware Choices

I was very interested in finding an ARM solution, but I didn’t find many options with greater than gigabit ethernet and easily attached storage. The Helios64 project would have been perfect but was dead. The NanoPi 6S was the only other 2.5GbE solution I could find, but it would have required all storage to be connected over USB. Meanwhile, something like the Odroid-HC4 could hold my two disks but would be capped at gigabit ethernet. Another issue was that I was considering using Proxmox, which doesn’t support ARM.

When it came to off the shelf options I was really only familiar with Synology, which hadn’t released any models with 2.5GbE (as of Fall 2022), and QNAP, which had been facing ransomware issues with its software all year. I briefly considered TerraMaster, but their F2-221 and F2-422 were also both gigabit devices. On all three, installing a custom operating system felt very messy - not really supported, and done either by booting from USB or by installing the custom OS onto the storage drives themselves. As a result, I ruled out off the shelf solutions as well.

And finally, I wasn’t seeing that much value from my existing 4-disk array. I’d opted for four 2 TB drives because of the “RAID5 stops working in 2009” scare, which was obviously overblown, but I’d rather be safe than sorry. After RAID10 (or RAID6) I had 4TB of usable space and was barely using half of that. And, fully aware that such a RAID array wasn’t a backup, I figured a simple RAID1-style mirror (ideally with ZFS to prevent bitrot) would be good enough, and thus just 2 drives would suffice.

This left me with the following requirements:

  1. Low power X86
  2. 2.5GbE ethernet (10GbE is too power hungry and my network is only gigabit)
  3. 2+ SATA headers

I thought I’d found the perfect solution in the Odroid H2+, but due to chip shortages that product was effectively discontinued. So when I heard that Odroid had announced the H3+, I immediately purchased one, along with an eMMC boot drive, 64 GB of DDR4 RAM, the necessary power supply, and Odroid’s proprietary SATA connectors. I really wanted to get one of their NAS cases, but they were out of stock at the time, so I settled for a cheap two-disk drive cage from Amazon.

I reused the same 2TB hard drives I had in my Synology, which were very old Western Digital Greens. I first copied all of the data to a handful of spare disks I had lying around, then pulled out two disks chosen so that the old array would keep functioning (albeit degraded) as a second copy.

However, once I had set up the H3+ with those 2 disks I realized I’d made a mistake. I’d wanted to keep a classic 3-2-1 strategy for backing stuff up, but I didn’t always have 3 copies of things: I had one copy in the cloud and one copy on my NAS, with no real ability to keep a third. The proper solution would mean getting another device (e.g. another NAS), but I was willing to compromise and store two distinct copies on this device. The H3+ has only 2 SATA connectors, though, which meant I either needed an M.2 to SATA adapter, alongside some sort of external power supply, or drives connected over USB, something that is often warned against with ZFS. Since I wasn’t sure if I’d need to move my boot drive to the M.2 slot, I decided to purchase a single disk enclosure from Sabrent and add a 3rd drive there to store my “third copy” of things.

About 6 months later, one of the hard drives in the mirror failed. Luckily, I was able to recover by using the fourth disk, but at that point I realized I needed to buy new hard drives.

In an effort to avoid SMR disks and “bad batches” I decided to buy two pairs of matching-sized enterprise disks from two different manufacturers in relatively large sizes. I realized that I’d likely want a little more backup space than my primary ZFS pool, so I ended up with:

  • 1 WD Ultrastar 14TB
  • 1 Seagate X16 14TB
  • 1 WD Ultrastar 18TB
  • 1 Seagate X20 18TB

The larger pair would give me around 4 extra TB on the backup side to back up stuff that didn’t fit on the storage drives.

Since I still had my mostly up-to-date copy from the initial setup, along with a working copy on my “third” external drive, I put one 18TB drive in the backup slot and followed a couple of forum posts to effectively copy the pool onto my big new drive, offline and replace the old pool’s disks with the new ones, and then restore from that copy.
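I won’t rehash those posts, but the core of the migration was ZFS send/receive. A minimal sketch of the idea, assuming a pool named storage and a temporary single-disk pool named bigdisk (both placeholder names, not my actual ones):

    # snapshot everything in the old pool, then replicate it to the temporary pool
    zfs snapshot -r storage@migrate
    zfs send -R storage@migrate | zfs receive -F bigdisk/storage

    # after recreating "storage" on the new disks, replicate everything back
    zfs send -R bigdisk/storage@migrate | zfs receive -F storage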

At this point, I tried to swap the Sabrent enclosure for a 2-disk Terramaster D2-300. It seemed to work at first, but I soon found that ZFS scrubs and snapshots caused random lockups that required a full system restart to recover from. I then tried a different single-disk enclosure, only to find that it also suffered from random lockups. It was the cheapest option, but I had wanted to try a different vendor, and its aluminum construction doubling as a heatsink felt safer than the Sabrent’s toaster-style dock. After two failed enclosures I just bought another Sabrent enclosure and stuck with using two of those instead.

In the end I had two drives directly connected via SATA and powered by the board, and two hard drives connected via USB 3 and powered by their own enclosures’ wall warts.

Software Choices

Going into this project, I really wasn’t sure what sort of software I wanted.

A simple, minimalistic Linux server running software RAID, Samba, and/or NFS would in theory provide all the basics for a NAS. I could run scripts via cron jobs as long as I could install the necessary libraries and tools. All of these were tools I’d used in the past on the servers I’d managed in school.

However, I wanted to use this opportunity to learn new things and potentially modernize my toolkit.

My first big decision was to use ZFS instead of classic mdraid. ZFS would in theory prevent bit rot and could handle mirroring and RAID5/6-style redundancy out of the box if I wanted. I’d avoided ZFS in the past (~2009) because at the time ZFS on Linux was effectively brand new and the Linux community seemed to have rallied around btrfs, which promised a very similar feature set. Since then, though, btrfs seems to have stalled out and still doesn’t recommend RAID 5/6 configurations, while the ZFS community has rallied around ZFS on Linux (now called OpenZFS) to the point that the BSD implementations now use that codebase. Unfortunately, my hardware doesn’t support ECC memory, so I’ll just have to hope that I don’t run into a scrub of doom. Another huge benefit of ZFS is that I can use replication to make copies and backups of my data faster and more securely than with rsync.

The next big decision was which operating system to use. My main choices were:

  • “vanilla” Debian or Ubuntu - tried and true classic, just about anything else I’d consider would be based on this and could potentially be installed on top of this
  • Proxmox - based on Debian, but offers kvm VMs and LXC containers.
  • TrueNAS Core - I’d known of this project as FreeNAS back in the day - it’s actually how I learned about ZFS. Based on BSD so hardware and software support is tricky.
  • TrueNAS Scale - effectively merging the best of FreeNAS and Debian, but relatively new since it was only released earlier this year. Prefers Docker containers over LXC.
  • Unraid - Slackware-based proprietary OS whose killer feature seems to be handling drives of different sizes. Doesn’t yet support ZFS, ruling it out.
  • OpenMediaVault - based on Debian, similar to TrueNAS Scale, but also allows MergerFS+SnapRaid. Not sure how those work with ZFS.

Ultimately, I went with Proxmox because I suspected that virtualization would be core to my backup strategy and would offer better isolation. I could, in theory, still run TrueNAS Scale or OMV virtualized if I wanted, alongside any other interesting self-hosted software. The biggest unknown was LXC, which I was less familiar with than Docker.

Proxmox Install

My original plan was to install Proxmox on top of an existing Debian install, mostly due to familiarity. I grabbed the Bullseye ISO from here, did a standard install, and read up on Debian’s non-free firmware timeline.

Unfortunately, I couldn’t boot after a restart.

Next up, I tried the official Proxmox ISO instead. Everything worked until I had to partition my storage, at which point Proxmox gave me the following error:

Unable to get device for partition 1 on device /dev/mmcblk0

Proxmox devs strongly recommend not installing on eMMC. However, that post links to this guide with the following workaround:

You can use debug mode in the installer to edit the install script at /usr/bin/proxinstall and add an elsif branch that matches mmcblk devices just like nvme:

 } elsif ($dev =~ m|^/dev/nvme\d+n\d+$|) {
     return "${dev}p$partnum";
 } elsif ($dev =~ m|^/dev/mmcblk\d+$|) {
     return "${dev}p$partnum";
 } else {
     die "unable to get device for partition $partnum on device $dev\n";
 }

After that change, an eMMC device can be selected as an installation target, allowing me to move forward with the install.

However, I did want to address the concerns the Proxmox devs had about the volume of writes.

First, I did not create a swap partition; with 64GB of RAM I doubt I’ll ever need one. Instead, I used zram for a purely memory-backed swap (as opposed to zswap, which does need a backing swap partition). Installing zram-tools sets up the proper zramswap service.
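For reference, the whole thing is a package install plus an optional defaults file; the values in the comments below are illustrative tweaks, not something the package requires:

    apt install zram-tools
    # optional tuning in /etc/default/zramswap, e.g.
    #   ALGO=zstd      # compression algorithm for the zram device
    #   PERCENT=25     # size the device as a percentage of total RAM
    systemctl restart zramswap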

Secondly, I wanted to address their statement that “The OS will write some GB logs per day.” I could move my logs into memory and avoid most of those writes. log2ram has a very easy-to-install Debian package, and since my install was brand new it was trivial to fit my logs into 50MB of RAM.
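The setup was roughly the following; log2ram is distributed via the maintainer’s own Debian repository, so the exact apt source line comes from its README, and the 50M value is just what I settled on:

    # after adding the log2ram apt repository per its README
    apt update && apt install log2ram
    # cap the RAM-backed /var/log at 50MB
    sed -i 's/^SIZE=.*/SIZE=50M/' /etc/log2ram.conf
    reboot   # log2ram takes over /var/log on the next boot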

ZFS and Samba

At this point, I needed to figure out how to add my ZFS disks and manage them. I felt a little silly installing a “whole OS” to effectively run Samba and/or NFS, so I decided to skip a container and just install things on the host. Luckily, there was an awesome guide on the Level1Techs forum that agreed with me. The high-level important parts:

  1. ls /dev/disk/by-id to get stable names for each of the disks
  2. zpool create -f -o ashift=12 -m <mount> <pool> mirror <ids> to actually create the mirrored pools
  3. zfs set compression=on storage to enable compression
  4. Install samba and configure it properly (a minimal share is sketched below)
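For that last step, a minimal Samba share is enough to start with. This is just a sketch, assuming the pool is mounted at /storage and access is limited to a group called nas (both placeholders, not my actual names):

    # /etc/samba/smb.conf (share section only)
    [storage]
       path = /storage
       valid users = @nas
       writable = yes
       create mask = 0664
       directory mask = 0775

After adding the share, each user still needs a Samba password (smbpasswd -a <user>) and smbd needs a restart.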

I originally missed setting ashift=12, which controls the pool’s assumed sector size. While most drives (including my ancient WD Greens) had moved to Advanced Format block sizes by 2011, many still report a logical block size of 512 bytes (aka 512e) for backwards compatibility. Generally speaking, ashift=12 (2^12=4096) instead of ashift=9 (2^9=512) is recommended for most hard disk drives. Unfortunately, since ashift can’t be changed after pool creation, this meant I had to destroy my pool and recreate it with the correct parameters. Luckily, I hadn’t written anything beyond a test file yet.
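For anyone checking their own drives, the reported sector sizes are easy to inspect; a 512e drive will show a 512-byte logical sector but a 4096-byte physical one:

    lsblk -o NAME,MODEL,LOG-SEC,PHY-SEC
    # or per drive:
    smartctl -i /dev/sda | grep -i 'sector size'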

At this point I had a working NAS that I could plug into the back of my router, use directly, and connect to from other devices.

While setting up my first container I realized that I didn’t have permission to write into my share from the container unless I made it wide open (chmod 777). Since I didn’t know how many services were going to use this folder, I decided to lock down access instead. I created a new group, added my user to it, and changed the group ownership of my share. Afterward, I followed method 2 in this guide to map that group into my container, then added users inside the container to that same group, which allowed them to write into the directory on the host.
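The end result in the container’s config looks roughly like the following. This is a sketch assuming the shared group has GID 110 on both the host and in the container; the GID and container ID are placeholders, not my actual values:

    # /etc/pve/lxc/<ctid>.conf
    # keep the default uid mapping, but punch a hole in the gid mapping for GID 110
    lxc.idmap: u 0 100000 65536
    lxc.idmap: g 0 100000 110
    lxc.idmap: g 110 110 1
    lxc.idmap: g 111 100111 65425

    # /etc/subgid on the host must also allow root to map that GID
    root:110:1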

ZFS Backups

At first I just ran rsync to copy all files from my ZFS storage pool to my backup pool. This was fine, but pretty slow (many minutes, though less than an hour) even when there weren’t any changes, so I only ran it weekly.
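The job itself was nothing fancy; something along these lines, with the pool mountpoints as placeholders:

    # weekly crontab entry: mirror the storage pool onto the backup pool
    0 3 * * 0  rsync -a --delete /storage/ /backup/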

Once I bought new hard drives I tried to leverage ZFS itself instead. I found that sanoid and syncoid were the most recommended way to handle ZFS backups. sanoid maintains a set of historical copies of your data via ZFS snapshots while syncoid allows you to copy those snapshots to another pool.

For ease, I just installed the versions that are in the official Debian repositories instead of trying the latest and greatest. This generally means I have to double check whether the package is being upgraded when I do Proxmox updates.

I used a very simple configuration that kept 3 daily snapshots with auto-pruning, then added a cronjob to sync between pools daily. While it still takes a few minutes when there are changes, most days it only takes a few seconds.
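The whole setup fits in a handful of lines. A sketch of what such a configuration looks like, assuming pools named storage and backup (placeholder names again):

    # /etc/sanoid/sanoid.conf
    [storage]
        use_template = backups
        recursive = yes

    [template_backups]
        daily = 3
        hourly = 0
        monthly = 0
        yearly = 0
        autosnap = yes
        autoprune = yes

    # crontab entry: replicate the snapshots to the backup pool once a day
    30 4 * * *  syncoid --recursive storage backup/storage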

Services

I used Proxmox Helper Scripts to help speed up installing some of these.

iCloud Photos Backup

One of the first things I wanted to back up was my collection of photos from iCloud.

First Attempt

I found a Docker image based on Alpine that wraps a Python library. I don’t particularly like Alpine, especially for Python, so I opted to install the latest LTS version of Ubuntu (22.04) instead. Unfortunately, its system version of Python was 3.10, which had errors with the latest version of the Python library. Luckily, the relevant issue led me to a fork that did solve the problem.

I managed to log into iCloud (even with 2FA) and download all of my photos. I added this to a cronjob and all worked well for about 3 months, until an iCloud 2FA request randomly popped up on all my devices. Once I realized it was just my credentials expiring, my heart stopped racing. However, when I logged into the box I ended up in a loop attempting to re-authenticate. At that point I decided to try something else.

icloudpy would always be playing a game of catch up with Apple since it doesn’t rely on public APIs. This undercut my desire to have this be fully automated and reliable.

Second Attempt

My next attempt was a pretty clear violation of Apple’s ToS, so I did it only because I wanted to see if it was even feasible.

It turns out you can install MacOS in a KVM-based VM. I specifically followed this guide for installing Monterey, because Ventura requires AVX2 instructions, which my Jasper Lake CPU does not have.

Once installed, it was simple enough to mount my share over Samba and use osxphotos to export the photos from my Photos library. A huge downside of this approach is that I end up with 2 copies (one inside the VM and one outside). But it is extremely reliable, since it is just MacOS (albeit a Hackintosh), and it seemed to just work.
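The export step itself is basically a one-liner with osxphotos; something like the following, with the mounted share path as a placeholder (the --update flag keeps reruns incremental):

    # export the Photos library to the mounted NAS share, skipping photos already exported
    osxphotos export /Volumes/storage/photos --update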

Third Attempt (Current)

My third attempt was very similar to the second - I just used a Windows VM instead of MacOS, installed iCloud for Windows, and then copied the files out the way Apple suggests.

I installed Windows 11 in an appropriately sized VM and used winutil tweaks to quickly de-bloat it. I probably could have used tiny11builder instead, but I didn’t want to futz with making my own ISO.

Afterwards, I installed iCloud for Windows, enabled both iCloud Photos and iCloud Drive, and added a simple batch file that uses xcopy to copy the files out daily via Task Scheduler. This is the approach I’ve stuck with.
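The batch file is about as simple as it sounds; roughly the following, with both paths as placeholders (/d only copies files newer than what is already at the destination):

    rem copy anything new from the local iCloud folders out to the mapped NAS share
    xcopy "C:\Users\me\Pictures\iCloud Photos" "Z:\icloud\photos" /d /e /i /y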

Jellyfin

I have a very small collection of media since I’ve mostly deleted or discarded stuff I don’t watch over the years. What little I still have either has sentimental value or was completely forgotten.

But I wanted to see how well this CPU could handle streaming videos in 4K to understand its capabilities. I chose Jellyfin primarily because it is fully free and FOSS - no subscriptions like Emby or Plex.

Jellyfin has pretty straightforward instructions for both container (aka Docker) installs and Linux installs. I initially tried to use a helper script to speed this up. However, I couldn’t get hardware acceleration to work, despite the fact that the helper script had created a privileged LXC container: a single transcoded stream would immediately push the CPU to 100%.

It turns out I had a few different issues:

  • Jasper Lake processors require low-power encoding to be enabled, which in turn needs specific Linux firmware installed and configured properly
  • The script defaulted to Ubuntu 20.04, the previous LTS, but that release does not ship firmware recent enough to actually support Jasper Lake processors
  • The latest LTS (22.04) does have modern enough firmware, but its kernel (5.15) has issues preventing the use of low-power encoding. Unfortunately, that is also the kernel Proxmox 7 is based on.

At this point, I thought I could just use Debian Bookworm (currently on a 5.10 kernel) to avoid the problem. I tore down the container and installed from scratch on Debian without the script. Only after I’d done this did I check what kernel my container was using and actually internalize (for the first time) how LXC works: it shares the kernel with the host. No amount of futzing with containers would change that; I needed a fix at the host level.

Luckily, Proxmox offered an opt-in kernel upgrade to 6.2 (the base of the upcoming Ubuntu 23.04). After this, I could verify that GuC/HuC were working. Then I followed a combination of two guides to properly mount the iGPU and map the render group between my host and container; the first guide got me 99% of the way there, and the second filled in the rest. That way I could avoid using a privileged container. I also added the jellyfin user inside the container to a few groups - render, video, and input - so it had access to the device.
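For posterity, the host and container ends of that setup looked roughly like the following. This is a sketch rather than my exact config: the container ID is a placeholder and the render node is assumed to be /dev/dri/renderD128:

    # on the Proxmox host: have i915 load the GuC/HuC firmware (3 = load both), then reboot
    echo "options i915 enable_guc=3" > /etc/modprobe.d/i915.conf
    update-initramfs -u
    # after the reboot, confirm the firmware actually loaded
    dmesg | grep -i -E 'guc|huc'

    # /etc/pve/lxc/<ctid>.conf: pass the render node into the unprivileged container
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file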

While I was in the LXC config, I also mapped through the same NAS share permissions so the container could read and write my storage share, granting it access to the videos.
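Bind-mounting the share into the container is a single extra line in that same config file, with both paths here as placeholders:

    # /etc/pve/lxc/<ctid>.conf: expose the host's media directory inside the container
    mp0: /storage/videos,mp=/mnt/videos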

At this point, I was able to add a library in Jellyfin and play back Big Buck Bunny on another device without errors. To really make sure it was working, I used a few 8k videos from this random folder I found. I was able to run multiple simultaneous 8k HEVC to 4k H265 streams and barely stress the CPU.

The sad part, though, is that after all this work I hardly ever use my Jellyfin container, and I tend to just leave it spun down. Part of the issue is that most of the media I do consume I can stream directly. Another issue is that there aren’t great client options on Apple TV yet (as of Nov 2022). It sounds like Swiftfin might be coming to the App Store soon (it’s in TestFlight), and given my current lack of content I can’t justify paying a subscription for Infuse. It’s definitely a bit of a chicken-and-egg problem.

If I ever did acquire more content, I’d likely want to route my metadata fetching via Tor or a VPN.

Kavita

Similar to the Jellyfin situation, I used to have a very large collection of manga and comics I’d acquired over the years. As a kid, I used to download a lot of manga (and then go buy the originals I couldn’t read from Kinokuniya); some of my formative internet habits came from hanging around forums, IRC channels, and newsgroups. But as it became more convenient to just “stream” the content from online providers, I deleted most of it. Now I kind of regret that, since I can’t always find the series I want to re-read when I want to read them.

I picked Kavita (over Komga) primarily because it claims to be manga-focused while the latter seems to be more comic-focused.

This install was extremely straightforward - I used the newly added helper script to install and mapped a folder from my share into the container so the app could read it. Everything just worked.

If I’d built up a real library of content, I’d probably want to use something like komf, Manga Manager, or Manga Tagger to fetch metadata. But since I never got around to that, this container also tends to be off except for when I update it.