Nextcloud setup
April 06, 2023
Overview
For quite a while now I’ve wanted to migrate away from iCloud for my file storage. This was one of the main things keeping me from using Linux on my main desktop and laptop computers. I finally decided to give Nextcloud another shot, and it has been working great so far. I’m using the apps on my iPhone and iPad as well without any major issues. In this post, I will go over setting up the Nextcloud AIO package on a dedicated VM in Proxmox. I will cover configuration in Proxmox, pfSense, and Caddy (reverse proxy).
VM Setup
I have a single server running Proxmox to manage my VMs. For Nextcloud, I installed a second network adapter and wired it to a dedicated interface on my pfSense box that is used only for externally available machines. I won’t go into detail on adding a new network interface in Proxmox, but it is fairly straightforward. The VM is a Fedora Server install. I allotted 50 GB for the main disk. On a separate SSD, I created a 600 GB disk for the VM that will be used for all data storage. This second disk is not included in Proxmox backups, as I will be backing up that data with a different method. During install, ensure the main disk has an msdos label. For formatting, I chose to use a 1 GB EXT4 partition for /boot. I then created a BTRFS volume labeled FEDORA with the subvolume @ for the root filesystem and @home for /home.
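I did this partitioning through the Fedora installer, but for reference, a roughly equivalent manual layout would look something like the sketch below (the device names are assumptions; adjust for your system):
# assumes /dev/vda1 is the 1 GB boot partition and /dev/vda2 is the rest of the disk
mkfs.ext4 -L BOOT /dev/vda1
mkfs.btrfs -L FEDORA /dev/vda2
mount /dev/vda2 /mnt
btrfs subvolume create /mnt/@        # mounted at /
btrfs subvolume create /mnt/@home    # mounted at /home
umount /mnt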
Once the initial OS install is finished, there are a few other items to take care of before beginning the Nextcloud install. The hostname can be set with # hostnamectl set-hostname --static name.domain.com. Since this is in Proxmox, it will be helpful to have the qemu guest agent installed. This can be done with # dnf install qemu-guest-agent and # systemctl enable --now qemu-guest-agent. Fedora Server comes with Cockpit installed and enabled. I find it very useful for simple admin tasks as well as visually checking on resource usage, services, etc. It can also be used to easily add allowed ports to the firewall. For Nextcloud, we’ll want to go ahead and allow ports 80, 443, 8080, and 8443 on the VM’s OS firewall.
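If you’d rather open the ports from the command line instead of Cockpit, firewalld can do it directly; something like this should work:
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --reload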
Lastly, we’ll need to set up the disk that will be used for Nextcloud data. I chose to format it as EXT4 and mount it at /mnt/nc_data. I believe it is also a good idea to make that mount point immutable before actually mounting the disk. This can be done with # chattr +i /mnt/nc_data and prevents anything from being written there if the disk fails to mount for some reason. Be sure to edit /etc/fstab to auto-mount the disk at boot. Once the disk is mounted, change the ownership of the mount point to the primary user on the system.
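Put together, the data disk setup looks roughly like this (the device name, filesystem label, and username are placeholders; adjust to match your system):
sudo mkfs.ext4 -L NC_DATA /dev/vdb                # data disk, assumed to show up as /dev/vdb
sudo mkdir /mnt/nc_data
sudo chattr +i /mnt/nc_data                       # nothing can be written if the mount is missing
echo 'LABEL=NC_DATA  /mnt/nc_data  ext4  defaults  0 2' | sudo tee -a /etc/fstab
sudo mount /mnt/nc_data
sudo chown john:john /mnt/nc_data                 # primary user on the system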
pfSense Setup
As I mentioned earlier, I have a dedicated interface on pfSense just for devices that need to be accessed externally. This allows me to tightly control what can communicate from this interface (DMZ) to anything else on the network. I create all my static IP mappings in pfSense. This is fairly easy to do: go to your current DHCP leases, find the hostname you set earlier, then click the button at the end of the row to create a static mapping. For the DMZ interface as a whole, I allow access to port 53 (DNS) on the pfSense box itself and then allow all traffic that is not destined for any local subnets (i.e., destined for the internet). If specific machines on the DMZ need access to specific ports on the local network (maybe for backup), those holes can be punched in the pfSense firewall on an individual machine basis.
I also have to allow port 443 (HTTPS) through the WAN interface to a particular server on the DMZ. I proxy web traffic for my domains through Cloudflare to add some additional security. That being the case, I only want to allow traffic coming in on port 443 that originates from Cloudflare-owned servers. Cloudflare publishes a list of its IP ranges, and it is easy to add those as an alias in pfSense for the allowed source. This can create some issues during the initial Nextcloud/Caddy setup, but I’ll explain how to avoid that in a bit.
One final thing to mention: if you happen to have a VPN server running on pfSense to access your local network when you’re away, you may want to add the DMZ subnet to that VPN server config (if the “Redirect IPv4 Gateway” option is checked, everything is already being forced through the VPN, so this isn’t necessary). This will let you reach devices on the DMZ subnet when you’re connected through the VPN.
Nextcloud install
The Nextcloud AIO setup is all run with Docker. The Docker site has instructions on how to add the repo and install. Once installed, be sure to enable it at boot with sudo systemctl enable docker.service and sudo systemctl enable containerd.service.
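For reference, on Fedora that process looks roughly like this (a sketch based on Docker’s Fedora instructions at the time; double-check their docs for the current repo URL and package names):
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable --now docker.service containerd.service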
For the actual Nextcloud AIO install and setup, detailed information can be found here. The Docker incantation I used is below. It has a few options set for a different data directory as well as a reverse proxy, and I added the Apache IP binding since I am running the reverse proxy on the same machine.
sudo docker run \
  --detach \
  --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 8080:8080 \
  -e APACHE_PORT=11000 \
  -e APACHE_IP_BINDING=127.0.0.1 \
  -e NEXTCLOUD_DATADIR="/mnt/nc_data" \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
Caddy and Cloudflare
I had to install Caddy as a static binary because I’m using a third-party plugin to interface with Cloudflare. Caddy has excellent instructions for this. One thing to note on distributions with SELinux: be sure to copy the binary to /usr/bin instead of moving it, so it inherits the correct SELinux context. If you don’t need a custom build of Caddy, the distribution-specific packages are definitely the way to go. Be sure to also set up Caddy to run as a service. Once installed, create your Caddyfile at /etc/caddy/Caddyfile per the instructions for Nextcloud AIO. Once this is done, Caddy can be started with # systemctl enable --now caddy.service.
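The Cloudflare plugin is what gets around the firewall restriction mentioned earlier: it lets Caddy solve the ACME DNS challenge through the Cloudflare API, so certificates can be issued even though inbound port 443 is locked down to Cloudflare’s IP ranges. A minimal Caddyfile along those lines might look something like this (the domain and the API token environment variable are placeholders; the proxy target matches the APACHE_PORT set earlier):
my.domain.com {
    reverse_proxy localhost:11000
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
}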
Nextcloud configuration
At this point, the Nextcloud AIO interface should be available at https://my.domain:8080. It will walk you through the initial password setup. For my setup, I set the timezone and enabled all AIO add-ons except for Talk. As for apps in Nextcloud, I use brute force protection and TOTP, and I enable TOTP for admin as well as all users. I chose to disable the Photos app because you can’t specify a folder for it to use; it seems to just pull photos from any folder you sync with Nextcloud.
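Apps can be toggled from the web interface, but the same thing can be done with occ inside the Nextcloud container. For example, disabling the Photos app from the VM’s shell would look roughly like this (I’m assuming the app id here):
sudo docker exec --user www-data -it nextcloud-aio-nextcloud php occ app:disable photos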
I did notice the weather on the dashboard only showed the temperature in Celsius. This seems to be a bug. If you have the locale set to English (US), you can change it to something else and then immediately change it back to English (US). The temperature should then show in Fahrenheit.
To enable email notifications, I set up Google’s SMTP server under Settings -> Administration -> Basic Settings.
If you want to edit the retention settings for trash and file versions, make sure to stop the containers and then edit the config.php file in /var/lib/docker/volumes/nextcloud_aio_nextcloud/_data/config. The changes I made are:
'trashbin_retention_obligation' => 'auto, 30',
'versions_retention_obligation' => 'auto, 90'
Data backup
I have scheduled backups for all of my Proxmox VMs. This covers the boot drive for my Nextcloud VM but does not include the separate drive I added for data storage. To take care of that, I have a systemd timer set up that executes a script to sync files to my NAS with rclone.
My NAS exports several NFS shares. One of those is mounted on my Nextcloud VM at /mnt/nc_backup. To ensure no data is accidentally written when the NFS share isn’t mounted, use # chattr +i /mnt/nc_backup before initially mounting the share. My rclone script runs as root and syncs just the user’s data files to the NFS backup folder; it does not sync the file versions that Nextcloud creates for its own versioning. Since rclone runs as root, the NFS server’s root squashing means the files on the NAS end up owned by the user “nobody”; I prefer that behavior to turning root squashing off in my NFS export config. Versioning of the backup is handled via BTRFS snapshots of the NAS subvolume. The rclone script also creates a log file that gets rotated weekly.
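For reference, the NFS mount setup mirrors the data disk: create the mount point, make it immutable, and add an fstab entry (the NAS hostname and export path below are placeholders for my setup):
sudo mkdir /mnt/nc_backup
sudo chattr +i /mnt/nc_backup
echo 'nas.domain.com:/export/nc_backup  /mnt/nc_backup  nfs  defaults,_netdev  0 0' | sudo tee -a /etc/fstab
sudo mount /mnt/nc_backup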
The script, service, and timer files I’m using are provided below. For the email notification to work, Postfix should be installed and set up as described in my NAS post. I placed the script in /usr/local/bin and the service and timer files in /etc/systemd/system.
rclone-nc-copy.sh
#!/bin/bash
#
# must run as root
#
day=$(date +%u)
# rotate the log file every Sunday
if [ "$day" -eq 7 ]
then
    rm -f /var/log/rclone-nc-copy.log.1
    mv /var/log/rclone-nc-copy.log /var/log/rclone-nc-copy.log.1
fi
# backup nextcloud files to the nas and email on failure
if ! rclone sync -v --links --log-file=/var/log/rclone-nc-copy.log /mnt/nc_data/john/files /mnt/nc_backup/john/files
then
    mail -Ssendwait -s "RCLONE ERROR" [email protected] <<< 'rclone error syncing nextcloud data to nas'
fi
rclone-nc-copy.service
[Unit]
Description=Backup nextcloud files
[Service]
Type=oneshot
ExecStart=/usr/local/bin/rclone-nc-copy.sh
rclone-nc-copy.timer
[Unit]
Description=Backup nextcloud files
[Timer]
OnCalendar=*-*-* 04:00:00
[Install]
WantedBy=timers.target
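After placing the files, the script needs to be executable and the timer enabled, roughly like so:
sudo chmod +x /usr/local/bin/rclone-nc-copy.sh
sudo systemctl daemon-reload
sudo systemctl enable --now rclone-nc-copy.timer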
Misc
I find the AppImage version of the desktop client works best for me on Linux. I use the appimaged service to manage AppImage files. GNOME Tweaks can be used to set the application to start automatically at login.
I like to sync my documents folder separately from the rest of my Nextcloud files. This is fairly straightforward on the desktop: when you first set up the Nextcloud client, just deselect the Documents folder for syncing. A new folder sync connection can then be added for just the Documents folder and synced with your existing folder at ~/Documents.
On one occasion, I received a notice stating “Update needed” while trying to log in to the web interface. The Nextcloud AIO documentation mentions this as something that can happen occasionally. The fix was to run sudo docker exec --user www-data -it nextcloud-aio-nextcloud php occ upgrade in the VM and then restart all containers via the AIO interface.
Conclusion
So far, Nextcloud has been much smoother this time around. There have been a few bumps in the road, but nothing major. Being able to sync all of my files makes working with Linux on my daily desktop and laptop much easier. I definitely prefer Linux to macOS, and I find myself using it much more often these days.