6.9.0
Version 6.9.0 2021-02-27
Summary of New Features
Multiple Pools
This feature permits you to define up to 35 named pools, of up to 30 storage devices per pool. Pools are created and managed via the Main page.
- Note: A pre-6.9.0 cache disk/pool is now simply a pool named "cache". When you upgrade a server which has a cache disk/pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and the cache device assignment settings are moved out of config/disk.cfg and into a new file, config/pools/cache.cfg. If you later revert to a pre-6.9.0 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache. As long as you reassign the correct devices, data should remain intact.
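On a running server the USB flash device is mounted at /boot, so after the upgrade you can inspect the migrated files from the console, for example (paths per the note above):

ls -l /boot/config/disk.cfg.bak    # backup of the pre-6.9.0 disk config
ls -l /boot/config/pools/cache.cfg # new per-pool config file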
When you create a user share or edit an existing user share, you can specify which pool should be associated with that share. The assigned pool functions identically to the current cache pool operation.
Something to be aware of: when a directory listing is obtained for a share, the Unraid array disk volumes and all pools which contain that share are merged in this order:
- pool assigned to the share
- disk1
- ...
- disk28
- all the other pools, in strverscmp() order

Note that strverscmp() is version-aware: pools named pool1, pool2, pool10 merge in that natural order, rather than lexically (pool1, pool10, pool2).
A single-device pool may be formatted with either xfs, btrfs, or (deprecated) reiserfs. A multiple-device pool may only be formatted with btrfs. A future release will include support for multiple "Unraid array" pools, as well as a number of other pool types.
- Note: Something else to be aware of: let's say you have a 2-device btrfs pool. This is what btrfs calls "raid1", and what most people would understand to be "mirrored disks". That is mostly true, in that the same data exists on both disks, but not necessarily at the block level. Now let's say you create another pool, and what you do is un-assign one of the devices from the existing 2-device btrfs pool and assign it to this new pool. You now have two single-device btrfs pools. Upon array Start a user might understandably assume there are now two pools with exactly the same data. However, this is not the case: when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run wipefs on that device so that upon mount it will not be included in the old pool. This of course effectively deletes all the data on the moved device.
Additional btrfs balance options
Multiple-device pools are still created using the btrfs raid1 profile by default. If you have 3 or more devices in a pool you may now rebalance to the raid1c3 profile (3 copies of data on separate devices). If you have 4 or more devices in a pool you may now rebalance to raid1c4 (4 copies of data on separate devices). We also modified the raid6 balance operation to set metadata to raid1c3 (previously raid1).
However, we have noticed that applying one of these balance filters to a completely empty volume leaves some data extents with the previous profile. The solution is to simply run the same balance again. We consider this to be a btrfs bug and if no solution is forthcoming we'll add the second balance to the code by default. For now, it's left as-is.
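For reference, a roughly equivalent manual balance from the console might look like the following (a sketch assuming a pool mounted at /mnt/cache; in practice these balances are run from the pool device's Balance section in the webGUI). Per the note above, a second run may be needed on a previously empty volume:

btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/cache   # convert data and metadata to raid1c3
btrfs filesystem df /mnt/cache                                       # verify the resulting profiles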
SSD 1 MiB Partition Alignment
We have added another partition layout where the start of partition 1 is aligned on a 1 MiB boundary. That is, for devices that present 512-byte sectors, partition 1 will start in sector 2048; for devices with 4096-byte sectors, in sector 256. This partition type is now used when formatting all unformatted non-rotational storage (only).
It is not clear what benefit 1 MiB alignment offers. For some SSD devices you won't see any difference; for others, perhaps a big performance difference. LimeTech does not recommend re-partitioning an existing SSD device unless you have a compelling reason to do so (or your OCD just won't let it be).
To re-partition an SSD it is necessary to first wipe out any existing partition structure on the device. Of course, this will erase all data on the device. Probably the easiest way to accomplish this is, with the array Stopped, to identify the device(s) to be erased and use the 'blkdiscard' command:
blkdiscard /dev/xxx # for example /dev/sdb or /dev/nvme0n1 etc
WARNING: be sure you type the correct device identifier because all data will be lost on that device!
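To double-check which identifier belongs to which physical device before erasing anything, a standard listing such as this can help:

lsblk -o NAME,MODEL,SERIAL,SIZE,TRAN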
Upon next array Start the device will appear Unformatted, and since there is now no partition structure, Unraid OS will create one.
- Note: If you want to re-partition your SSD-based cache disk/pool and preserve data, please consider posting on the Unraid Community Forum for assistance with your particular configuration. Refer also to this post in the Prerelease board.
SMART handling and Storage Threshold Warnings
There is a configuration file named config/smart-one.cfg which stores information related to SMART, for example the controller type to be passed to smartctl for purposes of fetching SMART information. Also stored in that file are volume warning and critical free-space thresholds. Starting with this release, these configuration settings are handled differently.
In the case of SMART configuration, settings are saved by device-ID instead of by slot-ID. This permits us to manage SMART for unassigned devices, and it permits SMART configuration to "follow the device" no matter which slot it's assigned to. The implication, however, is that you must manually reconfigure SMART settings for all devices which vary from the default.
The volume warning and critical space threshold settings have been moved out of this configuration file and are now saved in config/disk.cfg (for the Unraid array) and in the pool configuration files for each pool. The implication is that you must manually reconfigure these settings for all volumes which vary from the default.
After upgrading you may receive a notification such as: Notice [TOWER] - Disk 1 returned to normal utilization level. As described above, all of your SMART configuration settings were reset to defaults. Visit Settings -> Disk Settings to review the defaults, and override for individual drives on Main -> Disk X -> Settings.
Better Module/Third Party Driver Support
Recall that we distribute Linux modules and firmware in separate squashfs files which are read-only mounted at /lib/modules and /lib/firmware. We now set up an overlayfs on each of these mount points, making it possible to install 3rd-party modules using the plugin system, provided those modules are built against the currently running kernel version. In addition, we define a new directory on the USB flash boot device called config/modprobe.d, the contents of which are copied to /etc/modprobe.d early in the boot sequence, before the Linux kernel loads any modules.
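As a sketch of what the overlayfs makes possible from a plugin or the console (mydriver.ko is a hypothetical module built against the running kernel):

mkdir -p /lib/modules/$(uname -r)/extra          # the overlay makes this path writable
cp mydriver.ko /lib/modules/$(uname -r)/extra/   # install the module
depmod -a                                        # rebuild the module dependency map
modprobe mydriver                                # load the new module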
This technique is used to install the Nvidia driver (see below) and may be used by Community Developers to provide an easier way to add modules not included in base Unraid OS: no need to build custom bzimage, bzmodules, bzfirmware, and bzroot files!
Passing Parameters to Modules
conf files placed in config/modprobe.d may be used to specify options and pass arguments to modules.
As an example: at present we do not have UI support for specifying which network interface should be "primary" in a bond; the bonding driver simply selects the first member by default. In some configurations it may be useful to specify an explicit preferred interface, for example if you have a bond with a 1 Gbit/s (eth0) and a 10 Gbit/s (eth1) interface.
Since setting up the bond involves loading the bonding kernel module, you can specify which interface to set as primary using this method: create a file on the flash named config/modprobe.d/bonding.conf containing this single line, and then reboot:

options bonding primary=eth1
After reboot you can check if it worked by typing this command:
cat /proc/net/bonding/bond0
where you should see the selected interface show up as "Primary Slave".
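The relevant portion of that output will look something like this (exact contents vary with bonding mode and interface state):

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth1 (primary_reselect always)
Currently Active Slave: eth1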
Nvidia Driver
The goal of creating squashfs overlays mounted at /lib/modules and /lib/firmware, along with providing a mechanism for defining custom module parameters, is to provide a way of integrating third-party drivers into Unraid OS without requiring custom builds of the bz* files. One of the most popular third-party drivers requested for Unraid OS is Nvidia's GPU Linux driver, which is required for transcoding capability in Docker containers. Providing this driver as a plugin for Unraid OS used to require a lot of work: setting up a dev environment, compiling the driver and tools, unpacking bzmodules, adding the driver, creating a new bzmodules, and finally replacing it in the USB flash root directory. This work was accomplished by Community members @chbmb, @bass_rock, and others. Building on their work, along with member @ich777, we now create separate Nvidia driver packages built against each new Unraid OS release that uses a new kernel, but not directly included in the base bz* distribution.
A JSON file describing the driver version(s) supported with each kernel can be downloaded here:
https://s3.amazonaws.com/dnld.lime-technology.com/drivers/releases.json
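For example, to inspect it from the console:

wget -qO- https://s3.amazonaws.com/dnld.lime-technology.com/drivers/releases.json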
Each driver package includes the Nvidia Linux GPU driver along with a set of container tools. The container tools include:
- nvidia-container-runtime
- nvidia-container-toolkit
- libnvidia-container
These tools are useful in facilitating accelerated transcoding in Docker containers. A big Thank You! to Community member @ich777 for help and for providing the tools. @ich777 has also provided a handy plugin to facilitate installing the correct driver.
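As a sketch of how these pieces fit together once the driver and tools are installed (the image name and variable values are illustrative; the plugin documents the exact settings for each container):

docker run --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  some/transcoding-container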
The inclusion of third-party modules into Unraid OS using the plugin system is still a work-in-progress. For example, another candidate would be to replace the Linux in-tree Intel ethernet drivers with Intel's custom Linux drivers.
Docker
It's now possible to select different icons for multiple containers of the same type. This change necessitates a re-download of the icons for all your installed docker applications, so expect a delay when initially loading either the Dashboard or the Docker tab before the containers show up.
We also made some changes to add flexibility in assigning storage for the Docker engine. This is configured using the Settings/Docker Settings/Docker data root setting. This lets you select how to keep the Docker persistent state (image layers):
- In a btrfs-formatted vdisk loopback-mounted at /var/lib/docker. In this case the name of the image file must be 'docker.img'.
- In an xfs-formatted vdisk loopback-mounted at /var/lib/docker. In this case the name of the image file must be 'docker-xfs.img'.
- In a specified directory which is bind-mounted at /var/lib/docker. Further, the file system where this directory is located must either be btrfs or xfs.
Docker will use either the 'btrfs' storage driver in the case of btrfs-formatted vdisk/directory, or the 'overlay2' storage driver in the case of xfs-formatted vdisk/directory.
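Once Docker is running you can confirm which storage driver is in use:

docker info --format '{{.Driver}}'   # prints btrfs or overlay2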
Implemented as follows: first, rc.docker detects the type of file system mounted at /var/lib/docker. We now support either btrfs or xfs, and the docker storage driver is set appropriately.

Next, mount_image is modified to support a loopback file formatted with either btrfs or xfs, depending on the suffix of the loopback file name. If the file name ends with .img, as in docker.img, then we use mkfs.btrfs. If the file name ends with -xfs.img, as in docker-xfs.img, then we use mkfs.xfs.
In addition, we added the ability to bind-mount a directory instead of using a loopback. If the file name does not end with .img, then the code assumes it is the name of a directory (presumably on a share) that is bind-mounted onto /var/lib/docker. For example, if the setting is /mnt/user/system/docker/docker, then we first create, if necessary, the directory /mnt/user/system/docker/docker. If this path is on a user share we then "de-reference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker. For example, if /mnt/user/system/docker/docker is on "disk1", then we would bind-mount /mnt/disk1/system/docker/docker. Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory, but the script does not check this.
Virtualization
We integrated changes to the Tools -> System Devices page made by user @Skitals with refinements by user @ljm42. You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes. This makes it easier to reserve those devices for assignment to VM's. This technique is known as stubbing (because a stub or dummy driver is assigned to the device at boot preventing the real Linux driver from being assigned).
One might wonder: if we can blacklist individual drivers, why do we need to stub devices in order to assign them to VM's? The answer is: blacklisting works too. But if you have multiple devices of the same type, where some need to be passed to a VM and some need to have the host Linux driver installed, then you must use stubbing for the devices to be passed to VM's.
- Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built into Unraid OS 6.9. If you had manually stubbed devices by modifying your Syslinux file, consider switching to the new method as described in the vfio-pci guide.
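For reference, manual stubbing via the Syslinux file typically looked like the kernel parameter below (the vendor:device IDs are illustrative); the new System Devices checkboxes remove the need to maintain this by hand:

append vfio-pci.ids=10de:1c03,10de:10f1 initrd=/bzroot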