Re: [bitfolk] Filesystems/volume management for home servers

Author: Hugo Mills
Date: Fri, 31 May 2019 18:07:45 UTC
To: Gavin Westwood
CC: users
Subject: Re: [bitfolk] Filesystems/volume management for home servers

On Fri, May 31, 2019 at 06:41:55PM +0100, Gavin Westwood wrote:
> With the discussion about RAID 10, that got me back to thinking about a
> better/alternative system to my current RAID1+LVM+EXT4 setup on our
> Linux home server, and I'm looking for advice from other members.
>
> Currently we have 13.6 TB of storage (a lot of which are photos by my
> semi-professional girlfriend, and videos from our wildlife cam which
> produces about 15 - 20GB of videos a day [email me off-list if you
> want the URLs of my fledgling YouTube hedgehog and bird channels]).
>
> There is some amount of file duplication, for instance where I have
> stuck old backups (copied files and folders, not tar/compressed
> archives) on there or photos/videos have been copied to different
> folders (e.g. to categorise), so filesystems with built-in deduplication
> (like I believe BTRFS has) would be nice.  However my main priorities
> are: maintaining data integrity, ease of administration, and really a
> sub-category of that: ease to expand or shrink and reallocate storage as
> required (if necessary - quotas are not required, but crashing due to a
> full disk is to be avoided).
>
> For years I have been looking at BTRFS, but it's never sounded 100%
> production ready to me (although I remember that at least one distro
> made it their default fs).  Andy's mention of Ceph and Stratis was
> something new to me, but I'm not sure if they are a bit much for a
> single server.  I've no experience with ZFS; I think I read about
> some disadvantages that put me off a few years back, though I forget
> what they were now.
>
> Anyway, what do/would you use for this sort of scenario/requirement or
> what are your experiences with suitable filesystems for my
> requirements?  Just to be clear - I want to ensure that a single disk
> failure is very unlikely to result in data loss.  Also, all disks are
> currently the spinning disk type, so any features that take advantage
> of SSDs would be wasted.
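
(On the deduplication question above: btrfs doesn't dedup inline, but
out-of-band dedup works well with userspace tools such as duperemove.
A sketch, with a placeholder path -- expect it to take a while on
terabytes of data:

```shell
# Scan for duplicate extents under the given path and share them
# via the kernel's dedup ioctl. -d actually performs the dedup,
# -r recurses into subdirectories. The path is a placeholder.
duperemove -dr /srv/photos
```

)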


I'd go for btrfs. (Disclaimer: I'm a regular contributor to the
btrfs community, and I've written quite a bit of their documentation).
I've been running a RAID-1 btrfs data store for about 10 years. In
that time, I've hit one bug that required me to restore from
backup. That was 9 years ago, when the FS was about as stable as an MP
in a three-way marginal constituency.

I'm now up to 11TB of data on a 13TB RAID-1, and it's been fine for
years, even over power failures and several disk failures.
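
For reference, setting up (and later growing) a store like that is
straightforward. A sketch with hypothetical device names -- mkfs wipes
whatever is on those devices, so double-check them:

```shell
# Create a two-disk RAID-1 filesystem (data and metadata mirrored).
# Device names here are hypothetical.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/data

# Growing it later happens online: add a device, then rebalance so
# existing data is redistributed across all the disks.
btrfs device add /dev/sdd /mnt/data
btrfs balance start /mnt/data
```

This is also how btrfs covers your "easy to expand" requirement:
disks of different sizes can be mixed, and the balance spreads the
chunks out over whatever is there.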

The primary things you need to ensure are that your hardware is
reliable (no USB storage, and disks and controllers which honour flush
instructions), and that you have backups (because your hardware *will*
fail, eventually).

Scrub regularly; avoid compression, qgroups, and parity RAID for
metadata; and you should be fine.
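
Scrubbing is easy to automate; a sketch, assuming the array is
mounted at /mnt/data:

```shell
# Read every block and verify checksums; on RAID-1, corrupted copies
# are repaired automatically from the good mirror.
btrfs scrub start /mnt/data

# Check progress and any errors found:
btrfs scrub status /mnt/data

# Per-device error counters are worth watching too:
btrfs device stats /mnt/data
```

A monthly cron job running the scrub is a common arrangement.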

Note that RAID-1 is *not* a backup. You still need backups. RAID-1
is an uptime mechanism. RAID-1 will not save you from an accidental
"rm -rf /mnt/my-preciousss/". RAID-1 will not save you from a fire, or
a double disk failure, or a faulty TRIM implementation, or something
writing zeroes to the device, or running mkfs on the wrong device,
or... (yes, we've seen all of these in #btrfs).

If you can't afford backup *and* RAID-1, use half the disks in the
live system with "single" profile, and half the disks in the backup
machine with "single" profile. It's going to be safer than everything
in RAID-1 all attached to one machine.
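
A sketch of that layout, with hypothetical device names (metadata is
still mirrored on each box, which is cheap and worth keeping):

```shell
# Live machine: data unreplicated ("single"), metadata mirrored.
mkfs.btrfs -d single -m raid1 /dev/sda /dev/sdb

# Backup machine: the same layout on its own pair of disks.
mkfs.btrfs -d single -m raid1 /dev/sdc /dev/sdd
```

With the copies split across two machines like this, a fire, a dying
PSU, or a fat-fingered rm on one box can't take out both.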

For backups of a mostly-append data store such as yours, the
cheapest option in the ~4 TB -- ~150 TB range is BD-R. Over 150 TB or
so, it's cheaper to use LTO tapes. I'll need to update my spreadsheet
if you want more precise figures on that.

Hugo.

-- 
Hugo Mills             | Great oxymorons of the world, no. 2:
hugo@... carfax.org.uk | Common Sense
http://carfax.org.uk/  |
PGP: E2AB1DE4          |