
NAS notes @ Debian 8 "jessie"

WARNING: These notes were written in 2016-2017. Commands and usage might be outdated!
An updated version will come the next time I build a new NAS setup on Linux (instead of just upgrading).

I quickly ran these notes through an LLM (2024), asking whether they're still valid; it told me:

Several aspects of these notes need revision for modern systems:  
- Debian 8 "Jessie" is EOL - consider 12 "Bookworm" or newer
- For large arrays (>10TB), RAID6 is strongly recommended over RAID5 due to rebuild times  
- Consider XFS or BTRFS instead of ext4 for modern storage arrays  
- Samba security settings have been updated in newer versions  
- Network tuning parameters may need adjustments for newer kernel versions

Disclaimer: I am not responsible for any loss of data, security breaches, fraud, omissions, errors, misconfigurations, service interruptions, slowdowns, freezes, breakdowns, or diminishments of any kind resulting from, caused by or attributed to the following or any content on this site.

Please back up before you proceed, and don't run any command without knowing what it is for. And PLEEAAASE use a UPS!

You also need to know that the logical names (/dev/sdX) are NOT always the same. They can quickly change during a reboot! Always check their serial numbers before you format them.
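
Two quick ways to map /dev/sdX names to serial numbers before you touch anything (lsblk ships with util-linux, and /dev/disk/by-id is always there):

lsblk -o NAME,SIZE,MODEL,SERIAL   # serial numbers straight from the kernel
ls -l /dev/disk/by-id/            # stable names containing model + serial, pointing at the current /dev/sdX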

SOFTRAID! (mdadm)

apt-get install mdadm parted lshw -y

Use the default answer ("all") when asked.

RAID     Min. devices      Max. device failures   Space available          Speed gain*
RAID0    2                 0                      devicesize*devices       (devices)x read & write
RAID1    2                 devices-1              devicesize               (devices)x read
RAID5    3                 1                      devicesize*(devices-1)   (devices-1)x read
RAID6    4                 2                      devicesize*(devices-2)   (devices-2)x read
RAID10   4 (even number)   devices/2              devicesize*(devices/2)   (devices)x read & (devices/2)x write

*I found it on the internet. A few sites disagree on it, but this is the most common calculation.
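
A quick worked example of the space column, using the 3x6TB RAID5 (and the later 5-disk RAID6) from these notes - just shell arithmetic, nothing authoritative:

echo $(( 6 * (3 - 1) ))   # RAID5, 3x6TB: devicesize*(devices-1) = 12TB usable
echo $(( 6 * (5 - 2) ))   # RAID6, 5x6TB: devicesize*(devices-2) = 18TB usable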

Filesystem   Max. file size   Max. volume size
ext2         2TB              32TB
ext3         2TB              32TB
ext4         16TB             16TB (32bit/DEFAULT) / 1EB ('-O 64bit')
NTFS         256TB            256TB
XFS          8EB              16EB

Quick FAQ:
Q: Durrr, just use hardware RAID!?
A: NO! When the RAID controller dies after 20 years or whatever, just imagine the price of getting an identical one then!!

FIND THE CORRECT DISKS

Don't be like me and change the partition table on the wrong drives.

lshw -class disk

This is also useful when you need to replace a disk, because you can see the serial number.

Create a RAID5

In this (tested) example, we use 3x6TB disks to create a RAID5. PLEASE consider RAID6 if you have 4 or more disks!! And RAID is NOT the same as backup! It will help to some degree, but you NEED a backup! (Hint: Crashplan, Amazon Glacier, etc.)

Since the disks are larger than 2TB, we need to switch the partition table to GPT instead of MBR.

WARNING: THIS WILL DESTROY ALL CONTENT ON THE CHOSEN DISKS!!

lshw -class disk && lsblk #Be sure you choose the correct drives! This can seriously mess everything up if you don't.
parted --script /dev/sda mklabel gpt
parted --script /dev/sdb mklabel gpt
parted --script /dev/sdd mklabel gpt
parted -a optimal /dev/sda mkpart primary 0% 100%
parted -a optimal /dev/sdb mkpart primary 0% 100%
parted -a optimal /dev/sdd mkpart primary 0% 100%
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdd1 # Create RAID5 with these three disks.
cat /proc/mdstat
mkfs -t ext4 -E lazy_itable_init=1 -O 64bit,sparse_super,filetype,resize_inode,dir_index,ext_attr,has_journal,extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize /dev/md0
mkdir /mnt/md0
mount -t ext4 /dev/md0 /mnt/md0
chown -R mathias:mathias /mnt/md0
echo "/dev/md0        /mnt/md0        ext4    defaults        0       0" >> /etc/fstab

Grow it! (Add an extra drive)

We have quickly realized that your porn collection needs more than the 11TB you got from 3x6TB!

Now it's time to expand it with another 6TB disk.

WARNING: THIS WILL DESTROY ALL CONTENT ON THE CHOSEN DISK!!

lshw -class disk && lsblk #FIND AND BE 100% SURE YOU GET THE CORRECT /dev/sXX!!!!!! CHECK USING DISK SERIAL NUMBERS!
parted --script /dev/sde mklabel gpt
parted -a optimal /dev/sde mkpart primary 0% 100%
mdadm --add /dev/md0 /dev/sde1
mdadm --detail /dev/md0 # You can now see that a spare has been added.
mdadm --grow /dev/md0 --raid-devices=4 # Replace 4 with the new total number of disks..
watch -n1 'mdadm --detail /dev/md0 && echo "\n" && cat /proc/mdstat' 
# I recommend waiting until it's done reshaping..
systemctl stop smbd
umount /dev/md0
e2fsck -f /dev/md0 -C 0 # Check for errors; not strictly required if you're in a hurry. 
resize2fs /dev/md0 -p
e2fsck -f /dev/md0 -C 0 # Also not needed.. Feel free to skip this step unless you're paranoid..
mount -t ext4 /dev/md0 /mnt/md0
systemctl start smbd
mdadm --detail /dev/md0 && df -h /mnt/md0 
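
If the reshape crawls along, the md sync speed limits (KB/s per device) can be raised temporarily; a hedged example - the value is just a starting point and it reverts on reboot:

sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max   # current limits
sysctl -w dev.raid.speed_limit_min=100000                  # raise the floor so the reshape isn't throttled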

Make it more secure! (RAID5 to RAID6)

We have quickly realized how important your porn collection is, and have therefore bought another 6TB disk ("device") so the array can survive 2 disk failures.

WARNING: THIS WILL DESTROY ALL CONTENT ON THE CHOSEN DISK!!

lshw -class disk && lsblk #FIND AND BE 100% SURE YOU GET THE CORRECT /dev/sXX!!!!!! CHECK USING DISK SERIAL NUMBERS!
parted --script /dev/sdf mklabel gpt
parted -a optimal /dev/sdf mkpart primary 0% 100%
mdadm --add /dev/md0 /dev/sdf1
mdadm --detail /dev/md0 # You can now see that a spare has been added.
mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/mdadmbackup_md0_raid5
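
The reshape to RAID6 also takes a while; same watch trick as before, plus a quick check that the level change actually registered:

watch -n1 'cat /proc/mdstat'                  # follow the reshape progress
mdadm --detail /dev/md0 | grep "Raid Level"   # should report raid6 once the grow has started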

OH NO WE LOST A DRIVE! SHIT! F*!!!!!! PANIC!!!1

Our huge porn collection seems to be in danger, so we must replace the failed drive with a new one as soon as possible and get the array back into a stable state.

First, we remove the drive(s) from the RAID with this:

mdadm --detail /dev/md0 && echo "\n" && cat /proc/mdstat # Identify the broken drive; it's usually marked with (F) for failed
lshw -class disk # Find the serial number of the broken drive (if it's still online), and remove it from the system. 
mdadm /dev/md0 -r failed # Remove the failed drives; the output will look like: mdadm: hot removed 8:65 from /dev/md0
mdadm /dev/md0 -r detached # OR run this, if the drive didn't fail, but was just detached.
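
If the kernel hasn't flagged the disk as failed yet but SMART says it's dying, you can fail and remove a specific member yourself (the partition name below is only an example):

mdadm /dev/md0 --fail /dev/sdd1     # mark the member as failed
mdadm /dev/md0 --remove /dev/sdd1   # then remove it from the array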

Now it's time to add the new drive(s). WARNING: THIS WILL DESTROY ALL CONTENT ON THE NEW CHOSEN DISK!!

lshw -class disk && lsblk #FIND AND BE 100% SURE YOU GET THE CORRECT /dev/sXX!!!!!! CHECK USING DISK SERIAL NUMBERS!
parted --script /dev/sdg mklabel gpt
parted -a optimal /dev/sdg mkpart primary 0% 100%
mdadm --add /dev/md0 /dev/sdg1
watch -n1 'mdadm --detail /dev/md0 && echo "\n" && cat /proc/mdstat' # Now just sit back, and watch it while it's recovering. This step is not required, but cool to look at.

Watch it build/grow/whatever

Want to see some fancy stats for the rest of the day? Alrighty!

watch -n1 'mdadm --detail /dev/md0 && echo "\n" && cat /proc/mdstat'

Now crank that text font size up to BIG and chill ;)

If that "Failed Devices : 0" changes to 1 or more, it's okay to panic.. BECAUSE YOU NEED TO PANIC AND GET A NEW DRIVE ASAP!!!! When you have the new drive, look at the section just above this one.

Encryptiiionnn

Setup /w RAID

parted --script /dev/sdc mklabel gpt
parted --script /dev/sdf mklabel gpt
parted --script /dev/sdg mklabel gpt
parted -a optimal /dev/sdc mkpart primary 0% 100%
parted -a optimal /dev/sdf mkpart primary 0% 100%
parted -a optimal /dev/sdg mkpart primary 0% 100%
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdf1 /dev/sdg1 
cat /proc/mdstat
cryptsetup -y -v luksFormat /dev/md1 # The encryption setup
cryptsetup luksOpen /dev/md1 md1luks # Run this after each boot
mkfs -t ext4 -E lazy_itable_init=1 -O 64bit,sparse_super,filetype,resize_inode,dir_index,ext_attr,has_journal,extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize /dev/mapper/md1luks # The bigger the array, the longer this takes - have patience.
mkdir /mnt/md1
mount -t ext4 /dev/mapper/md1luks /mnt/md1 # Run this after each boot
chown -R mathias:mathias /mnt/md1
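
Not in the original notes, but since a damaged LUKS header means the data is gone for good, consider backing it up somewhere off the array (the path below is just an example):

cryptsetup luksHeaderBackup /dev/md1 --header-backup-file /root/md1-luks-header.img
chmod 600 /root/md1-luks-header.img   # the header backup is sensitive; keep it locked down and off the NAS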

Setup /wo RAID

cryptsetup -y -v luksFormat /dev/sdc # The encryption setup
cryptsetup luksOpen /dev/sdc porn # Run this after each boot
mkfs -t ext4 -E lazy_itable_init=1 -O 64bit,sparse_super,filetype,resize_inode,dir_index,ext_attr,has_journal,extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize /dev/mapper/porn 
mkdir /mnt/pornmount
mount -t ext4 /dev/mapper/porn /mnt/pornmount # Run this after each boot

Unmount

umount /mnt/pornmount
cryptsetup luksClose porn

Stats

cryptsetup -v status md1luks

My open script

This is the script I run manually after boot. I've disabled samba from starting automatically, so I know when things are down, and to avoid mistakes.

#!/bin/sh
set -e
cryptsetup luksOpen /dev/md1 md1luks
mount -t ext4 /dev/mapper/md1luks /mnt/md1
systemctl start smbd
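
Save it as something like /root/nas-open.sh (the name is just my suggestion) and make it executable; set -e makes it stop at the first failing command, so samba never starts on top of an unmounted directory:

chmod 700 /root/nas-open.sh   # root-only, since it unlocks the encrypted array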

File sharing (samba)

apt-get install samba samba-common libcups2
mv /etc/samba/smb.conf /etc/samba/smb.conf.bak
nano /etc/samba/smb.conf
  • Samba can't read Linux users' passwords, so you need to add one manually with smbpasswd before they can log in.
  • Samba can read and understand Linux groups. I recommend adding "force group" when doing this to make sure everyone will always be able to access the files.
  • Samba follows the file system permissions, but I've seen it grant read access when not configured correctly. So remember to configure "valid users" correctly.
  • Remember to set MTU to 9000 if possible (if you don't know what MTU is, then don't). Also do a speed test on your drives before getting mad at your NIC for only giving you 189Mbyte/sec (@10gbps link).

GLOBAL (required!)

The smb.conf consists of multiple parts (all in the same file). First, we need the [global] section. I have fine-tuned this to be as fast as possible! If you know any other tweak that makes it faster, let me know :)

[global]
workgroup = WORKGROUP
server string = Samba Server %v
netbios name = debian
security = user
map to guest = bad user
dns proxy = no
mangled names = no
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
strict allocate = Yes
read raw = Yes
write raw = Yes
strict locking = No
min receivefile size = 16384
use sendfile = true
aio read size = 16384
aio write size = 16384
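
Whenever you've edited smb.conf, testparm will catch typos and show the effective settings before you restart anything:

testparm -s /etc/samba/smb.conf   # -s skips the "press enter" prompt and dumps the parsed config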

Then you can choose what you want next, based on what you want to do.

Everyone has access, f* security

As long as you don't add any other shares that require a login, the client won't be asked to log in at all.

mkdir /mnt/md0/anonymous
chmod -R 0755 /mnt/md0/anonymous
chown -R nobody:nogroup /mnt/md0/anonymous
[Anonymous]
path = /mnt/md0/anonymous
browsable = yes
writable = yes
guest ok = yes
read only = no

Specific user only

mkdir /mnt/md0/mathias
chmod -R 0700 /mnt/md0/mathias
chown -R mathias:mathias /mnt/md0/mathias
[mathias]
 path = /mnt/md0/mathias
 valid users = mathias
 guest ok = no
 writable = yes
 browsable = yes
 create mask = 0700
 directory mask = 2700

You can add more users under "valid users"; separate them with spaces ("user1 user2 user3"). Just be sure to update chmod/chown accordingly, and use a common group (& "force group".. see below) :)

Specific group

mkdir /mnt/md0/entertainment
chmod -R 0770 /mnt/md0/entertainment
chown -R mediauser:smbgroup /mnt/md0/entertainment
[entertainment]
 path = /mnt/md0/entertainment
 valid users = @smbgroup
 guest ok = no
 writable = yes
 browsable = yes
 force user = mediauser
 force group = smbgroup
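
After adding or changing a share, reload samba and check that it actually shows up (smbclient is a separate package; the user is just the example user from these notes):

systemctl reload smbd                # or restart, if you changed [global]
smbclient -L localhost -U mathias    # lists the shares the server offers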

Add (linux) users to group

First we need to create a group

groupadd smbgroup

Create a user and add it to the group (-G adds a supplementary group; use -g instead if you want smbgroup as the primary group)

useradd -G smbgroup mediauser

Add existing user

usermod -aG smbgroup mathias

Add (linux) users to Samba

Samba can't read the passwords set in Linux, and I also don't recommend having your top secret password floating over the network.

Luckily it's easy to add them using:

smbpasswd -a mathias
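
To double-check which accounts exist in Samba's own password database (separate from /etc/passwd):

pdbedit -L   # one line per Samba user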

File sharing (nfs)

Server

apt-get install nfs-kernel-server portmap
nano /etc/exports # Modify and insert the lines below
# /mnt/md0/nfs/esxi 10.20.40.0/24(rw,sync,no_root_squash,no_subtree_check)
# /mnt/md0/sharedfolder 10.20.30.40(rw,sync,no_root_squash,no_subtree_check)
exportfs -ra
systemctl restart nfs-kernel-server
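
To verify the exports took effect (showmount comes with nfs-common):

exportfs -v               # what is exported right now, and with which options
showmount -e localhost    # the view a client gets when it asks the server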

Client

apt-get install nfs-common
mkdir /mnt/localnfsfolder
nano  /etc/fstab # Modify and insert the lines below - BUT BE CAREFUL, THIS FILE CAN BREAK YOUR SYSTEM!
# 10.20.30.123:/mnt/md0/sharedfolder /mnt/localnfsfolder nfs rw,async,hard,intr,noexec 0 0
mount -a # or: mount /mnt/localnfsfolder

Windows

Get the UID and GID from the folder you wish to mount: ls -n /mnt/md0/sharedfolder

Add the UID and GID to the registry first, so you don't have to restart the Client For NFS service after install.

REG ADD HKLM\Software\Microsoft\ClientForNFS\CurrentVersion\Default /v AnonymousUid /t REG_DWORD /d 1000 
REG ADD HKLM\Software\Microsoft\ClientForNFS\CurrentVersion\Default /v AnonymousGid /t REG_DWORD /d 1001

Turn Windows features on or off > Client for NFS

mount -o anon 10.20.30.123:/mnt/md0/sharedfolder z:

10Gbps network

The following things might help you. I haven't fully tested them yet, but I have all of the following applied.

If you're doing an iperf3 from Windows, and it just seems wrong, remember to do parallel connections. Because Windows is Windows.. (iperf3 -c iphere -P 20)

Find the PCI id (in my case 8086:1528) and turn up the bandwidth on it - I had no speed increase from this:

setpci -v -d 8086:1528 e6.b=2e

Add MTU 9000:

allow-hotplug eth2
iface eth2 inet static
  address 10.11.12.13
  netmask 255.255.255.0
  mtu 9000

Also make sure your switch is not limiting the MTU. I needed to set mine to an L2 MTU of 10k before I could do 9k on the hosts. If you want to quickly switch between MTUs, ifconfig eth0 mtu 9000 up is pretty useful.
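
A quick way to check that jumbo frames actually work end to end (Linux ping; 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000, and the address is just the example one above):

ping -M do -s 8972 -c 4 10.11.12.13   # -M do forbids fragmentation; failures mean something in the path is still at MTU 1500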

Apply some fancy things to /etc/sysctl.conf if MTU 9000 doesn't achieve the results you want (reload with sysctl -p):

net.ipv4.tcp_reordering = 16
net.ipv4.tcp_fack = 0
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_allowed_congestion_control = htcp reno highspeed scalable lp
net.ipv4.tcp_congestion_control = highspeed
net.ipv4.tcp_low_latency = 1
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_adv_win_scale = 4
net.ipv4.tcp_mem = 16777216 33554432 67108864
net.ipv4.tcp_rmem = 8388608 16777216 33554432
net.ipv4.tcp_wmem = 8388608 16777216 33554432
net.ipv4.udp_mem = 2097152 8388608 16777216
net.ipv4.udp_rmem_min = 262144
net.ipv4.udp_wmem_min = 262144
net.core.rmem_default = 4194304
net.core.wmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 33554432
net.core.optmem_max = 4194304
net.core.somaxconn = 8192
net.core.netdev_max_backlog = 3000000
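
After sysctl -p it's worth confirming the settings actually took, especially the congestion control lines, which depend on the matching kernel modules being available:

sysctl net.ipv4.tcp_congestion_control                     # should print "highspeed" if it took
cat /proc/sys/net/ipv4/tcp_available_congestion_control    # the algorithms the running kernel can offer right now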