After 4 years, I have also accumulated 3 x 2TB external SATA HDDs (2 of which are from someone who has moved on to cloud storage, and 1 from my Alienware Aurora R5).
I have not given up on my pi, and to satisfy my tinkering itch, I thought to myself: "Why not build a NAS?" And so I did. What follows are the steps I pieced together from information on the internet (I am not the first to do this). This link shows how the author did it for 3 x 3TB SATA HDDs.
sudo apt-get update
sudo apt-get install mdadm
3 disks is the minimum required count for a RAID5. Disks of different sizes will work, but mdadm only uses the capacity of the smallest member on each disk; fortunately, all of mine are 2TB.
To identify the disks, one can do
cat /proc/partitions
or
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
My 3 disks were identified as sda, sdb, and sdc. Preparing them for RAID use requires some partitioning. Here's the sequence of commands (entered at the interactive parted prompt):
sudo parted
select /dev/sda
mklabel gpt
mkpart primary ext4 1MiB 100%
select /dev/sdb
mklabel gpt
mkpart primary ext4 1MiB 100%
select /dev/sdc
mklabel gpt
mkpart primary ext4 1MiB 100%
quit
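The interactive session above can also be scripted. Below is a minimal sketch (my own addition, not how I originally did it) that only prints the equivalent non-interactive parted commands so you can review them first; partitioning is destructive, so verify the device names with lsblk before removing the leading echo.

```shell
# Sketch: print the non-interactive equivalent of the parted session above.
# The commands are echoed (dry run); remove the leading "echo" to actually
# run them. DISKS is an assumption -- check yours with lsblk first.
DISKS="sda sdb sdc"
for d in $DISKS; do
  # -s suppresses prompts; 100% spans the whole disk instead of hard-coding 2TB
  echo sudo parted -s "/dev/$d" mklabel gpt mkpart primary ext4 1MiB 100%
done
```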
After that, the RAID5 array can be created (note the double dashes):
sudo mdadm --create --verbose --force --assume-clean /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
To confirm that the array is running (this command is also useful for checking whether the RAID is active after a reboot):
cat /proc/mdstat
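If you ever want to script this check (say, from a cron job), here's a minimal sketch. The function name and the optional file argument are my own additions for illustration; in real use you would just call it with the device name and let it default to /proc/mdstat.

```shell
# raid_active: succeed (exit 0) when the given md device is listed as
# "active" in an mdstat-style file. The second argument exists mainly so
# the function can be exercised against a sample file; it defaults to the
# real /proc/mdstat.
raid_active() {
    dev="$1"
    mdstat="${2:-/proc/mdstat}"
    grep -q "^${dev} : active" "$mdstat"
}
```

Usage: `raid_active md0 || echo "RAID is down!"`.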
Create the ext4 filesystem with (this takes a while to complete, as it formats the whole array):
sudo mkfs.ext4 -F /dev/md0
Let's create a directory which will serve as the mount point:
sudo mkdir -p /mnt/raid5
And mount with:
sudo mount /dev/md0 /mnt/raid5
To check if everything is ok, issue the following (if lost+found is missing, then something is wrong):
ls -al /mnt/raid5
To check capacity (for RAID5, usable space is (n - 1) x disk size, so 3 x 2TB yields roughly 4TB):
df -h -x devtmpfs -x tmpfs
Save the configuration so it starts up at boot time automatically:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
The Raspberry pi uses an initramfs (a RAM disk) during boot, so we want it to know about the RAID:
sudo update-initramfs -u
Add the RAID to the filesystem table so it gets mounted automatically at boot (nobootwait is an old Upstart option that Raspbian's systemd does not recognize; nofail alone is enough to keep a missing array from blocking boot):
echo '/dev/md0 /mnt/raid5 ext4 defaults,nofail 0 0' | sudo tee -a /etc/fstab
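One caveat with the fstab line above: the /dev/md0 name is not guaranteed to stay stable (after a degraded assembly the kernel may bring the array up as /dev/md127, for example). Mounting by filesystem UUID sidesteps this. A sketch of the alternative entry, with a placeholder UUID (get the real one from `sudo blkid /dev/md0`):

```
# /etc/fstab -- the UUID below is a placeholder; substitute the output of
#   sudo blkid /dev/md0
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/raid5 ext4 defaults,nofail 0 0
```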
Cross your fingers and issue:
sudo reboot
Log back in to your pi and again check that lost+found is present:
ls /mnt/raid5
The best thing about experimentation is learning. What do you do when things go out of hand? How do you fix stuff?
(1) Check the state of your RAID:
cat /proc/mdstat
(2) To stop your RAID:
sudo mdadm --stop /dev/md0
(3.1) If your RAID doesn't come up one day, first check the output of the following (take note especially of the /dev/md*** device name):
sudo mdadm --examine --scan
(3.2) Append the result to the config:
sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
(3.3) Edit the entries in your /etc/fstab with the result of (3.1)
(3.4) Reassemble:
sudo mdadm --assemble --scan --force -v /dev/md** /dev/sda1 /dev/sdb1 /dev/sdc1
(4) Identify the disks you have:
cat /proc/partitions
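Step (3.1) says to take note of the /dev/md*** name; if you'd rather grab it in a script, here's a small sketch (the function name is mine, and it assumes the standard `ARRAY /dev/... ...` line format that `mdadm --examine --scan` prints):

```shell
# md_device: read "mdadm --examine --scan" output on stdin and print the
# device path from the first ARRAY line, e.g. a line like
#   ARRAY /dev/md/0 metadata=1.2 UUID=... name=raspberrypi:0
# yields /dev/md/0. Assumes the standard one-array-per-line format.
md_device() {
    awk '/^ARRAY/ { print $2; exit }'
}
```

Usage: `sudo mdadm --examine --scan | md_device`.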
QED