| Nick | Message | Time |
|---|---|---|
| freaxeh2 | I'm having mdadm issues | 00:25 |
| freaxeh2 | https://paste.debian.net/1349551/ | 00:26 |
| rwp | mdadm: no recogniseable superblock on /dev/sda1 mdadm: /dev/sda1 has no superblock - assembly aborted | 00:27 |
| freaxeh2 | yeah I just created the array | 00:27 |
| rwp | And then you try using /dev/sda which tells me that you are guessing at what's there. That's not good. | 00:27 |
| rwp | What command did you use to create the array? | 00:28 |
| rwp | Normally one would create an array with a command similar to this one: mdadm --verbose --create /dev/md1 --level=mirror --raid-devices=2 /dev/hdX1 /dev/hdY1 | 00:29 |
| freaxeh2 | mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 | 00:29 |
| freaxeh2 | actually i didn't use that command | 00:30 |
| rwp | As a hint, I suggest keeping the md number the same as the partition number. It is arbitrary, but it keeps things easier to remember later. So md1 when working with sda1, and so forth. | 00:30 |
| freaxeh2 | here's the command i used from history: | 00:30 |
| freaxeh2 | mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd | 00:31 |
| rwp | Seems okay. In that case you should have an array running right at that moment. Run this to see its status: cat /proc/mdstat | 00:31 |
| freaxeh2 | Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] | 00:32 |
| freaxeh2 | unused devices: <none> | 00:32 |
| freaxeh2 | i can't seem to find it though in /dev | 00:32 |
| rwp | The /proc/mdstat is probably too large to copy into here and would need its own pastebin. | 00:32 |
| rwp | You can examine an individual partition to see how the configuration went using: mdadm --examine /dev/sda | 00:33 |
| rwp | If the md is created then you can get the details on it: mdadm --detail /dev/md0 | 00:33 |
| freaxeh2 | nope it just doesn't exist | 00:34 |
| freaxeh2 | /dev/md0 just doesn't exist | 00:34 |
| freaxeh2 | i did do a reboot after creating the array before saving its configuration | 00:34 |
| rwp | Then the array creation failed. Try it again. Look for errors. Browse /var/log/syslog to see if anything was logged there. | 00:34 |
| freaxeh2 | to mdadm.conf | 00:35 |
| freaxeh2 | ok | 00:35 |
| rwp | If the individual disks have been set up then "mdadm --examine /dev/sda" and on through the set will show something. | 00:35 |
| rwp | I suggest always using partitions because it makes things easier to understand later after having forgotten how they were set up. | 00:36 |
| freaxeh2 | https://paste.debian.net/1349552/ | 00:36 |
| freaxeh2 | that's the output of mdadm --examine | 00:36 |
| freaxeh2 | and mdadm --detail | 00:36 |
| rwp | It's certainly okay to use /dev/sda, but then later, when trying to poke at things with a stick on a different system, it is easier if it is /dev/sda1, so we get a normal result more easily. | 00:36 |
| rwp | That --examine does not show an array. Let me paste an example. https://paste.debian.net/1349553/ | 00:37 |
| freaxeh2 | thank you | 00:37 |
| rwp | Here is cat /proc/mdstat example: https://paste.debian.net/1349554/ | 00:38 |
| rwp | Example of --detail output: https://paste.debian.net/1349555/ | 00:39 |
| freaxeh2 | syslog just shows a bunch of gobbledygook | 00:55 |
| freaxeh2 | couldn't see anything related to mdadm | 00:55 |
| freaxeh2 | i'll recreate the array anyway | 00:56 |
| freaxeh2 | even though it's going to take like 6 hours | 00:56 |
| rwp | Hmm... By default mdadm will create version 1.2 arrays that include a bitmap. You should be able to reboot immediately and it will pick up sync'ing the array from where it left off. | 00:57 |
| rwp | Though I admit that 99.44% of the time I use RAID1 mirrors, and you are creating a RAID5, which I almost never do. | 00:57 |
| rwp | I am in the camp that RAID5 is not a good RAID level to use. I either use RAID1 mirrors or I use RAID6 with two disks of redundancy. Seen too many catastrophes when people use RAID5. | 00:58 |
| rwp | Search around and you will see a lot of reports about problems with RAID5. | 00:58 |
| rwp | It might be fine for you on a personal system though. The problem is that things are great while everything is working. But then a drive fails. If it is an active production site, the system is now running in degraded mode, which uses more CPU. That extra load might be enough that the site can't keep up and falls over. | 00:59 |
| rwp | Another problem is that files in long-term storage might not have been read for a long time, and during the array rebuild a second fault is found, at which point the array no longer has enough redundancy to reconstruct the data and it fails. | 00:59 |
| freaxeh2 | ok i'll use raid6 | 01:00 |
| freaxeh2 | the files i'm storing on it are semi-important and unimportant | 01:00 |
| freaxeh2 | a mixture | 01:00 |
| rwp | RAID6 on 4 devices is fine. But for simplicity you want RAID10, which is 2x RAID1 mirrors. | 01:00 |
| rwp | I have never created a RAID6 on fewer than 6x devices. (shrug) | 01:01 |
| freaxeh2 | ok i'll use raid10 then | 01:01 |
| freaxeh2 | lol | 01:01 |
| rwp | The advantage of RAID6 over RAID10 is that with RAID6 any two of the devices can fail in any combination. In RAID10 one device in each mirror can fail which adds up to two but if both sides of one mirror fail then the array is lost. | 01:02 |
| rwp | But RAID10 is much simpler and simple is usually a good way to go. | 01:02 |
| freaxeh2 | yep | 01:02 |
| freaxeh2 | yes K.I.S.S | 01:03 |
| rwp | Also remember that when creating a big array you really need a second storage place for backup. Because RAID is not backup. And at some point something will break and you will need backup. Because you will have a lot of data. | 01:04 |
| rwp | Additionally I always set up the smartmontools package with daily self tests in order to keep poking at the drives. SMART won't predict a failure but it will confirm a failure. | 01:04 |
| rwp | If you have a failure don't delay in doing something about it. I have twice now been involved in array recoveries where I was called in because one drive failed and everything worked so they did nothing. Then too many drives failed and they did not have enough to recover. It was too late by then. | 01:05 |
| rwp | It's good to start on the simple side of things, build up experience using it, then graduate to more complicated configurations building upon the experience with the simpler ones. | 01:06 |
| rwp | Good luck! Have fun! :-) | 01:07 |
| * | UsL adds even more wisdom to his rwp_thing.txt | 01:24 |
| UsL | thanks. | 01:24 |
| UsL | *things | 01:24 |
| * | rwp feels warm all over and thanks UsL for the kind words! | 01:25 |
| UsL | last edited 2023-01-21. I've been afk for way too long. | 01:28 |
| freaxeh2 | yep raid is not backup, learnt that a long time ago. | 02:06 |
| Xenguy | rwp, smartmontools ? | 03:02 |
| rwp | https://packages.debian.org/sid/smartmontools | 03:08 |
| rwp | And then use something like this in the smartd.conf file: /dev/sda -a -I 190 -I 194 -o on -S on -s (S/../../[1-5]/03|L/../../6/03) -m root -M exec /usr/share/smartmontools/smartd-runner | 03:10 |
| rwp | This paste has explanations of the arcane syntax to some extent. https://paste.debian.net/1349573/ | 03:11 |
| Xenguy | tx | 03:12 |
| * | rwp is distracted while at a user group meeting IRL | 03:13 |
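
A minimal sketch of the create-and-inspect workflow rwp walks through above, assuming the same four disks (/dev/sda through /dev/sdd), the RAID10 layout freaxeh2 settled on, GPT partitioning with sgdisk, and the md1 naming hint; the tool choice and device names are illustrative, not taken from the log.

```sh
# Give each disk a single full-size partition typed as Linux RAID (fd00).
# Assumes GPT and the gdisk package; any partitioner works.
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    sgdisk --zap-all "$d"            # wipe old partition tables and signatures
    sgdisk -n 1:0:0 -t 1:fd00 "$d"   # one partition spanning the whole disk
done

# Create the array on the partitions (not the bare disks), as suggested above.
mdadm --verbose --create /dev/md1 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Confirm the array exists and is syncing before doing anything else.
cat /proc/mdstat
mdadm --detail /dev/md1
mdadm --examine /dev/sda1            # per-member view of the superblock
```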
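freaxeh2 rebooted before saving the array configuration; on Debian the usual follow-up, sketched here on the assumption the array assembled as /dev/md1 and ext4 is the filesystem of choice, is to record the array in /etc/mdadm/mdadm.conf and refresh the initramfs so it assembles at boot:

```sh
# Append the array definition to mdadm's config file (Debian location).
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array is assembled early at boot.
update-initramfs -u

# Then put a filesystem on it and mount it as usual, e.g.:
mkfs.ext4 /dev/md1
```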
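The smartd.conf directive rwp pasted at 03:10 is dense; below is the same line with a flag-by-flag gloss based on my reading of smartd.conf(5), so double-check it against the explanation paste linked above.

```
# /etc/smartd.conf -- one directive per monitored device, as given in the log:
/dev/sda -a -I 190 -I 194 -o on -S on -s (S/../../[1-5]/03|L/../../6/03) -m root -M exec /usr/share/smartmontools/smartd-runner
#
# Flag-by-flag:
#   -a             monitor the default set: health status, attributes, error and self-test logs
#   -I 190 -I 194  ignore changes in the temperature attributes (they fluctuate constantly)
#   -o on          enable automatic offline data collection
#   -S on          enable attribute autosave
#   -s (...)       schedule self-tests: Short test Mon-Fri at 03:xx, Long test Saturday at 03:xx
#   -m root        report problems to root
#   -M exec ...    deliver reports via Debian's smartd-runner dispatcher instead of plain mail
```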