How to install on Hardware RAID?

I have a brand new system built around an ASUS Prime Z690-P D4 motherboard and a Core i9-12900KS CPU.
This motherboard offers hardware RAID5, so I wanted to give it a try. That seems simpler than doing it in software (as I usually do).

However, upon booting into setup (from a USB drive) for AlmaLinux 8.6 Minimal, I find that no hard drives show up. I double-checked, and a nice RAID5 array has been set up in the BIOS: total size 8 TB. I have three 4 TB drives in the machine, so that 8 TB number is exactly right for RAID5 (one drive’s worth of capacity goes to parity).

It would seem the only problem is that the AlmaLinux installer is not loading the drivers for RAID, so it can’t see the RAID array that’s there.

How can I fix this?

Thanks in advance –

Matthew

That board does not have hardware RAID. What it has is Intel® Rapid Storage Technology on the chipset, aka fake RAID. Such RAID is in practice implemented in software.

Linux software RAID should be the easier option.
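
If the SATA mode is switched from RAID to AHCI so the installer sees the disks individually, creating the array is nearly a one-liner. A minimal sketch, assuming the three drives show up as /dev/sda, /dev/sdb, and /dev/sdc (check lsblk first):

$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
$ cat /proc/mdstat    # watch the initial parity sync progress

The array is usable immediately; the initial parity sync just runs in the background.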

Yup, never trust consumer-grade RAID controllers. Hell, never trust enterprise ones either!

Is software RAID really to be preferred over motherboard-based RAID? It seems like it would be simpler to control it at the BIOS level, and then Linux would just see a nice monolithic hard drive to work with. What are some of the advantages of software RAID?

I’ve never heard that consumer-grade RAID controllers are that bad or unreliable. But mere software is rock solid? I guess I haven’t wrapped my brain around that one yet.

It doesn’t – Linux still sees the individual disks. Here is an example:

$ lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda            8:0    0   1.8T  0 disk  
└─md126        9:126  0   1.8T  0 raid1 
  ├─md126p1  259:11   0   800G  0 md    
  └─md126p2  259:12   0   600G  0 md    
sdb            8:16   0   1.8T  0 disk  
└─md126        9:126  0   1.8T  0 raid1 
  ├─md126p1  259:11   0   800G  0 md    
  └─md126p2  259:12   0   600G  0 md    
sr0           11:0    1  1024M  0 rom   
nvme0n1      259:0    0 745.2G  0 disk  
...

$ cat /proc/mdstat 
Personalities : [raid1] 
md126 : active (auto-read-only) raid1 sda[1] sdb[0]
      1953511424 blocks super external:/md127/0 [2/2] [UU]
      
md127 : inactive sda[1](S) sdb[0](S)
      6320 blocks super external:imsm
       
unused devices: <none>

$ lspci -nn -s 00:17.0
00:17.0 RAID bus controller [0104]: Intel Corporation SATA Controller [RAID mode] [8086:2822] (rev 31)

The kernel sees every disk directly and mdadm assembles the arrays. There used to be separate tools (dmraid) for fakeRAIDs, but that support is now integrated into the Linux software RAID stack: mdadm handles Intel’s IMSM metadata natively, as the “super external:imsm” in the mdstat output above shows.
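
If you want to look at that metadata yourself, mdadm can read it directly. A sketch – the device name is from the example above, and the exact output varies per system:

$ sudo mdadm --examine /dev/sda     # dumps the IMSM metadata stored on a member disk
$ sudo mdadm --detail-platform      # shows what the chipset's RST firmware claims to support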

On this example machine the array is RAID-1, a mirror, which is trivial – one can read a single disk without assembling the array. The striping RAID modes are not so easy. I have had this array “fall apart” a couple of times: no array was assembled and the OS saw two disks with identical content – really bad when “both filesystems” have the same UUID. Luckily, this array is only for data; no OS, no homes.
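
(To illustrate reading a mirror half directly, bypassing md entirely – mount it read-only so the halves do not diverge. A hypothetical example; the partition name is an assumption:)

$ sudo mount -o ro /dev/sda1 /mnt    # read one RAID-1 member without assembling the array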

The only thing that the “RAID firmware” does is initialize the array metadata on the drives (and perhaps provide some support for booting). Let’s say you have striped RAID-5. EFI must load the bootloader from the drive(s). Is the bootloader on a single drive, or is it striped over multiple drives? The bootloader must then load the OS kernel and initramfs, which surely span multiple disks. How can it do that? The kernel uses the initramfs to assemble the array and mount it. I have no idea how fakeRAID gets all this done. (Then again, something loads the kernel and initramfs from hardware RAID too – the hardware controllers do indeed have their own firmware and drivers.)
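
On an installed system you can at least verify that the assembly tooling really is inside the initramfs. A sketch, assuming a dracut-built initramfs as on AlmaLinux:

$ lsinitrd | grep -i mdadm    # lists the mdadm binary/config baked into the running kernel's initramfs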

In earlier days (outside of Linux) there used to be a question: are Intel chipset RAID X and Y compatible? Can one migrate an array from one board to another, or is “wipe and create” necessary? That is, (backward) compatibility was not ensured even within one brand.

I once had a RAID-1 array on NVidia chipset fakeRAID. I moved the disks to another motherboard, where they were connected to an LSI chip. Linux continued to assemble the NVidia array, because it saw the metadata created by the first motherboard. This revealed that the stuff “in the BIOS” is utterly meaningless. However, the LSI firmware cannot make changes to metadata written by the NVidia firmware; at worst it partially overwrites the foreign metadata.

With Linux software RAID you have complete control over what you have, and it is independent of whatever disk controllers the system happens to have. You can use whole disks for an array and create partitions within the array, or (more commonly) partition the disks and create arrays from selected partitions. It is much less mind-boggling to have the ESP and /boot on RAID-1 and the other filesystems on “fancier” RAID modes.
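
A sketch of that last layout, with all device names assumptions: three identically partitioned GPT disks, where sdX1 is the ESP, sdX2 is for /boot, and sdX3 takes the rest:

$ sudo mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
$ sudo mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3

Metadata version 1.0 sits at the end of each member, so firmware and bootloader can read a /boot (or ESP) member as a plain filesystem – that is the usual trick for mirroring them.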

I think you’ve convinced me.

One more question – HOW do I get to a prompt of some sort in AlmaLinux so I can create my software RAID array? There’s no “live CD” element, so I can’t just do a temp install of mdadm, etc., and set up the RAID drive(s). Do I have to use a live CD of some other distro to do this before I boot into the AlmaLinux installer?

Thanks!