Hi AlmaLinux people and specialists, and not to forget jlehtone, who helped me in the past with some serious magic.
Again there is an issue with the PERC H200. I wish I had never started using it in the first place. However, it worked for two years, so that is not too bad, I guess.
I simply don’t understand how to use it.
I had a problem on my Dell PowerEdge T310 server with a second RAID 1 virtual disk made up of two drives. Not the system disk: I have another RAID 1 that contains the system. So, a total of four drives and two RAID virtual disks.
Or, in other words: two RAID 1 sets of two physical drives each.
I lost one of these RAID configurations. I did something stupid, and now I have two drives of healthy data that together form a RAID 1; however, the RAID configuration itself is lost.
In the PERC H200 BIOS tool I can see two separate drives. The data is still accessible.
I assumed there was a method to import a foreign set of disks on the same H200 controller. In fact it is not foreign at all: the lost RAID was created on this same H200 controller. However, there is no option to import a set of drives containing a previously created RAID 1.
Dell says there is the OpenManage Server Administrator tool that should be able to handle imports for this H200 card. Strange: can that software do more than the BIOS of the H200 card itself? I don’t understand that.
(For the PowerEdge T310 in combination with the PERC H200i.)
I tried to download this version:
File Format: A gnu zip file for software installation
File Name: OM-SrvAdmin-Dell-Web-LX-11.1.0.0-5747.RHEL8.x86_64_A00.tar.gz
File Size: 163.79 MB
It reports all kinds of dependency problems.
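To show what I mean, here is a quick check to list which required capabilities are actually missing. A minimal sketch: it assumes the tarball has been unpacked below the current directory, and it just shells out to rpm.

```python
#!/usr/bin/env python3
# List the unmet dependencies of the unpacked OpenManage RPMs.
# Assumes the tar.gz has been extracted below the current directory.
import glob
import subprocess

for pkg in sorted(glob.glob("**/*.rpm", recursive=True)):
    # 'rpm -qpR' prints the capabilities a package file requires
    out = subprocess.run(["rpm", "-qpR", pkg],
                         capture_output=True, text=True, check=True).stdout
    for req in sorted({l.split()[0] for l in out.splitlines() if l.strip()}):
        if req.startswith("rpmlib("):
            continue  # internal rpm capabilities, always satisfied
        # 'rpm -q --whatprovides' asks the installed rpm database who provides it
        have = subprocess.run(["rpm", "-q", "--whatprovides", req],
                              capture_output=True).returncode == 0
        if not have:
            print(f"{pkg}: missing {req}")
```

Installing the packages with dnf install ./*.rpm (rather than plain rpm -i) at least lets dnf pull whatever dependencies it can find from the enabled repositories.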
So, in the end, my questions are:
Is there another version that will work? (Dell only supports Red Hat, and for the Dell T310 officially only Red Hat 6…)
And, if it could be installed, would I be able to use it to import my lost RAID drives back as a RAID 1 virtual disk on the H200 controller, the way it was?
Is there perhaps another way to restore my set of RAID 1 drives that I didn’t think of? I can’t be the only one facing this issue. Is there a recommended (or AlmaLinux) procedure?
I am really starting to dislike the H200. After these experiences I have found that it can only create a RAID or delete it; there is simply no other function on it. There is a Manage button, but that only works on an existing, accepted, healthy RAID. It is too easy to destroy a healthy RAID 1, let alone migrate it, or move it and put the drives back in their bays.
Sorry for this long text.
The simplest solution would be to upgrade the H200 to an H700 card. These are much more capable and should be able to import your RAID configuration. The H200 is a very basic card.
The H700 also works out of the box with genuine RHEL 8.5+ and 9.x without needing non-standard kernels. With battery and PCI bracket, they run to under £25 delivered in the UK on eBay. Bear in mind that when we get to 10.x your system is not going to make the x86-64 v3 cut.
“The H700 also works out of the box with genuine RHEL 8.5+ and 9.x without needing non-standard kernels”
Does this mean I can simply change the card to an H700, connect the RAID drives to the H700, import the RAID drives as virtual disks on the H700, and afterwards simply boot the machine up, without any hassle with incompatible PERC controller drivers like happened in this article:
and boot up the AlmaLinux system instantly?
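To check the driver side up front, I suppose I could list which kernel driver is bound to each storage controller. A minimal sketch; the driver names in the comment are what I understand these cards normally use, not verified on my machine:

```python
#!/usr/bin/env python3
# List PCI storage controllers and the kernel driver bound to each.
# The H200 normally binds mpt2sas and the H700 megaraid_sas.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    # PCI class 0x01xxxx is "mass storage controller"
    with open(os.path.join(dev, "class")) as f:
        if not f.read().startswith("0x01"):
            continue
    link = os.path.join(dev, "driver")
    driver = os.path.basename(os.readlink(link)) if os.path.islink(link) else "(none)"
    print(f"{os.path.basename(dev)}: {driver}")
```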
“Bear in mind that when we get to 10.x your system is not going to make the x86-64 v3 cut”
Does this mean that from 10.0 onwards the old T310 can no longer receive security updates?
Which year will that be?
Thanks for the outstanding advice. I will check out prices for the PERC H700.
I am pretty sure that when I upgraded the R710 we had with an H200 to an H700 to match the others, it imported the disk array. I can’t be 100% sure, as it was a couple of years ago, so my memory is a bit fuzzy. It promptly got reimaged anyway. I recently upgraded the cards in some R330s from H330 to H730 and could import the RAID arrays (some trickledown machines with a mix of H330 and H730 all upgraded to the same specification).
Regarding RHEL 10.x: in the same way that RHEL 8 and 9 require an x86-64 v2 CPU, RHEL 10+ requires an x86-64 v3 compatible CPU. That is, you are going to need a 13th-gen Dell machine, I believe; the 11th and 12th gen don’t support a sufficiently recent CPU, as you need AVX2. Even on the 13th gen, you might need a processor upgrade. We have an R330 with a Pentium G4600 for licensing purposes (it only has two cores), which is not an x86-64 v3 CPU, so it will either need a CPU replacement or a new server.
All that said, we are still on 8.x as we need to get rid of NIS (we are an HPC site, and the NIS database is ~20 years old at this point) first, and it will be a couple of years at least before we need to consider ditching the remaining hardware we have that is too old. By this time, most of it will have gone anyway. All the C6220 nodes are scheduled for decommissioning next year to get some trickledown C6420s instead.
The el8 works on all x86-64. It is the el9 that requires x86-64-v2 (e.g. SSE4), and el10 will require x86-64-v3 (e.g. AVX2).
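One can check what a given CPU provides from /proc/cpuinfo. A minimal sketch; the flag sets below approximate the v2/v3 definitions, using the names the kernel reports (pni is SSE3, abm covers LZCNT):

```python
#!/usr/bin/env python3
# Rough check of the x86-64 microarchitecture level of this CPU,
# based on /proc/cpuinfo flags. The sets approximate the v2/v3 lists.
V2 = {"cx16", "lahf_lm", "popcnt", "pni", "sse4_1", "sse4_2", "ssse3"}
V3 = V2 | {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "abm", "movbe", "xsave"}

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

if V3 <= flags:
    print("x86-64-v3: fine for el10")
elif V2 <= flags:
    print("x86-64-v2: fine for el9, not for el10")
else:
    print("baseline x86-64: el8 only")
```

On glibc 2.33 or newer, running /lib64/ld-linux-x86-64.so.2 --help should also print which levels the system supports.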
However, Alma Kitten (10) – a development branch for AlmaLinux 10 – does currently have both x86-64-v2 and x86-64-v3 builds. If there ends up being an x86-64-v2 build of AlmaLinux 10, then the old gear can live a bit longer. Just don’t expect third parties, like EPEL, to target that platform (el10 x86-64-v2).
My bad, I thought it was RHEL 8 that required x86-64 v2. It all gets a bit fuzzy, as that change had zero impact on us at work, though I was sure the Atom-based server at home didn’t work on RHEL 8.
However, supporting x86-64 v2 in any RHEL 10 rebuild is idiotic in the extreme. You would be dafter than a brush to run it, as therein lies a pit of woe that should be obvious to even the stupidest of admins.
In Dell terms, you are talking about a Gen 12 machine or older, so it’s already over a decade old. By the time RHEL9 goes EOL, that machine will be the best part of 20 years old. You would have to be a lunatic to be running that in production.
To further put it in perspective, if you were using a server as old as that today, it would have SCSI hard drives.
We don’t use the latest and greatest hardware either; that is why I have four Dell PowerEdge R710s as file servers, a rack full of C6220s, and a bunch of Dell R420s. The R710s are already a pain to deal with, because the iDRAC won’t talk to a modern browser and needs Java for the KVM. By the time RHEL 9 support runs out, these machines will be 20 years old. It is not just that there is no requirement to run the latest and greatest hardware; there is no requirement to run the latest and greatest software on a machine either.
However, if you think running production workloads on 20 year old hardware is remotely sensible, you are completely off your trolley.
Even if AlmaLinux is stupid enough to do a production x86-64 v2 rebuild of RHEL 10, you would be even stupider to actually run it. Let’s put it another way: I have been using RHEL and various rebuilds for over 20 years now, across thousands of machines, and I can count on my fingers the number of those that have not used a third-party repository. So unless Alma does an x86-64 v2 rebuild of EPEL etc., it is even more useless. By using it, one is basically pissing into the wind, and Alma is wasting time and resources.
“However, if you think running production workloads on 20 year old hardware is remotely sensible, you are completely off your trolley.”
Your production workloads are different from someone else’s production workloads.
There will be an AlmaLinux 10 x86_64 v2 build. That has already been decided by ALESCo.
It will not take anything away from the main AlmaLinux 10 build which will be x86_64 v3.
Rebuilding EPEL for v2 hasn’t been decided yet, but I’d say that it is highly likely to happen. There’s a strong feeling within ALESCo that we should do it; there just hasn’t been a decision made yet.
Building for v2 takes little to no extra effort versus v3. It’s just a build-time flag, so it’s not like we’re pissing away valuable resources to do it. It’s a small amount of work that makes AlmaLinux as accessible as possible to the masses.
The largest “resource” is probably space on repo mirrors?
There are “the masses”, and users of an el10 x86_64 v2 build could be “stupid”, but what is certain is that the “stupid masses” do what they do whether we enable them or not. I like to think that the x86_64 v2 build is the lesser evil. I could be wrong.
Thank you all for this complete analysis of the AlmaLinux future. It gives a very warm feeling when, in answer to a question about a RAID controller, the complete AlmaLinux architecture gets reconsidered. I took the advice and ordered an H700 card, which was very affordable: including a set of mini-SAS cables it came to 60 euros, shipping from Germany included. Thanks jabuzzard! Right now, the poor old Dell bugger is still initialising the newly created virtual disk. Maybe this process, AND the import of a previously healthy existing set of RAID drives, will get much easier and quicker. We will see; it is only a hobby in my case. It would just have been a loss of many hours of work if I had lost the system partition.
Maybe another question related to that. I put the OS system partition on the RAID drive together with some other data partitions. Reading further on forums, it seems experienced admins choose a separate drive, for example an SSD, to install the OS on. I have two RAID virtual disks. I installed the OS on a separate partition on the first one, and I created an Acronis system backup of the OS root partition on the second RAID. Is that good practice?
Thanks all, and have warm days with family, friends, and/or other important ones.