"At least 48MB more space needed on the / filesystem" running leapp upgrade

Hi,

I’m trying to upgrade from Rocky Linux 8.9 to Rocky Linux 9 using the leapp upgrade. At some point the process stops and shows the following error:

Error Summary
-------------
Disk Requirements:
   At least 48MB more space needed on the / filesystem.

We had a similar problem in the past when upgrading from CentOS 7 to Rocky Linux 8, and learnt that it was caused by the overlay (OVL) image size rather than by the space actually available in the filesystem, so we fixed it by increasing the size assigned to the overlay (i.e. export LEAPP_OVL_SIZE=5000). Although this worked for the previous migration, increasing the OVL size does not seem to fix the issue this time.
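
For reference, this is roughly what I tried this time before running leapp again (the value is just what we used previously):

    export LEAPP_OVL_SIZE=5000   # overlay image size in MB used by leapp
    echo $LEAPP_OVL_SIZE         # confirm it is set in the current shell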

This is the output when I run ‘df -h’:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  488K   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/nvme0n1p1  100G   41G   60G  41% /
/dev/nvme2n1    5.0G  666M  4.4G  14% /var/lib/docker
tmpfs           3.1G     0  3.1G   0% /run/user/1003

Any ideas of what else I could try?

Thanks,
Tedi

I also ran into this. For me it was “an overlay issue”. A lot more digging got me this:

Try setting the following in your shell before running the leapp commands.
export LEAPP_OVL_SIZE=4096

You can do this manually or put it in your root .bash_profile.
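
For example, to make it stick for root (just one way to do it, assuming bash is root’s login shell):

    echo 'export LEAPP_OVL_SIZE=4096' >> /root/.bash_profile
    source /root/.bash_profile
    echo $LEAPP_OVL_SIZE   # should print 4096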

It was indeed my fault, since I missed the ‘--preserve-env’ option when running the upgrade command. For the benefit of others having the same problem, this should work:

sudo --preserve-env leapp upgrade
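
If you’d rather not pass the whole environment through sudo, newer sudo versions (1.8.21 or later) also take a list of variables to preserve, which should behave the same here:

    sudo --preserve-env=LEAPP_OVL_SIZE leapp upgrade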

I too am having this issue with a CentOS 7 → AlmaLinux 8 leapp upgrade. I get a different number, 749MB, but the same problem.

  • I was originally running in tcsh as root, so I did
    setenv LEAPP_OVL_SIZE 4096
    but it still failed.
  • Thinking tcsh might be the problem, I switched to bash and ran
    export LEAPP_OVL_SIZE=4096
    and still no joy. I even tried Tedi’s approach and, even though I was root, used the full command
    sudo --preserve-env leapp upgrade
    and it is still failing out.

I’m running on a 1 TB RAID set, so I really don’t think I’m actually running out of space. Any other thoughts?
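
For what it’s worth, the quick sanity check I use to confirm the variable actually reaches the sudo’ed command (just a diagnostic, not a fix):

    sudo --preserve-env env | grep LEAPP_OVL_SIZE   # should print LEAPP_OVL_SIZE=4096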

Is there really no work-around or fix for this error in leapp? I thought I’d try the ELevate/leapp migration rather than a full new install and build-out, but it isn’t looking so good if this is a show-stopper.

Here are the screen clips from the last attempt:

246 dunwellguitar3:/root 
> whoami
root
247 dunwellguitar3:/root 
> setenv LEAPP_OVL_SIZE 2048
248 dunwellguitar3:/root 
> echo $LEAPP_OVL_SIZE
2048
249 dunwellguitar3:/root 
> leapp preupgrade
==> Processing phase `configuration_phase`
====> * ipu_workflow_config
        IPU workflow config actor
==> Processing phase `FactsCollection`
.
.
. Lots of stuff here
.
.
Debug output written to /var/log/leapp/leapp-preupgrade.log

============================================================
                           REPORT           THIS IS GREEN
============================================================

A report has been generated at /var/log/leapp/leapp-report.json
A report has been generated at /var/log/leapp/leapp-report.txt

============================================================
                       END OF REPORT       THIS IS GREEN   
============================================================

Answerfile has been generated at /var/log/leapp/answerfile

Started leapp upgrade; no screen clip of the whole run because it exceeded the scroll-back buffer.
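
(A way around that on a future run is to capture the console with script from util-linux, e.g.

    script -c "sudo --preserve-env leapp upgrade" /root/leapp-console.log

though leapp writes the detailed log to /var/log/leapp/leapp-upgrade.log anyway.)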

- skipped over previous packages
- downloaded packages
--------------------------------------------------------------------------------
Total                                           2.3 MB/s | 2.1 GB     15:50     
Running transaction check
Transaction check succeeded.
Running transaction test
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.

STDERR:
No matches found for the following disable plugin patterns: subscription-manager
Repository extras is listed more than once in the configuration
Repository extras-source is listed more than once in the configuration
Warning: Package marked by Leapp to install not found in repositories metadata: python3-nss
Warning: Package marked by Leapp to upgrade not found in repositories metadata: leapp-upgrade-el7toel8 gpg-pubkey leapp python2-leapp
RPM: warning: Generating 6 missing index(es), please wait...
Error: Transaction test error:
  installing package linux-firmware-20240610-122.git90df68d2.el8_10.noarch needs 13MB on the / filesystem
.
. 
. installed lots of packages
.
.
  installing package libstdc++-8.5.0-22.el8_10.i686 needs 2441MB on the / filesystem
  installing package libattr-2.4.48-3.el8.i686 needs 2441MB on the / filesystem
  installing package pcre-8.42-6.el8.i686 needs 2442MB on the / filesystem
  installing package nss-softokn-freebl-3.90.0-7.el8_10.i686 needs 2443MB on the / filesystem

Error Summary
-------------
Disk Requirements:
   At least 2588MB more space needed on the / filesystem.



============================================================
                       END OF ERRORS                        
============================================================


Debug output written to /var/log/leapp/leapp-upgrade.log

============================================================
                           REPORT                           
============================================================

A report has been generated at /var/log/leapp/leapp-report.json
A report has been generated at /var/log/leapp/leapp-report.txt

============================================================
                       END OF REPORT       THIS IS RED
============================================================

Answerfile has been generated at /var/log/leapp/answerfile
2024-07-24 13:56:35.397 ERROR    PID: 12578 leapp: Upgrade workflow failed, check log for details

------------------ End Of Screen Clips -----------
Just FYI, the install created eleven 2.1 GB volumes on the root desktop at some point in the process.

After the install failure, the filesystem looks like this:

> lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda         8:0    0 931.5G  0 disk  
├─sda1      8:1    0   500M  0 part  
├─sda2      8:2    0 904.1G  0 part  
│ └─md125   9:125  0   904G  0 raid1 /home
├─sda3      8:3    0    25G  0 part  
│ └─md127   9:127  0    25G  0 raid1 /
├─sda4      8:4    0     1K  0 part  
└─sda5      8:5    0   1.9G  0 part  
  └─md126   9:126  0   1.9G  0 raid1 [SWAP]
sdb         8:16   0 931.5G  0 disk  
├─sdb1      8:17   0   500M  0 part  /boot
├─sdb2      8:18   0 904.1G  0 part  
│ └─md125   9:125  0   904G  0 raid1 /home
├─sdb3      8:19   0    25G  0 part  
│ └─md127   9:127  0    25G  0 raid1 /
├─sdb4      8:20   0     1K  0 part  
└─sdb5      8:21   0   1.9G  0 part  
  └─md126   9:126  0   1.9G  0 raid1 [SWAP]
sr0        11:0    1  1024M  0 rom   
loop0       7:0    0     2G  0 loop  /run/media/root/bf6aaa95-e5e8-4fa7-80cb-cd878c778053
loop1       7:1    0     2G  0 loop  /run/media/root/89c43a3e-dcd4-492d-8bff-2e3d4f9f9330
loop2       7:2    0     2G  0 loop  /run/media/root/d79dc041-af20-49d3-a6c8-07242282cd13
loop3       7:3    0     2G  0 loop  /run/media/root/856b3ec1-582a-4409-a6a5-d534735cc195
loop4       7:4    0     2G  0 loop  /run/media/root/c370c465-8150-4575-91a1-4f2e2255dc1c
loop5       7:5    0     2G  0 loop  /run/media/root/5b7a673d-0079-4b0a-9710-7d249260ff21
loop6       7:6    0     2G  0 loop  /run/media/root/38275831-8d65-4faf-b4ff-61953e98e46d
loop7       7:7    0     2G  0 loop  /run/media/root/e02c827c-c358-44fb-a1ee-49ddc07e8739
loop8       7:8    0     2G  0 loop  /run/media/root/d7d9a296-7209-4a89-baa9-7fadcc877eb3
loop9       7:9    0     2G  0 loop  /run/media/root/f6e3decd-8cef-4071-9d62-526417126f22
loop10      7:10   0     2G  0 loop  /run/media/root/50c59ad8-6601-42f8-a7f7-2fd1953b3b81


sda and sdb are the RAID 1 disks, and one assumes the loop devices are the eleven volumes showing up on the desktop.
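
In case it helps anyone else poking at the same thing, the loop files can be inspected and, once unmounted, detached with something like this (the exact paths leapp uses may vary by version):

    losetup -a              # list loop devices and the files backing them
    du -sh /var/lib/leapp   # leapp’s working area, where the overlay images normally live
    # once a volume is unmounted, a stale loop device can be detached:
    # umount /run/media/root/<uuid> && losetup -d /dev/loopN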

Looks like you’ve set it to 2048, not 4096 (setenv LEAPP_OVL_SIZE 2048 in the transcript above).

I wonder if it could also be the software RAID setup…?

Good catch on the size there; I completely overlooked that on this iteration. I’ll re-run and verify the overlay size again tomorrow and report back.

I’m not sure how to make it non-RAID for the upgrade. I’ll eventually need it back on RAID 1 again, but I suppose I could do that after attaining AlmaLinux 9 status.


Back again. I have tried multiple ways and it still fails out with the “more space needed” error. I logged in as root and:

  • in tcsh, set the env var and ran the upgrade
  • in tcsh, set the env var and also ran it via sudo --preserve-env
  • repeated both modes in bash
    In every case I verified the environment variable was in fact set to 4096.

In all cases it continues to fail with the space-needed error.
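
Between attempts I have also been sanity-checking the space situation with something like this (just diagnostics; the exact paths leapp uses may differ by version):

    df -h /                         # real free space on the root filesystem
    du -sh /var/lib/leapp/*         # leftovers from earlier runs (scratch images, downloaded packages)
    grep -i space /var/log/leapp/leapp-upgrade.log | tail -20    # what the last run actually complained about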

One last report here.
I got to figuring that this whole system was probably pretty goofed up from all these tries, so I went back and did a fresh dd clone from the original system and tried the whole process again with that. Here are the steps I took:

- Do a dd clone of sda and sdb to make a system image. See the dd clone sheet for details.

- log in as root
- 	sudo curl -o /etc/yum.repos.d/CentOS-Base.repo https://el7.repo.almalinux.org/centos/CentOS-Base.repo
	sudo yum upgrade -y
	sudo reboot
        cat /etc/redhat-release
     Reported          CentOS Linux release 7.9.2009 (Core)

- 	sudo yum install -y http://repo.almalinux.org/elevate/elevate-release-latest-el7.noarch.rpm

- 	sudo yum install -y leapp-upgrade leapp-data-almalinux

- rpm -qa kernel\* to list all the installed kernels, then use yum remove to get rid of all kernel packages for versions other than the latest (one way to do this is sketched just below).
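  For example (just one way to do it; package-cleanup comes from yum-utils on EL7):
      rpm -qa kernel\*                         # list installed kernel packages
      yum install -y yum-utils                 # provides package-cleanup
      package-cleanup --oldkernels --count=1   # keep only the newest kernel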

- sudo leapp preupgrade
  to check things out. Then:
    Made six 2.1 GB volumes on the desktop.
    Inhibited because of the answer file.

- check whether these inhibitors have already been dealt with
  Check whether the pata_acpi kernel module is loaded with lsmod | grep pata_acpi. If it is still there, run
      sudo rmmod pata_acpi
  --- not needed, it was already gone
  Then look at /etc/ssh/sshd_config to see whether PermitRootLogin is set. If not, run
      echo PermitRootLogin yes | sudo tee -a /etc/ssh/sshd_config
  --- not needed, already there
  Then look in the /var/log/leapp/ area for the answerfile and check for the remove_pam answer set by the command below. If it is not there, run the command to set it:
      sudo leapp answer --section remove_pam_pkcs11_module_check.confirm=True
  --- needed, so I ran it. Maybe leapp preupgrade should do this step too.
- rerun the preupgrade until you get green
    Made more 2.1 GB volumes, twelve total on the desktop. Got the green OK.

- switch to bash
    Got a message when switching to bash:
      ABRT has detected 2 problem(s). For more info run: abrt-cli list --since 1449612248
    Went back to tcsh and then back to bash again without problems.

    export LEAPP_OVL_SIZE=4096
    echo $LEAPP_OVL_SIZE   (check that it is 4096)
    sudo --preserve-env leapp upgrade
      Made another three 4.3 GB desktop volumes.

Fails out with
Error Summary
-------------
Disk Requirements:
   At least 749MB more space needed on the / filesystem.

So as you can see, it still failed out with the same error. I can only conclude that the whole leapp upgrade process is half-baked and not really a working option. That is too bad, because running the preupgrade and upgrade you can see that there are about a gazillion downloads, installs, scriptlets, processes and checks that all go without a hitch. That is some serious good work from the code monkeys!

If anyone has any further ideas I’d be glad to hear them. Otherwise it looks like I’ll be doing a full install of 9 and a build-out and tweak of the whole blasted system.