I cannot boot if mdadm RAID1 array is in FSTAB

Context:

After a failed upgrade from 20.04 to 22.04, I decided to reinstall 22.04 from scratch. Everything went fine, but the system becomes unbootable if I leave the RAID entry in fstab. I have a 2-disk RAID1 array on sdc and sdd that is clean and works perfectly when mounted manually, but at boot I land in Emergency Mode and can only restore a normal boot by running nano /etc/fstab and commenting out the /mnt/raid1 entry.
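For reference, the kind of fstab entry I am talking about looks roughly like the sketch below. This is not my exact line; it just uses the md0 filesystem UUID that lsblk reports further down, and the commented variant with nofail / x-systemd.device-timeout is only a commonly suggested way to keep a missing array from stopping the boot, not something I have tested:

# /etc/fstab (hypothetical sketch, not my exact entry)
UUID=6f72b003-711e-47d7-8028-99b9d001ba99  /mnt/raid1  ext4  defaults  0  2
# variant often suggested so a missing array does not drop boot into emergency mode:
# UUID=6f72b003-711e-47d7-8028-99b9d001ba99  /mnt/raid1  ext4  defaults,nofail,x-systemd.device-timeout=10  0  2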

Here are the steps that I followed, based on this, this, this, this and that:

$ sudo mdadm --assemble /dev/md0 /dev/sd[cd]1
mdadm: /dev/md0 has been started with 2 drives.

$ sudo mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=desktopubuntu:0 UUID=feba8274:e93bd0e4:9a754d82:65c19cf8
   devices=/dev/sdc1,/dev/sdd1

$ sudo mdadm --examine /dev/sd[cd]1
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : feba8274:e93bd0e4:9a754d82:65c19cf8
           Name : desktopubuntu:0
  Creation Time : Sat Apr  2 19:42:37 2022
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3906762928 sectors (1862.89 GiB 2000.26 GB)
     Array Size : 1953381440 KiB (1862.89 GiB 2000.26 GB)
  Used Dev Size : 3906762880 sectors (1862.89 GiB 2000.26 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=48 sectors
          State : clean
    Device UUID : e673bc23:383a992c:45571ac2:3b3e73d1

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Sep  8 01:44:26 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 4df6a1a2 - correct
         Events : 7694

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : feba8274:e93bd0e4:9a754d82:65c19cf8
           Name : desktopubuntu:0
  Creation Time : Sat Apr  2 19:42:37 2022
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3906762928 sectors (1862.89 GiB 2000.26 GB)
     Array Size : 1953381440 KiB (1862.89 GiB 2000.26 GB)
  Used Dev Size : 3906762880 sectors (1862.89 GiB 2000.26 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=48 sectors
          State : clean
    Device UUID : 8491858d:9a1cf323:d579b65f:167bbe73

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Sep  8 01:44:26 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : ef01010d - correct
         Events : 7694

   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
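A quick way to confirm at this point that the array is assembled and both members are active is to look at /proc/mdstat. I have not pasted my real capture here; the output below is only an approximation of what a healthy two-member RAID1 ([UU]) looks like:

$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdd1[1] sdc1[0]
      1953381440 blocks super 1.2 [2/2] [UU]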

So, I did this:

sudo mkdir /mnt/raid1
sudo mount /dev/md0 /mnt/raid1/
sudo chmod g+s /mnt/raid1
sudo chown -R marcelo:fileshareforall /mnt/raid1

At this point I can use it, reading and writing any file or folder.

Next, I proceeded with the following:

$ sudo mdadm --detail --scan --verbose | sudo tee -a /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=desktopubuntu:0 UUID=feba8274:e93bd0e4:9a754d82:65c19cf8
   devices=/dev/sdc1,/dev/sdd1
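(One way to verify that the append actually landed in the file, and did not get duplicated across repeated attempts, would be to list the ARRAY lines, e.g.:

$ grep -n '^ARRAY' /etc/mdadm/mdadm.conf

I have not included that output here.)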

With that in place, I unmounted the RAID partition and tested fstab with:

$ sudo mount -av
/                        : ignored
/boot/efi                : already mounted
/mnt/raid1               : successfully mounted

And:

$ lsblk -f
NAME     FSTYPE FSVER LABEL           UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
loop0    squash 4.0                                                              0   100% /snap/bare/5
...
loop20   squash 4.0                                                              0   100% /snap/notepad-plus-plus/374
sda
├─sda1   vfat   FAT32                 B421-8498                              59,5M    38% /boot/efi
├─sda2
├─sda3   ntfs         Win_SSD         D280529F805289BD
├─sda4   ntfs                         149E0BCF9E0BA876
├─sda5   ntfs                         54306E95306E7DBC
├─sda6   ntfs                         749AEA539AEA1182
└─sda7   ext4   1.0                   733b6c76-48d1-4d26-85bb-697075f27b0d
sdb
├─sdb1   ext4   1.0                   0bd7c897-38c6-4d18-ad8c-4a3d1270cfe4  158,8G     8% /
├─sdb2   ntfs                         FC8675048674C126
└─sdb3
sdc
└─sdc1   linux_ 1.2   desktopubuntu:0 feba8274-e93b-d0e4-9a75-4d8265c19cf8
  └─md0  ext4   1.0                   6f72b003-711e-47d7-8028-99b9d001ba99    1,6T     5% /mnt/raid1
sdd
└─sdd1   linux_ 1.2   desktopubuntu:0 feba8274-e93b-d0e4-9a75-4d8265c19cf8
  └─md0  ext4   1.0                   6f72b003-711e-47d7-8028-99b9d001ba99    1,6T     5% /mnt/raid1

With all pieces in place, I did:

$ sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-5.15.0-47-generic

After that line there was no further feedback, and bash returned to the $ prompt. I am assuming this is the expected behavior.
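(If it is relevant, one way to confirm that the regenerated initramfs actually picked up the mdadm configuration would be something like the line below; I have not posted that output here:

$ lsinitramfs /boot/initrd.img-5.15.0-47-generic | grep mdadm
)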

I have now completed these steps 5 times, and every round ends with a boot into Emergency Mode; I can only restore a normal boot by running nano /etc/fstab and commenting out the /mnt/raid1 entry.
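From Emergency Mode, the boot journal should show why the mount of /mnt/raid1 failed. Something like the following would pull the relevant lines (sketch only; mnt-raid1.mount is the systemd unit name that corresponds to the /mnt/raid1 mount point):

$ journalctl -xb | grep -iE 'md0|raid1'
$ systemctl status mnt-raid1.mount

I can post that output if it helps.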

Any directions? What am I missing? TIA

