This problem has happened across many of our servers that were upgraded from 16.04 to 18.04. The common configuration is that root is on LVM, with either a separate /boot partition, or separate /boot and /boot/efi partitions. For example:
$ lsblk -f
NAME                     FSTYPE      LABEL UUID                             MOUNTPOINT
sda
├─sda1                   vfat              C45F-2000                        /boot/efi
├─sda2                   ext2              a906fd59-cb58-4c94-8560-5d426e4  /boot
└─sda3                   LVM2_member       1P3Rxv-VZMx-gcs9-PlxM-DCI8-kIqr
  ├─node--007--vg-root   ext4              316678d5-aaaf-43bd-bac6-cc3aeb1  /
  ├─node--007--vg-swap_1 swap              0724b0b0-9f2d-42aa-bbe2-7b8aa31  [SWAP]
  ├─node--007--vg--na    ext4              7d42481b-f7fb-4ac6-9cf5-5df3ca17 /cache/na
  ├─node--007--vg-c      ext4              e38d96f8-6afb-4d2c-94cc-28a02e90 /cache/c
  └─node--007--vg-t      ext4              44559b67-869e-4454-b792-792c1a16 /cache/d
With kernel debug logging enabled I always see this kind of output about timing out waiting for the device:
Mar 30 16:14:22 ns1 systemd-udevd[539]: seq 3206 '/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda' is taking a long time
Mar 30 16:14:22 ns1 systemd[1]: systemd-udevd.service: Got notification message from PID 539 (WATCHDOG=1)
Mar 30 16:14:50 ns1 systemd[1]: dev-disk-by\x2duuid-a906fd59\x2dcb58\x2d4c94\x2d8560\x2d5d426e4.device: Job dev-disk-by\x2duuid-a906fd59\x2dcb58\x2d4
Mar 30 16:14:50 ns1 systemd-journald[501]: Forwarding to syslog missed 70 messages.
Mar 30 16:14:50 ns1 systemd[1]: dev-disk-by\x2duuid-a906fd59\x2dcb58\x2d4c94\x2d8560\x2d5d426e4.device: Job dev-disk-by\x2duuid-a906fd59\x2dcb58\x2d4
Mar 30 16:14:50 ns1 systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-a906fd59\x2dcb58\x2d4c94\x2d8560\x2d5d426e4.device.
Mar 30 16:14:50 ns1 systemd[1]: boot.mount: Job boot.mount/start finished, result=dependency
Mar 30 16:14:50 ns1 systemd[1]: Dependency failed for /boot.
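(In case anyone wants to reproduce this level of logging: kernel command-line parameters along these lines should work with the systemd 237 shipped in 18.04, though the exact parameter names vary across systemd versions.)

systemd.log_level=debug udev.log_priority=debug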
Swap and all the LVM volumes are mounted by this point. The boot drops into emergency mode, and pressing Ctrl-D continues the boot, after which all is good.
If I tar up all the files under /boot, unmount /boot and /boot/efi, untar the files back onto the root filesystem, remove those entries from fstab, update the initramfs, and reboot without those partitions (roughly the steps sketched below), then the node boots.
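For reference, a sketch of that workaround (the tar file path is just a placeholder, and the fstab edit needs to match your own entries):

tar -C /boot -cf /root/boot.tar .   # save everything under /boot, including the mounted /boot/efi
umount /boot/efi
umount /boot
tar -C /boot -xf /root/boot.tar     # unpack onto the root filesystem
vi /etc/fstab                       # remove or comment out the /boot and /boot/efi lines
update-initramfs -u
reboot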
I have noticed that the log line saying that /dev/sda is plugged does not appear before emergency mode is entered, even though the LVM volumes are mounted successfully. After pressing Ctrl-D to continue the boot, the line appears, and everything including sda2 mounts just fine:
Mar 30 16:15:33 ns1 systemd[1]: dev-sda.device: Changed dead -> plugged
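For what it's worth, the device state can also be checked from the emergency shell before pressing Ctrl-D; a couple of standard commands for that (nothing here is specific to my setup):

systemctl status dev-sda.device                  # reports the unit as inactive (dead) at this point
udevadm info --query=property --name=/dev/sda    # udev's view of the disk, if it has been processed yet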
Any help much appreciated.