Any idea why Debian complains about a start job failure on boot after a new install?
-
@biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:
@scottalanmiller Not sure what MPT is. I’m reusing this disk from an earlier Debian 9 install a year or so ago. I set it up with an MBR/DOS label back then, so I just didn’t change the disk label for this install.
Oh okay, should be fine.
-
@biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:
I initially tried to use this disk and another identical disk for my standard md -> LVM -> XFS install.
Older info, but might be related. XFS, LVM, GRUB, and UEFI can all be culprits here.
-
I installed Debian 10 on a VM with LVM and XFS.
Debian puts in a swap partition and uses EXT4 by default, but after changing the filesystem to XFS, this is the block list I ended up with:
Only a 10GB disk for the VM, but it works fine.
Normally I don't mess with the swap partition though.
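(A block list like that comes from lsblk or similar; a quick sketch, with the column selection being just my suggestion:)
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT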
-
@Pete-S Yeah, I tested it in a VM yesterday on my desktop in VirtualBox after I had my problems on the physical machine. Worked fine. I even did my md -> LVM -> XFS setup using two VHDs. Installed and fired right up with a nice RAID 1 array.
I can’t figure it out. It’s like there is something up with both disks. I’ve blown them out with GParted and dd. I guess I can change the disk label to GPT and see if that makes a difference. I’m at a total loss...
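(For the record, a dd wipe like that is usually something along these lines; the device name is an assumption, and it destroys everything on the disk:)
dd if=/dev/zero of=/dev/sdX bs=1M status=progress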
-
@Pete-S I’ll try later with the default install that the partitioner wants to do and see if that changes things. Man, this is a real head-scratcher.
-
I did another try, setting things up manually in the installer with RAID 1 as well.
Also works. If I were you I would have a look at the UEFI settings in your BIOS.
I usually just disable UEFI so I don't have to deal with any problems, but maybe you need it. Some BIOSes also have bugs in their UEFI implementations, so maybe upgrade the BIOS as well.
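(If you do go looking for a BIOS update, the installed firmware version can be read from a running system, e.g. as root:)
dmidecode -s bios-version
dmidecode -s bios-release-date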
-
@biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:
@Pete-S Yeah, I tested it in a VM yesterday on my desktop in VirtualBox after I had my problems on the physical machine. Worked fine. I even did my md -> LVM -> XFS setup using two VHDs. Installed and fired right up with a nice RAID 1 array.
I can’t figure it out. It’s like there is something up with both disks. I’ve blown them out with GParted and dd. I guess I can change the disk label to GPT and see if that makes a difference. I’m at a total loss...
Label shouldn't make any difference.
When reusing drives that have been in an md RAID, you can use
mdadm --zero-superblock /dev/sdX
to wipe the RAID info from them.
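In general, md metadata versions 0.90 and 1.0 store the superblock at the end of the device, so zeroing only the start of the disk with dd won't remove it. A broader cleanup sketch, with device and partition names assumed:
wipefs -a /dev/sdX
mdadm --zero-superblock /dev/sdX
mdadm --zero-superblock /dev/sdX1
-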
@Pete-S Yeah, I have the BIOS set to Legacy Boot, which I assume means that UEFI is turned off.
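(One way to confirm that assumption from the running system; the directory below only exists when the machine booted via UEFI:)
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"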
When I say “disk label” I mean the partition table type. So DOS = MBR. That is how the disk is partitioned now.
I appreciate you testing in a VM. I’ll try it again later with the default installer partitioning. If it fails to work then I don’t know...
I’ve tried to zero the md superblock after the fact, but I’m not sure it works anymore. If I boot into Debian on that disk (after waiting out the failed start job) and run that command, I get “couldn’t open for write. Not zeroing” for that drive, /dev/sda.
I swear I’ve never had issues with Debian. Very odd indeed.
-
@biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:
I’ve tried to zero the md superblock after the fact, but I’m not sure it works anymore. If I boot into Debian on that disk (after waiting out the failed start job) and run that command, I get “couldn’t open for write. Not zeroing” for that drive, /dev/sda.
If you created the RAID on the whole device, I think you should zero the superblocks on /dev/sda.
But if you created the RAID on a partition, I think you need to zero the superblocks on /dev/sda1 instead.
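Also, “couldn’t open for write” usually just means the device is busy. If you booted from /dev/sda, the kernel holds it open, so run the wipe from the installer's rescue shell or a live USB instead. A sketch covering both cases (device and array names are assumptions):
mdadm --stop /dev/md0                 # stop the array first if it auto-assembled
mdadm --zero-superblock /dev/sda      # if the array was built on the whole disk
mdadm --zero-superblock /dev/sda1     # if it was built on a partition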
-
@biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:
When I say “disk label” I mean the partition table type. So DOS = MBR. That is how the disk is partitioned now.
What you say is confusing to me. What does
fdisk -l
look like on the system?
-
@Pete-S I’m not there now, but it shows the disklabel as “dos” if I remember correctly. So the partition table type should be plain ole MBR, I believe.
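(When you're back at the machine, the table type is printed right in the fdisk output; a sketch, assuming the disk is /dev/sda:)
fdisk -l /dev/sda | grep "Disklabel type"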
-
@biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:
@Pete-S I’m not there now, but it shows the disklabel as “dos” if I remember correctly.
Please post it when you have access to the system.
And also the exact error you get in the log.
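(The failed start job should be visible with systemd's own tools once you're logged in; a couple of generic commands, nothing system-specific assumed:)
systemctl --failed
journalctl -b -p err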
-
@Pete-S Ok, I went ahead and let the installer partition it using the defaults for LVM. Everything is working!
The installer creates a small primary partition and installs /boot to it. It then creates an extended partition with the remainder of the drive and slices logical partitions out of that for the LVM. It puts “/” in vg1 as “lv root” and puts swap in vg1 as “lv swap”.
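(For reference, the manual equivalent of that default layout would be roughly the following. The device name, sizes, and LV names here are my assumptions, not what the installer literally runs.)
parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary ext2 1MiB 512MiB    # small /boot partition
parted -s /dev/sda mkpart extended 512MiB 100%
parted -s /dev/sda mkpart logical 513MiB 100%         # logical partition for the LVM PV
pvcreate /dev/sda5
vgcreate vg1 /dev/sda5
lvcreate -n root -L 8G vg1
lvcreate -n swap -L 1G vg1
mkfs.xfs /dev/vg1/root
mkswap /dev/vg1/swap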
I was not creating a /boot. Never have. I was just creating a primary partition for “/” and then saving some of it for swap on an extended partition. I’ve done this forever. It even works in a VM. I have no idea why I couldn’t get it to work on the physical machine.