    Any idea why Debian complains about a start job failure on boot after a new install?

    IT Discussion
    linux debian
• black3dynamite

Never had any issue using the default partitioning scheme, or a manual scheme like /boot, /, and a swap file.

• biggen @black3dynamite

        @black3dynamite Me neither. I normally never use /boot. Just / and swap. Been doing it for years.

• biggen @scottalanmiller

@scottalanmiller Not sure what MPT is. I’m reusing this disk from an earlier Debian 9 install a year or so ago. I set it up with an MBR/DOS label back then, so I just didn’t change the disk label for this install.

• biggen

I initially tried to use this disk and another identical disk for my standard md -> LVM -> XFS install. But the installer would always fail to install GRUB near the end, saying something like “failed to install grub to /dev/mapper”. So when that didn’t work I decided to just cut my losses and install to one drive. Now this issue cropped up.

            I’ve been working on this for days. I’m thinking there is something up with these two drives.

• scottalanmiller @biggen

              @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

@scottalanmiller Not sure what MPT is. I’m reusing this disk from an earlier Debian 9 install a year or so ago. I set it up with an MBR/DOS label back then, so I just didn’t change the disk label for this install.

              Oh okay, should be fine.

• scottalanmiller @biggen

                @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

                I initially tried to use this disk and another identical disk for my standard md -> LVM -> XFS install.

                https://askubuntu.com/questions/945337/ubuntu-17-04-will-not-boot-on-uefi-system-with-xfs-system-partition#945397

                Older info, but might be related. XFS, LVM, Grub, and UEFI can all be culprits here.

• 1337

I installed Debian 10 on a VM with LVM and XFS.

Debian creates a swap partition and uses ext4 by default, but after changing that I had this:
[image: deb10_lvm_xfs.png]

                  This is the block list:
[image: deb10_lsblk.png]

It’s only a 10 GB disk for the VM, but it works fine.

                  Normally I don't mess with the swap partition though.

• biggen @1337

@Pete-S Yeah, I tested it with a VM yesterday on my desktop in VirtualBox after I had my problems on the physical machine. Worked fine. I even did my md -> LVM -> XFS setup using two VHDs. Installed and fired right up with a nice RAID 1 array.

I can’t figure it out. It’s like there is something up with both disks. I’ve blown them out with GParted and dd. I guess I can change the disk label to GPT and see if that makes a difference. I’m at a total loss...

• biggen @1337

@Pete-S I’ll try again later with the default partitioning the installer wants to do and see if that changes things. Man, this is a real head scratcher.

• 1337

I did another try, setting things up manually in the installer with RAID 1 as well.
That also works.

[image: deb10_raid_lvm.png]

If I were you I would have a look at the UEFI settings in your BIOS.
I usually just disable UEFI so I don’t have to deal with any problems, but maybe you need it.

Some BIOSes also have bugs in their UEFI implementation, so maybe upgrade the BIOS as well.

• 1337 @biggen

                          @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

@Pete-S Yeah, I tested it with a VM yesterday on my desktop in VirtualBox after I had my problems on the physical machine. Worked fine. I even did my md -> LVM -> XFS setup using two VHDs. Installed and fired right up with a nice RAID 1 array.

I can’t figure it out. It’s like there is something up with both disks. I’ve blown them out with GParted and dd. I guess I can change the disk label to GPT and see if that makes a difference. I’m at a total loss...

The label shouldn’t make any difference.

When reusing drives that have been in an md RAID, you can use mdadm --zero-superblock /dev/sdX to wipe the RAID info from them.
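A minimal sketch of cleaning reused drives, assuming the old member devices are /dev/sda and /dev/sdb and the array name is /dev/md0 (all hypothetical names; this destroys the metadata on those drives):

```shell
# Stop any array still assembled from the old metadata
sudo mdadm --stop /dev/md0

# Wipe the md superblock from each former member device
sudo mdadm --zero-superblock /dev/sda
sudo mdadm --zero-superblock /dev/sdb

# Optionally clear any remaining filesystem/RAID/LVM signatures as well
sudo wipefs --all /dev/sda /dev/sdb
```

The wipefs step is broader than --zero-superblock: it removes every signature wipefs recognizes, not just the md one, which helps when old LVM or filesystem metadata is also confusing the installer.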

• biggen @1337

@Pete-S Yeah, I have the BIOS set to Legacy Boot, which I assume means that UEFI is turned off.

When I say “disk label” I mean the partition table type. So DOS = MBR. That is how the disk is partitioned now.

                            I appreciate you testing in a VM. I’ll try it again later with the default installer partitioning. If it fails to work then I don’t know...

I’ve tried to zero the md superblock after the fact, but I’m not sure it works anymore. If I boot into Debian on that disk (after waiting out the failed start job) and run that command, I get “couldn’t open for write, not zeroing” for that drive, /dev/sda.

                            I swear I’ve never had issues with Debian. Very odd indeed.
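That “couldn’t open for write” error usually means the kernel still has the device in use, for example because you are booted from it or an md array / LVM volume on it is still active. A sketch of doing the wipe from a live or rescue environment instead, so nothing on /dev/sda (hypothetical name) is busy:

```shell
# From a live/rescue system, NOT the install running on /dev/sda:
cat /proc/mdstat            # see which md arrays got auto-assembled
sudo vgchange -an           # deactivate any LVM volume groups
sudo mdadm --stop --scan    # stop all running md arrays

# Now the device should be free for writing
sudo mdadm --zero-superblock /dev/sda
```

The Debian installer's shell (Ctrl+Alt+F2 during install) can serve as the rescue environment if no live USB is handy.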

• 1337 @biggen

                              @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

I’ve tried to zero the md superblock after the fact, but I’m not sure it works anymore. If I boot into Debian on that disk (after waiting out the failed start job) and run that command, I get “couldn’t open for write, not zeroing” for that drive, /dev/sda.

If you created the RAID on the whole device, I think you should zero the superblock on /dev/sda.

But if you created the RAID on a partition, I think you need to zero the superblock on /dev/sda1.
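One way to check where the metadata actually lives is mdadm --examine, which prints superblock details if it finds one at that level (device names hypothetical):

```shell
# Superblock found here => the RAID was built on the raw disk
sudo mdadm --examine /dev/sda

# Superblock found here => the RAID was built on the partition
sudo mdadm --examine /dev/sda1
```

If mdadm reports no superblock at a given level, there is nothing to zero there; zero only the device or partition where --examine actually finds metadata.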

• 1337 @biggen

                                @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

                                When I say “disk label” I mean partition type. So DOS = MBR. That is how the disk is partitioned now.

What you’re saying is confusing to me. What does fdisk -l look like on the system?
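For reference, fdisk reports the partition table type in a “Disklabel type” line, so a quick check (disk name hypothetical) looks like:

```shell
sudo fdisk -l /dev/sda | grep 'Disklabel type'
# "dos" means an MBR partition table, "gpt" means GPT
```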

• biggen @1337

@Pete-S I’m not there now, but it shows the disklabel as “dos” if I remember correctly. So the partition table should be plain old MBR, I believe.

• 1337 @biggen

                                    @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

@Pete-S I’m not there now, but it shows the disklabel as “dos” if I remember correctly.

                                    Please post it when you have access to the system.

                                    And also the exact error you get in the log.

• biggen @1337

@Pete-S OK, I went ahead and let the installer partition it using the defaults for LVM. Everything is working!

The installer creates a small primary partition and installs /boot to it. It then creates an extended partition with the remainder of the drive and slices logical partitions out of that for the LVM. It puts “/” in vg1 as “lv root” and puts swap in vg1 as well, as “lv swap”.

I was not creating a /boot. Never have. I was just creating a primary partition for “/” and then saving some of the disk for an extended swap partition. I’ve done this forever. It even works in a VM. I have no idea why I couldn’t get it to work on the physical machine.
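The layout the installer produced can be sketched with parted and the LVM tools. This is only a rough reconstruction under assumptions: disk name, the 512 MiB /boot size, and the volume sizes are all hypothetical, and on an MBR disk the first logical partition is numbered 5:

```shell
DISK=/dev/sda    # hypothetical target disk -- this erases it

# MBR label, small primary /boot, extended partition holding one logical for LVM
sudo parted -s "$DISK" mklabel msdos
sudo parted -s "$DISK" mkpart primary ext4 1MiB 513MiB    # becomes /boot
sudo parted -s "$DISK" mkpart extended 513MiB 100%
sudo parted -s "$DISK" mkpart logical 514MiB 100%
sudo parted -s "$DISK" set 1 boot on

# LVM: one volume group with root and swap logical volumes
sudo pvcreate "${DISK}5"
sudo vgcreate vg1 "${DISK}5"
sudo lvcreate -n root -l 90%FREE vg1
sudo lvcreate -n swap -l 100%FREE vg1
```

Having /boot on a plain primary partition keeps the kernel and GRUB stages outside LVM, which sidesteps the class of “failed to install grub to /dev/mapper” errors described earlier in the thread.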
