
    CLVM for CentOS 7

    IT Discussion
    • AlyRagab @scottalanmiller

      @scottalanmiller I want to configure a cluster between 3 nodes running CentOS 7 using Pacemaker and shared storage.
      I have created the shared LUN on the storage and I can access it from all nodes.
      I have configured the nodes with the basic Pacemaker configuration, and the pcs status command shows that everything is fine:
      all nodes are online and there are no errors.

      On one of the nodes I created a partition "sdb1" so that I can create a GFS2 file system on it and then complete the cluster resource setup, but I faced this error:
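For GFS2 on a Pacemaker cluster, the dlm and clvmd daemons normally have to run as cloned cluster resources on every node before clustered LVM commands will work. A sketch of that setup, following the documented RHEL/CentOS 7 approach (the resource names are the conventional ones, not taken from this thread):

```shell
# Switch LVM to clustered locking (locking_type = 3) on every node.
lvmconf --enable-cluster

# dlm (distributed lock manager) must run everywhere, so it is a clone.
pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true

# clvmd depends on dlm and must also run on every node.
pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true

# clvmd must start after dlm and run on the same nodes.
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint colocation add clvmd-clone with dlm-clone
```

Without the clvmd clone running, any LVM command that touches clustered metadata fails with the locking error shown later in this thread.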

    • AlyRagab @scottalanmiller

        @scottalanmiller said in CLVM for CentOS 7:

        @AlyRagab said in CLVM for CentOS 7:

        @travisdh1 said in CLVM for CentOS 7:

        How is the block device being shared? Shouldn't be any different than a normal local drive to the system.

        The block device is shared from an iSCSI SAN storage.

        And the iSCSI device has /dev/sdb? Are you sure? Just double check that.

        Yes, it has.

        • scottalanmiller @AlyRagab

          @AlyRagab said in CLVM for CentOS 7:

          @scottalanmiller I want to configure a cluster between 3 nodes running CentOS 7 using Pacemaker and shared storage.
          I have created the shared LUN on the storage and I can access it from all nodes.
          I have configured the nodes with the basic Pacemaker configuration, and the pcs status command shows that everything is fine:
          all nodes are online and there are no errors.

          On one of the nodes I created a partition "sdb1" so that I can create a GFS2 file system on it and then complete the cluster resource setup, but I faced this error:

          Why a partition when you are using LVM?

          • AlyRagab @scottalanmiller

            @scottalanmiller Sorry, I mean I want to create the volume group based on this physical partition using CLVM, but at the moment I cannot do that.

            • travisdh1 @AlyRagab

              @AlyRagab said in CLVM for CentOS 7:

              @scottalanmiller Sorry, I mean I want to create the volume group based on this physical partition using CLVM, but at the moment I cannot do that.

              What does pvdisplay currently list?

              • AlyRagab @travisdh1

                @travisdh1 said in CLVM for CentOS 7:

                @AlyRagab said in CLVM for CentOS 7:

                @scottalanmiller Sorry, I mean I want to create the volume group based on this physical partition using CLVM, but at the moment I cannot do that.

                What does pvdisplay currently list?

                Here is the output of the pvs command:

                [root@node1 ~]# pvs
                  connect() failed on local socket: No such file or directory
                  Internal cluster locking initialisation failed.
                  WARNING: Falling back to local file-based locking.
                  Volume Groups with the clustered attribute will be inaccessible.
                  PV         VG     Fmt  Attr PSize  PFree
                  /dev/sda2  centos lvm2 a--  29.51g 44.00m
                  /dev/sdb1         lvm2 ---  60.00g 60.00g
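The "Internal cluster locking initialisation failed" warning usually means LVM is configured for clustered locking (locking_type = 3 in lvm.conf) but no clvmd is listening on the local socket, so the command falls back to file-based locking. A sketch of checks that would narrow this down (assuming the usual RHEL 7 clvmd-under-Pacemaker setup):

```shell
# locking_type = 3 tells LVM to contact clvmd over a local socket;
# "connect() failed" means nothing is listening there.
grep -E '^\s*locking_type' /etc/lvm/lvm.conf

# Check whether clvmd is actually running; on CentOS 7 it is normally
# started as a cloned Pacemaker resource, not directly via systemd.
ps -ef | grep -w [c]lvmd

# Confirm the dlm and clvmd clones exist and are started on this node.
pcs status resources
```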

                • AlyRagab

                  The pvdisplay command output, too:

                  [root@node1 ~]# pvdisplay
                    connect() failed on local socket: No such file or directory
                    Internal cluster locking initialisation failed.
                    WARNING: Falling back to local file-based locking.
                    Volume Groups with the clustered attribute will be inaccessible.
                    --- Physical volume ---
                    PV Name               /dev/sda2
                    VG Name               centos
                    PV Size               29.51 GiB / not usable 3.00 MiB
                    Allocatable           yes
                    PE Size               4.00 MiB
                    Total PE              7554
                    Free PE               11
                    Allocated PE          7543
                    PV UUID               kI6RJ1-ZfLZ-vSNW-teO3-dheF-aK8j-OKSole

                    "/dev/sdb1" is a new physical volume of "60.00 GiB"
                    --- NEW Physical volume ---
                    PV Name               /dev/sdb1
                    VG Name
                    PV Size               60.00 GiB
                    Allocatable           NO
                    PE Size               0
                    Total PE              0
                    Free PE               0
                    Allocated PE          0
                    PV UUID               P1CkZw-jq49-a6T6-mwO5-EQtz-y4dX-BYcUWq
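In this output /dev/sdb1 is initialised as a PV but belongs to no volume group yet ("Allocatable NO", zero PEs). Once clvmd is running cluster-wide, the usual next step is a clustered VG and a GFS2 filesystem on top of it; a sketch along the documented RHEL 7 lines (the VG, LV, cluster, and filesystem names here are invented examples, not from this thread):

```shell
# Create a clustered volume group on the shared PV; -cy marks it
# clustered so clvmd coordinates metadata changes across nodes.
vgcreate -cy cluster_vg /dev/sdb1

# Carve out a logical volume for the shared filesystem.
lvcreate -L 50G -n cluster_lv cluster_vg

# Make the GFS2 filesystem: lock_dlm locking, "mycluster" must match
# the Pacemaker cluster name, and 3 journals for the 3 nodes.
mkfs.gfs2 -p lock_dlm -t mycluster:gfs2fs -j 3 /dev/cluster_vg/cluster_lv
```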

                  • travisdh1 @AlyRagab

                    @AlyRagab That's looking like something is wrong with the iSCSI mount.

                    • AlyRagab @travisdh1

                      @travisdh1 said in CLVM for CentOS 7:

                      @AlyRagab That's looking like something is wrong with the iSCSI mount.

                      I am using the commands below to access the storage.
                      The IP address of the target is 192.168.1.40:

                      iscsiadm -m discovery -t sendtargets -p 192.168.1.40
                      iscsiadm -m node -T iqn.2017-05.local.server:target -p 192.168.1.40:3260 --login
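To confirm the login worked the same way on every node, and which device name the LUN actually received (it is not guaranteed to be /dev/sdb on each node), something like this could be run per node (a sketch, not from the thread):

```shell
# Print active sessions with full detail, keeping the target name and
# the attached SCSI disk for each session.
iscsiadm -m session -P 3 | grep -E 'Target:|Attached scsi disk'

# List SCSI block devices with their transport; the LUN shows tran=iscsi.
lsblk -S
```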
                      
                      • AlyRagab

                        And from the storage side:

                        # targetcli sessions
                        alias: node1.server.local       sid: 4 type: Normal session-state: LOGGED_IN
                        alias: node2.server.local       sid: 5 type: Normal session-state: LOGGED_IN
                        alias: node3.server.local       sid: 2 type: Normal session-state: LOGGED_IN
                        