CLVM for CentOS 7
-
@travisdh1 said in CLVM for CentOS 7:
How is the block device being shared? Shouldn't be any different than a normal local drive to the system.
The block device is shared from iSCSI SAN storage.
-
@AlyRagab said in CLVM for CentOS 7:
@travisdh1 said in CLVM for CentOS 7:
How is the block device being shared? Shouldn't be any different than a normal local drive to the system.
The block device is shared from iSCSI SAN storage.
And the iSCSI device has /dev/sdb? Are you sure? Just double check that.
-
@scottalanmiller said in CLVM for CentOS 7:
You are sure that you have a /dev/sdb1 device?
Yes. I have one disk in this server (/dev/sda), and after the iscsiadm login to the shared LUN on the iSCSI target I can see /dev/sdb.
-
Any reason that you are using sdb1 not sdb?
-
@scottalanmiller I want to configure a cluster between 3 nodes running CentOS 7, using Pacemaker and shared storage.
I have created the shared LUN on the storage and I can access it from all nodes.
I have configured the nodes with the basic Pacemaker configuration, and the pcs status command shows that everything is fine:
all nodes are online and there are no errors. On one of the nodes I created a partition (sdb1) so that I can create a GFS2 file system on it and then complete the cluster resource setup, but I have hit this error.
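For context, the usual CentOS 7 / RHEL 7 HA building blocks for CLVM and GFS2 on top of that basic configuration are dlm and clvmd clone resources. A rough sketch of that part of the setup (the resource names are just the conventional ones, and it assumes fencing is already configured):
# dlm and clvmd run as clones on every node; clvmd must start after dlm
pcs property set no-quorum-policy=freeze
pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint colocation add clvmd-clone with dlm-clone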
-
@scottalanmiller said in CLVM for CentOS 7:
@AlyRagab said in CLVM for CentOS 7:
@travisdh1 said in CLVM for CentOS 7:
How is the block device being shared? Shouldn't be any different than a normal local drive to the system.
The block device is shared from iSCSI SAN storage.
And the iSCSI device has /dev/sdb? Are you sure? Just double check that.
Yes, it does.
-
@AlyRagab said in CLVM for CentOS 7:
@scottalanmiller I want to configure a cluster between 3 nodes running CentOS 7, using Pacemaker and shared storage.
I have created the shared LUN on the storage and I can access it from all nodes.
I have configured the nodes with the basic Pacemaker configuration, and the pcs status command shows that everything is fine:
all nodes are online and there are no errors. On one of the nodes I created a partition (sdb1) so that I can create a GFS2 file system on it and then complete the cluster resource setup, but I have hit this error.
Why a partition when you are using LVM?
-
@scottalanmiller Sorry, I meant that I want to create the volume group on this physical partition using CLVM, but I cannot do that at the moment.
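Roughly, the steps I am aiming for once this works look like the following (only a sketch: the VG/LV names and the cluster name in the mkfs -t option are placeholders, and it assumes clvmd is running on every node):
lvmconf --enable-cluster                      # switch lvm.conf to locking_type=3 (clustered locking)
pvcreate /dev/sdb1
vgcreate -Ay -cy cluster_vg /dev/sdb1         # -cy marks the volume group as clustered
lvcreate -l 100%FREE -n cluster_lv cluster_vg
mkfs.gfs2 -j3 -p lock_dlm -t mycluster:gfs2fs /dev/cluster_vg/cluster_lv   # one journal per node; cluster name must match the Pacemaker cluster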
-
@AlyRagab said in CLVM for CentOS 7:
@scottalanmiller Sorry, I meant that I want to create the volume group on this physical partition using CLVM, but I cannot do that at the moment.
What does
pvdisplay
currently list?
-
@travisdh1 said in CLVM for CentOS 7:
@AlyRagab said in CLVM for CentOS 7:
@scottalanmiller Sorry, I meant that I want to create the volume group on this physical partition using CLVM, but I cannot do that at the moment.
What does
pvdisplay
currently list?
Here is the output of the pvs command:
[root@node1 ~]# pvs
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- 29.51g 44.00m
/dev/sdb1 lvm2 --- 60.00g 60.00g
-
And here is the pvdisplay output too:
[root@node1 ~]# pvdisplay
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible.
--- Physical volume ---
PV Name /dev/sda2
VG Name centos
PV Size 29.51 GiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 7554
Free PE 11
Allocated PE 7543
PV UUID kI6RJ1-ZfLZ-vSNW-teO3-dheF-aK8j-OKSole
"/dev/sdb1" is a new physical volume of "60.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb1
VG Name
PV Size 60.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID P1CkZw-jq49-a6T6-mwO5-EQtz-y4dX-BYcUWq
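The "connect() failed on local socket" and "Internal cluster locking initialisation failed" lines above generally mean LVM is set to clustered locking (locking_type=3) but cannot reach clvmd, so the locking daemon side looks worth checking as well. A few quick checks (assuming dlm and clvmd are meant to be managed by Pacemaker under those conventional resource names):
pcs status resources
ps -ef | grep -E 'dlm_controld|clvmd'
grep locking_type /etc/lvm/lvm.conf
-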
@AlyRagab That's looking like something is wrong with the iSCSI mount.
-
@travisdh1 said in CLVM for CentOS 7:
@AlyRagab That's looking like something is wrong with the iSCSI mount.
I am using the commands below to access the storage.
The IP address of the target is 192.168.1.40:
iscsiadm -m discovery -t sendtargets -p 192.168.1.40
iscsiadm -m node -T iqn.2017-05.local.server:target -p 192.168.1.40:3260 --login
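To double-check the session and the resulting block device on each node, something like this can be used (standard open-iscsi and util-linux commands):
iscsiadm -m session -P 3
lsblk /dev/sdb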
-
@AlyRagab and from the storage side :
# targetcli sessions
alias: node1.server.local  sid: 4  type: Normal  session-state: LOGGED_IN
alias: node2.server.local  sid: 5  type: Normal  session-state: LOGGED_IN
alias: node3.server.local  sid: 2  type: Normal  session-state: LOGGED_IN