@scottalanmiller sorry, I mean I want to create the volume group on this physical partition using CLVM, but I cannot do that at the moment.
Posts made by AlyRagab
-
RE: CLVM for CentOS 7
-
RE: CLVM for CentOS 7
@scottalanmiller said in CLVM for CentOS 7:
@AlyRagab said in CLVM for CentOS 7:
@travisdh1 said in CLVM for CentOS 7:
How is the block device being shared? Shouldn't be any different than a normal local drive to the system.
The block device is shared from an iSCSI SAN.
And the iSCSI device has /dev/sdb? Are you sure? Just double check that.
Yes, it does.
-
RE: CLVM for CentOS 7
@scottalanmiller I want to configure a cluster between 3 nodes running CentOS 7, using Pacemaker and shared storage.
I have created the shared LUN on the storage and I can access it from all nodes.
I have configured the nodes with the basic Pacemaker configuration, and the pcs status command shows that everything is fine:
all nodes are online and there are no errors. On one of the nodes I created a partition, "sdb1", so that I can create a GFS2 file system on it and then complete the cluster resource setup, but I faced this error
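The remaining GFS2 steps can be sketched roughly as follows. This is a hedged outline, not a tested recipe: the cluster name mycluster and mount point /mnt/shared are assumptions, it assumes fencing is already configured, and it formats /dev/sdb1 directly as described above.

```shell
# GFS2 needs the cluster to freeze rather than stop on quorum loss
pcs property set no-quorum-policy=freeze

# One journal per node (-j 3 for a 3-node cluster); the lock table name
# must be <cluster_name>:<fs_name>
mkfs.gfs2 -p lock_dlm -t mycluster:shared -j 3 /dev/sdb1

# Mount the file system on all nodes as a cloned cluster resource
pcs resource create sharedfs ocf:heartbeat:Filesystem device=/dev/sdb1 directory=/mnt/shared fstype=gfs2 options=noatime op monitor interval=10s clone interleave=true
```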
-
RE: CLVM for CentOS 7
@scottalanmiller said in CLVM for CentOS 7:
You are sure that you have a /dev/sdb1 device?
Yes, since I have one disk in this server ("/dev/sda"), and after the iscsiadm login to the shared LUN on the iSCSI target I can see /dev/sdb.
-
RE: CLVM for CentOS 7
@travisdh1 said in CLVM for CentOS 7:
How is the block device being shared? Shouldn't be any different than a normal local drive to the system.
The block device is shared from an iSCSI SAN.
-
CLVM for CentOS 7
I am working on CentOS 7
and want to configure lvm2-cluster for a shared block device.
When I run the command pvcreate /dev/sdb1,
I get the error below:
connect() failed on local socket: No such file or directory
Internal cluster locking initialisation failed.
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be
inaccessible.
Physical volume "/dev/sdb1" successfully created.
So, any advice?
-
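For what it's worth, the "connect() failed on local socket" message usually means clvmd is not running, so LVM falls back to local locking. A hedged sketch of the usual CentOS 7 fix, assuming a working Pacemaker/corosync cluster with fencing configured (the resource and VG names are arbitrary):

```shell
# Install the clustered LVM pieces (package names as shipped in CentOS 7)
yum install -y lvm2-cluster dlm

# Switch LVM to clustered locking (sets locking_type = 3 in /etc/lvm/lvm.conf)
lvmconf --enable-cluster

# Run dlm and clvmd as cloned cluster resources on every node
pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint colocation add clvmd-clone with dlm-clone

# With clvmd up, pvcreate should no longer fall back to local locking
pvcreate /dev/sdb1
vgcreate -cy vg_cluster /dev/sdb1   # -cy marks the VG as clustered
```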
RE: Multiple Containers
@scottalanmiller said in Multiple Containers:
@AlyRagab said in Multiple Containers:
Since Docker is designed not to be like traditional VT, there are 2 ways:
1- One process per container.
2- One container with many processes, using supervisord to do the job that systemd would do inside the container. The link below explains how we can link containers with each other, assuming we have a DB server in one container and a web server in another container and we need to link the two:
https://rominirani.com/docker-tutorial-series-part-8-linking-containers-69a4e5bf50fb#.hn73efm1p
Yes, basically you are just turning the containers into individual processes. But your OS already does that. That doesn't appear to do you any useful service - it's just complication. What is the purpose of the container, everything you are mentioning we have without containers.
But we can take advantage of the lightweight size of Docker images compared to traditional VT,
so we will save resources.
-
RE: Multiple Containers
Since Docker is designed not to be like traditional VT, there are 2 ways:
1- One process per container.
2- One container with many processes, using supervisord to do the job that systemd would do inside the container. The link below explains how we can link containers with each other, assuming we have a DB server in one container and a web server in another container and we need to link the two:
https://rominirani.com/docker-tutorial-series-part-8-linking-containers-69a4e5bf50fb#.hn73efm1p
-
RE: Zimbra Attachment and Mailbox Configuration Changes
For the attachment size, the video below may help:
Attachment Size
-
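If you prefer the CLI, the message size limit can also be raised with zmprov; this is a sketch from memory (the 50 MB value is just an example), so double-check it against your Zimbra version's documentation:

```shell
# Run as the zimbra user: raise the global MTA message size limit to ~50 MB
su - zimbra -c "zmprov modifyConfig zimbraMtaMaxMessageSize 52428800"

# Restart the MTA so Postfix picks up the new limit
su - zimbra -c "zmmtactl restart"
```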
RE: Multiple Containers
@scottalanmiller So we need to run one process per container and link them all together.
-
RE: Multiple Containers
@scottalanmiller Since the container is designed to host one process, or we can use supervisord to start many processes in that container.
-
Multiple Containers
Hi All,
I am just trying to build my custom image to run osTicket, so I will pull the centos image, then create multiple containers to host Nginx, MariaDB, and osTicket, and link all the containers with each other. Any guide on how to do that? Thanks
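A rough sketch of the linking part, using a user-defined network rather than the legacy --link flag. The names, the password, and the my-osticket-image tag are all assumptions (the osTicket image would be your custom build):

```shell
# Create one network so the containers can resolve each other by name
docker network create osticket-net

docker run -d --name db --network osticket-net -e MYSQL_ROOT_PASSWORD=secret mariadb
docker run -d --name app --network osticket-net my-osticket-image   # hypothetical custom image
docker run -d --name web --network osticket-net -p 80:80 nginx

# Inside "web", the app container is reachable at hostname "app";
# inside "app", the database is reachable at hostname "db" on port 3306.
```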
-
RE: OpenVPN issues
In server.conf, try adding the parameter below:
client-to-client
Then restart the OpenVPN service; now all OpenVPN clients will be able to ping each other.
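On CentOS 7 that would look roughly like this; the config path and the systemd instance name are assumptions that depend on how OpenVPN was installed:

```shell
# Allow VPN clients to reach each other through the server's tun interface
echo "client-to-client" >> /etc/openvpn/server.conf

# Restart the service so the change takes effect
systemctl restart openvpn@server
```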
-
RE: Docker Commit
@msff-amman-Itofficer said in Docker Commit:
@AlyRagab said in Docker Commit:
any advice to solve such thing ?
Thanks
My advice is to manage Docker from a web UI; there you can see more of what Docker is doing, which is very helpful for new Docker starters.
Launch or install Docker UI on port 9000:
docker pull uifd/ui-for-docker
docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock uifd/ui-for-docker
Then navigate from a web browser to http://docker IP:9000
Also, I think the issue is that when you exit CentOS via Ctrl-C you end the session altogether; what you want to do is run the CentOS container in the background, and then you can connect to it using another SSH client like PuTTY, or even from Docker via exec.
Connect to a running container:
docker exec -it Ubuntu_1 bash
or
docker exec -it 7e153095b94f bash
Yes, I will try everything the Web UI offers; I am now working with the docker exec command.
But another question: do you prefer working with supervisord, or running each process in a separate container?
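The background approach described above can be sketched like this (the container name mycentos is an assumption):

```shell
# Start the container detached so exiting your shell does not stop it
docker run -d --name mycentos centos sleep infinity

# Attach a shell whenever needed; exiting this shell leaves the container running
docker exec -it mycentos bash
```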
-
RE: Docker Commit
@aidan_walsh I think it can be done through
docker exec -it CONTAINER_NAME /bin/bash
I did it and installed the httpd package, and when I exit and return to it again, Apache is still installed.
-
RE: Docker Commit
@aidan_walsh When I logged into the container with
docker run -it centos /bin/bash
and installed Apache, I then logged out.
When I tried to log in again, Apache was not installed. Why?
-
RE: Docker Commit
@aidan_walsh Yes, any command executed, packages installed, and so on.
-
Docker Commit
Hi All,
In Docker, there is an issue I have: when I pull an image with the command below,
docker pull centos
I then need to run it to create a container:
docker run -it centos /bin/bash
The issue here is that if I return back to the Docker host, any changes will not be saved into that container, and the solution for that is to use the "docker commit" command.
That command will create another instance of that image, and thus it will take more space in storage, so is there any advice to solve such a thing?
Thanks
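For context: each `docker run` creates a brand-new container from the clean image, which is why the changes seem lost; they are still in the first (now stopped) container. A hedged sketch, with the container and image names being assumptions:

```shell
# First run: create a named container and make changes inside it
docker run -it --name mycentos centos /bin/bash   # e.g. yum install httpd, then exit

# The stopped container still holds the changes; restart and re-enter it
docker start mycentos
docker exec -it mycentos /bin/bash

# Or bake the changes into a reusable image (commit adds a layer on top of
# the centos image rather than duplicating the whole image)
docker commit mycentos centos-httpd
docker run -it centos-httpd /bin/bash
```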