@francesco-provino said in $450 Desktop Challenge:
optiplex
Don't do it. I think my shoulder impingement was caused by carrying a lot of OptiPlexes at my old job. My god, they were like tanks, but built to last, I must say.
So I did ask around before posting this and I did receive warnings, but I have been testing this, and it is simple, even dumb, yet it works (tested only with a Linux VM). I am sharing it here so you guys can bash me about my method, as well as enlighten me.
So you have your 2 KVM hosts, plus 1 Fedora machine with Virt-Manager.
You create all the VMs on one KVM host and call it the primary; let us call it KVM 1.
The second host, KVM 2, will just be the standby in case KVM 1 drops.
So what we do is schedule a script to:
(Optional) Stop software activity inside the VM, like the DB or web server, at a time when it is not being utilized, at midnight for example: systemctl stop httpd etc.
Freeze the filesystem (the QEMU guest agent needs to be installed in the guest):
virsh domfsfreeze VMNAME
Rsync the VM image to the other KVM host (KVM 2) each night:
rsync --progress --inplace -h SRC DEST
or
rsync --progress --inplace -h -W SRC DEST (safer; -W transfers whole files instead of computing deltas)
where SRC is the image path (e.g. /var/lib/libvirt/images/VMNAME.qcow2) and DEST is the target host path (e.g. root@kvm2:/var/lib/libvirt/images/).
Prepare the VM's XML on KVM 2 in advance (same RAM/CPU as the one on KVM 1), and copy the MAC address too, but keep the VM on KVM 2 shut down.
You can script the Fedora machine to keep pinging KVM 1 (or VM 1); if the pings stop coming back, the primary is considered down, and KVM 2 then runs virsh start on its copy, which has the same everything, even the same internal IP.
What do you think? Too cheesy? Does it show how much I am afraid to play with GlusterFS?
btw the tags are acting weird, whenever I type one it removes itself.
Hear me out for a second. I tried Docker on both CentOS and VMware Photon, integrated into an already existing ESXi infrastructure, for about 4 months in a production environment.
I loved the allure of quickly deploying stuff, like everybody else. I don't work in an organization that needs 500 MySQL installations, to be honest, but many times I face the need to deploy a LAMP or LEMP stack, so I gave it a go and started using it in production.
And all the ease of setup came back and bit me. Managing files inside Docker containers is much more difficult, but I am willing to learn that. What surprised me is that containers sometimes need a restart every now and then, especially frequently used ones, which never happened with a VM. There is also a performance hit, maybe small, but I feel it. And don't get me started on cleaning up obsolete volumes: you are better off not cleaning them at all to avoid making a mistake, and the hash-like naming scheme makes it hard to tell everything apart.
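For reference, the Docker CLI does ship prune subcommands aimed at exactly this cleanup; a minimal illustration (standard commands, shown here only as a sketch):

```shell
# List volumes not referenced by any container
docker volume ls -f dangling=true

# Remove all dangling volumes (prompts for confirmation; add -f to skip it)
docker volume prune

# Same idea for stopped containers and unused images
docker container prune
docker image prune
```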
Of course I understand that Docker really shines when you have a lot of repetition and want isolation, but it seems to me that is limited to software development companies; the rest of the world should stick to VMs (IMHO).
This reminds me a lot of the topic about whether you would rather have a commercial NAS box or a CentOS server in a NAS role. Why would you want the crippled one?
My opinion, and it may be an inexperienced one, is that if you can afford a VM, go for it, and forget about the hype surrounding Docker.
PS: I installed the latest version of Docker and kept updating it. But it still sucked.
Using a combination of:
https://www.collaboraoffice.com/code/
and
https://nextcloud.com/collaboraonline/
on the latest up-to-date CentOS 7 Minimal.
My goal is just to create the CODE instance and open it in my browser. I know it is not a functional site, more like a service, but it should open up in my browser with the help of Apache or Nginx. Neither is working, because I feel the installation steps are missing something.
CentOS 7 (at least 7.2)
Please type the following commands into the shell as root:
# import the signing key
wget https://www.collaboraoffice.com/repos/CollaboraOnline/CODE-centos7/repodata/repomd.xml.key && rpm --import repomd.xml.key
# add the repository URL to yum
yum-config-manager --add-repo https://www.collaboraoffice.com/repos/CollaboraOnline/CODE-centos7
# perform the installation
yum install loolwsd CODE-brand
First of all, I noticed that even when following the above steps I had to use yum --nogpgcheck, and after installing it and setting up Apache, it is as if loolwsd (the CODE daemon) is not starting. I tried to debug by running:
loolwsd
but it complained that it needs the lool user account, so I logged in as that user and ran it again; it asked for the SSL key/cert/chain, which I supplied, and then it gave me this:
-bash-4.2$ loolwsd
File not found: /usr/bin/discovery.xml
<shutdown>-00970 20:24:03.973598 [ loolwsd ] WRN Waking up dead poll thread [accept_poll], started: false, finished: false| ./net/Socket.hpp:507
<shutdown>-00970 20:24:03.973751 [ loolwsd ] WRN Waking up dead poll thread [accept_poll], started: false, finished: false| ./net/Socket.hpp:507
<shutdown>-00970 20:24:03.973767 [ loolwsd ] WRN Waking up dead poll thread [websrv_poll], started: false, finished: false| ./net/Socket.hpp:507
<shutdown>-00970 20:24:03.973781 [ loolwsd ] WRN Waking up dead poll thread [websrv_poll], started: false, finished: false| ./net/Socket.hpp:507
<shutdown>-00970 20:24:03.973797 [ loolwsd ] WRN Waking up dead poll thread [accept_poll], started: false, finished: false| ./net/Socket.hpp:507
Surely I am doing something wrong; the site's instructions seem very easy. I have set up Apache as a proxy, and when I browse to the normal site over HTTP it works, but HTTPS states:
This site can’t be reached
192.168.1.13 refused to connect.
Thanks in advance.
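For comparison, the reverse-proxy stanza that the Collabora documentation suggests for Apache looks roughly like the sketch below. Treat it as an assumption for this setup: 127.0.0.1:9980 is loolwsd's default listen port, and the self-signed-backend directives may or may not be needed in your case.

```apache
# Sketch of an Apache vhost proxying to loolwsd (default port 9980).
# Requires mod_proxy, mod_proxy_http, mod_proxy_wstunnel, mod_ssl.
AllowEncodedSlashes NoDecode
SSLProxyEngine On
SSLProxyVerify none            # loolwsd often runs with a self-signed cert
SSLProxyCheckPeerCN off

# static loleaflet files and the discovery endpoint
ProxyPass        /loleaflet https://127.0.0.1:9980/loleaflet retry=0
ProxyPassReverse /loleaflet https://127.0.0.1:9980/loleaflet
ProxyPass        /hosting/discovery https://127.0.0.1:9980/hosting/discovery retry=0

# websocket traffic
ProxyPassMatch "/lool/(.*)/ws$" wss://127.0.0.1:9980/lool/$1/ws nocanon

# everything else
ProxyPass        /lool https://127.0.0.1:9980/lool
ProxyPassReverse /lool https://127.0.0.1:9980/lool
```

Also worth noting: an outright "refused to connect" on HTTPS usually means nothing is listening on port 443 at all (missing SSL vhost or mod_ssl), rather than a proxy misconfiguration.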
@stacksofplates said in Centos Power Profiles?:
@emad-r said in Centos Power Profiles?:
So I was playing with Kimchi and I noticed this; I am supposed to be running a CentOS KVM host, not a guest.
I researched it, and I don't want to read 1000 pages from Red Hat, so I wondered if anybody has more info about this, and is there a CLI command to manage it?
It's probably just an interface for tuned. The names look similar.
Correct, the file is
/etc/tuned/active_profile
And there is a service called tuned; thanks for this.
And to check the available profiles:
tuned-adm list
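Alongside tuned-adm list, a couple of other standard subcommands are handy; for example (the virtual-host profile is the usual choice for a KVM hypervisor):

```shell
tuned-adm active                 # show the currently applied profile
tuned-adm profile virtual-host   # switch profile, e.g. on a KVM host
tuned-adm off                    # disable tuning entirely
```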
After reading my post again, I see that it suggested I was able to do this, but I was not.
I really want something easy like this, but it needs more research and web development skills, so I am posting it here in case someone who has those can provide easier steps and accomplish this successfully.
Hi,
I am trying to set up VNC on Fedora 26 using the steps below, and I can connect to the machine but I see a blank screen:
nano /etc/selinux/config → set SELINUX=disabled
dnf -y install tigervnc-server
firewall-cmd --add-service=vnc-server --permanent
firewall-cmd --reload
su - medo
vncpasswd
cp /lib/systemd/system/[email protected] /etc/systemd/system/[email protected]
then
nano /etc/systemd/system/[email protected]
and replace User=<USER> with User=medo (it appears twice)
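For reference, the edited unit ends up looking roughly like this. This is a sketch only: the exact template varies between tigervnc versions, but the user shows up twice (the User= line and the PIDFile path):

```ini
# /etc/systemd/system/[email protected] (excerpt, sketch)
[Service]
Type=forking
User=medo
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/bin/vncserver %i
PIDFile=/home/medo/.vnc/%H%i.pid
ExecStop=/usr/bin/vncserver -kill %i
```

After editing, run systemctl daemon-reload and then systemctl start vncserver@:1 (and enable it once it works).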
Anyone have a heads up?
@tim_g
I like it, and I do use it, but a couple of pointers:
Minimal setup can be:
wget http://kimchi-project.github.io/wok/downloads/latest/wok.el7.centos.noarch.rpm
wget http://kimchi-project.github.io/kimchi/downloads/latest/kimchi.el7.centos.noarch.rpm
Because you are downloading this on a server, and gingerbase is good and all, but it just provides monitoring and shutdown of the host, which can be optional.
Secondly, Kimchi is very modular, and you can stop it:
systemctl stop wokd
systemctl stop nginx
And you are back to vanilla KVM. What I do is rely on Cockpit and keep those packages ready if needed, starting them manually / on demand, because they do consume some RAM. Still, it is a very neat ESXi contender.
@jaredbusch said in KVM Setup:
There is not a simple single command line choice for CentOS 7 after initial installation, but it is not difficult to follow any of the numerous guides out there on the subject.
There is:
yum groupinstall Virtualization "Virtualization Platform" "Virtualization Tools"
Right from a minimal install.
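After the group install, a quick sanity check could look like this (standard libvirt commands; shown as a sketch):

```shell
systemctl enable --now libvirtd   # make sure the libvirt daemon is running
virsh list --all                  # should return an (initially empty) domain list
virsh version                     # confirms the toolchain can talk to the hypervisor
```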
Solved. Depending on your desktop environment, you need to adjust this file:
~/.vnc/xstartup
#!/bin/sh
exec startlxqt
The above is for the Fedora 26 LXQt edition.
Solution found here:
https://wiki.archlinux.org/index.php/TigerVNC
My file now contains:
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
exec startlxqt
Previously it was:
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
exec /etc/X11/xinit/xinitrc
Otherwise you will get a gray screen.
@nashbrydges said in How to setup Nginx TLS certificate based Authentication (VPN alternative):
@Emad-R Am I understanding this correctly? Is this to prevent access to a site to anyone who doesn't have the cert installed in their browser? If so, do you think this can be restricted in scope to only a single page or set of pages? For example, a public site with some admin functions via a login page: could this be used to continue to allow public access to the public pages but restrict access to the login and admin pages to only those with the cert? I suppose I'd have to use Nginx and this config only for those restricted pages and a different config for the public space (if that's even possible).
Hey Nash,
Correct, no one can see the site without installing the p12 file in their browser; they will get an error page instead.
Well, I don't use this to restrict pages per se; it is more like a front end for the whole site, the whole insecure HTTP server, and Nginx can be installed on the same machine or another one and act as a gateway for it.
Imagine a good knowledge base for a company, residing on the intranet on a local LAN machine running Apache/MySQL (think WordPress), and some people saying it would be good if they could access it remotely.
So my previous options were to use a VPN or, if the users are at another site with a static IP (rare and limited), to create firewall rules. However, using this new method I can just install Nginx, set up TLS certificate authentication, provide the users with the p12 file, and run Nginx on HTTPS as a front-end proxy for that KB site.
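The core of that Nginx front end is just two directives, ssl_client_certificate and ssl_verify_client. A minimal sketch (the server name, file paths, and backend address are placeholders, not values from the post):

```nginx
server {
    listen 443 ssl;
    server_name kb.example.com;                    # placeholder

    ssl_certificate         /etc/nginx/ssl/server.crt;
    ssl_certificate_key     /etc/nginx/ssl/server.key;

    # CA that signed the client certs; only browsers holding a
    # matching p12 get past the TLS handshake
    ssl_client_certificate  /etc/nginx/ssl/ca.crt;
    ssl_verify_client       on;

    location / {
        proxy_pass http://192.168.1.10;            # internal KB server (placeholder)
        proxy_set_header Host $host;
    }
}
```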
@aaronstuder said in How to setup Nginx TLS certificate based Authentication (VPN alternative):
Couldn't I change the number of days to something longer?
Yeah, of course; replace every 365 in the above commands with however many days you want.
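As an illustration (not the exact commands from the guide), a self-signed cert with a ten-year lifetime instead of one year would look like:

```shell
# Hypothetical example: 3650 days (~10 years) instead of 365
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout example.key -out example.crt \
    -subj "/CN=example-client"
```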
The closest thing yet is:
C:\Windows\System32>net use * https://IP:PORT/Public/
System error 1397 has occurred.
Mutual Authentication failed. The server's password is out of date at the domain controller.
So I added the self-signed SSL cert from the NAS side to my computer's trusted root authority store.
I tried adding the HTTPS IP to the reg key from:
https://support.microsoft.com/en-in/help/941050/error-message-on-a-windows-vista-based-computer-when-you-try-to-access
I also tried using this format:
net use * \\my.site.com@SSL@my_no-default-port\DavWWWRoot\my_folder <password> /USER:<my_user> /persistent:no
If I supply the user password I get this:
The operation being requested was not performed because the user has not been authenticated.
Without it I get this:
Mutual Authentication failed. The server's password is out of date at the domain controller.
I will give this a rest now; I hope it helps somebody who wants to push this forward. I am no longer interested in pursuing it and will use WinSCP, and that's that.
@creayt said in Is sharing a single network connection between two servers dumb?:
Is there much of a performance penalty for sharing the network connection into one server to another server from a port on the same card? The datacenter wants to charge me more money for a 3rd power outlet to run a switch and is only providing one Internet jack, so I would love to be able to get away with:
the cord into server A,
and a cord from server A to server B to share the connection. Each server has 4 ports.
But you can use those 4 ports to make a bond and enhance the connection's reliability, so no need for the switch, IMO.
@creayt said in Is sharing a single network connection between two servers dumb?:
Can you explain this in simple terms? If all 4 ports are depending on the same upstream single cable, how does it enhance reliability? Just in case one of the ports on my card fails, the other's'd still work? Does that happen or does the whole card fail?
Can you explain this in simple terms? If all 4 ports are depending on the same upstream single cable, how does it enhance reliability?
It doesn't in your case, because if that single cable is cut, you lose the connection to the server.
That said, how often does a good, isolated cable fail, especially CAT6A already in tube housing? I don't think I have ever witnessed that.
Just in case one of the ports on my card fails, the other's'd still work? Does that happen or does the whole card fail?
Well, this is where network bonding can help. Basically you can bond for performance or bond for redundancy (and with 4 ports I reckon you can do both). Scott wrote an article about bonding on CentOS, if I recall correctly, and ESXi has an easy GUI option for it.
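For a rough idea of what an active-backup bond looks like on CentOS 7, here is a sketch of the ifcfg files (device names and the address are placeholders; nmcli can generate equivalent files for you):

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
BOOTPROTO=none
IPADDR=192.168.1.50
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1 (one slave; repeat per port)
DEVICE=em1
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
```

Mode active-backup covers the port-failure case; mode 802.3ad or balance-alb would add throughput, but only helps when the upstream link is not the bottleneck.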
@eddiejennings said in Cant get Zoiper to register user on FreePBX:
I'm not at my computer at the moment, and I can't visualize the account registration screen. From what I remember when I've played with Zoiper, when you register an extension, you don't use the UCP credentials. You'll use the extension number and it's secret. For the SIP domain, just enter the IP of your FreePBX VM or FQDN if you have a DNS record for it.
Solved.
Thanks for both answers; I was able to make successful calls between 2 mobiles.
So I updated my standalone ESXi host to the latest version.
Has anyone noticed that the Flex UI is still slow, especially when you use it remotely? I noticed that if I connect to the UI directly via the public IP:port and open a VM's settings page, it takes much longer than:
logging in remotely as above, opening the console of any hosted VM, browsing to the host's private IP from inside it, and editing the VM settings there.
Seems rather stupid. I really hoped the latest update would help things; actually, it made them worse. Sometimes I use the vSphere C# client, and in the latest update they completely blocked it, even though viewing VMs was much faster with it.
Never mind, I forgot about VMware Remote Console (VMRC), which is the optimized way of connecting to a VM's GUI console; downloading it now.
I think VMRC is, in a minimal way, the successor to the vSphere C# client; at least one of them is still actively developed. (You can't connect an .iso file using the Flex web client without uploading the whole .iso to the host, but with the older C# client, and now with VMRC, this can still be done.)
That said, we lost the easier .ova/.ovf export method; the web UI export just gives you the .vmdk file as-is.