Everyone, please welcome my daughter Diana into the world.
She arrived at 7:26am today!
This guide assumes you already have a running Ubuntu 15.10 system on which you want to configure Xen Orchestra; everything else is documented below.
TL;DR: Run this command as root on Ubuntu 15.10...
sudo curl https://raw.githubusercontent.com/scottalanmiller/xenorchestra_installer/master/xo_install.sh | bash
During the installation of your OS, you'll want to create a user other than root; I made my user account xoadmin.
How to Install Xen Orchestra Source on Ubuntu 15.10 (self compiled) AKA MANUAL installation
sudo apt-get install --yes nfs-common
curl -sL https://deb.nodesource.com/setup_5.x | sudo -E bash -
sudo apt-get install --yes nodejs
sudo curl -o /usr/local/bin/n https://raw.githubusercontent.com/visionmedia/n/master/bin/n
sudo chmod +x /usr/local/bin/n
sudo n stable
node -v
npm -v
sudo apt-get install --yes build-essential redis-server libpng-dev git python-minimal
git clone -b stable https://github.com/vatesfr/xo-server
git clone -b stable https://github.com/vatesfr/xo-web
cd xo-server
sudo npm install && npm run build
cp sample.config.yaml .xo-server.yaml
nano .xo-server.yaml
# Edit and uncomment the mounts line so it has the right path to xo-web, because xo-server embeds an HTTP server (we assume that xo-server and xo-web are in the same directory). It's near the end of the file:
# mounts: '/': '/home/xoadmin/xo-web/dist/'
# save and exit
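Uncommented, the block should end up looking roughly like this (the path assumes xo-web was cloned to /home/xoadmin/xo-web; adjust it for your own user):
http:
  mounts:
    '/': '/home/xoadmin/xo-web/dist/'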
cd ~
cd xo-web
sudo npm i [email protected]
sudo npm install
sudo npm run build
cd ../xo-server
sudo npm start
The scripted installation, thanks to @scottalanmiller.
Below is the preferred installation method. It includes the systemd xo-server.service unit written by @Danp.
sudo bash
<password>
sudo curl https://raw.githubusercontent.com/scottalanmiller/xenorchestra_installer/master/xo_install.sh | bash
<password>
In your favorite web browser, go to this VM's IP address and log in with the default user [email protected] and the password "admin". Update your login details!!
Add your XenServer(s) and go to town.
Automatically Start XO at Boot - see this post by @Danp, also copied below.
Create the file /etc/systemd/system/xo-server.service and enter the below into it.
# systemd service for XO-Server.
[Unit]
Description= XO Server
After=network-online.target
[Service]
WorkingDirectory=/opt/xo-server/
ExecStart=/usr/local/bin/node ./bin/xo-server
Restart=always
SyslogIdentifier=xo-server
[Install]
WantedBy=multi-user.target
Save the file, and then run the following to enable the service at startup.
sudo systemctl enable xo-server.service
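If systemd doesn't pick up the new unit right away, reload it first; you can also start the service immediately without rebooting:
sudo systemctl daemon-reload
sudo systemctl start xo-server.service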
To monitor the service you can then run
journalctl -u xo-server -f -n 50
For everyone on the newer releases of the "stable build" (at least as of April 8, 2016), there appears to be a bug when attempting to mount an NFS share; to resolve this, follow the short process below, replacing nfs-server-ip-address with the actual IP of the remote server and remote-# with whatever is listed on your console.
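If you just need the share mounted by hand in the meantime, the standard nfs-common tools work; for example (using remote-0 as a stand-in for whatever your console lists, and /path/to/export as a placeholder for the server's export path):
showmount -e nfs-server-ip-address   # list what the server exports
sudo mkdir -p /mnt/remote-0          # placeholder mount point
sudo mount -t nfs nfs-server-ip-address:/path/to/export /mnt/remote-0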
Register for today's live demo April 6, 11 am PT / 2 pm ET.
Host: Alex Bykovskyi, Solutions Architect, StarWind
New StarWind Virtual SAN Free comes completely unrestricted delivering all the features you get in the commercial version. Unlike the previous free version, new VSAN Free delivers unlimited node count, features, and capacity served. It can now be used in any deployment scenario, be it Hyper-Converged, “Compute and Storage Separated”, or even a combination of both. Thanks to multiprotocol support, featuring iSCSI, SMB3, and NFS, including RDMA-capable iSER, NVMf, and SMB Direct, StarWind VSAN easily integrates into any infrastructure, be it virtualized or not. In addition, support for VVOLs, SCVMM, and ready-to-use PowerShell scripts help users to speed up and simplify automated deployment, management, and monitoring of their Virtual SAN infrastructure.
Join our Live Demo to learn how to build a completely free multi-node hyper-converged environment with StarWind and your hypervisor of choice!
*Virtualize responsibly!
Hey all!
On November 11th I'll be gaming for 24 hours straight to raise money for Golisano Children's Hospital. Please hit my page up here and donate to help raise funds.
Thanks,
DustinB
@nerdydad said in Fake Wall or Wall Closet?:
Any suggestions?
Tell your CFO to stop giving tours of your server room and focus on saving money for more important things.
Since Scott is our resident expert on said topic, I figure here would be as good a place as any to post SW articles regarding IT people who are in a position where they are unknowingly building an IPOD.
@BradfromxByte thank you for dealing with all of the back and forth and kick ass servers.
Okay, for anyone still around: I was able to get this sorted. It appears the initial file I was using was either corrupted or maybe a patch for an existing installation.
I've documented the process, copied below for reference. I won't be sharing IBM's RPMs in this post. You should be able to get these directly from IBM's website free of charge, but your mileage may vary.
Minimum System Requirements
• 4 vCPU
• 16 GiB RAM
• 80 GiB Disk Space
• 4 Network Interfaces – with DHCP or Statically Assigned IPs
• 2 Available Loop devices – Documented Below
• Default partitioning will work; it can be configured to meet any security requirements (a separate LV for /var, for example)
• Installation without a GUI recommended, with the below features:
◦ “Server Installation” option
◦ Guest Agents (drivers for hypervisor/cloud recommended)
◦ Remote Management for Linux recommended – SSH and/or Cockpit
• Root-only account – user accounts are unnecessary
• Security Policy to adhere to any State/Fed requirements (may affect Installation Destination configuration – not documented here).
Configure Timezone and any other settings as required – no specific documentation needed
Sample User: root
Password: your-password
Upon installation, check for updates and install a few required repositories and packages.
sudo dnf update -y
sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo dnf update -y
sudo dnf search schroot
sudo dnf install schroot ipvsadm kmod telnet -y
After installing the dependencies, we need to confirm our loop devices are configured.
Check which loop devices exist (likely there is only one); we'll need to create some with the commands below.
List your loop devices:
ls -l /dev/loop*
crw-rw----. 1 root disk 10, 237 Jul 24 17:49 /dev/loop-control
We only have the loop-control device, so create two more loop devices with the below.
mknod -m660 /dev/loop1 b 7 1
mknod -m660 /dev/loop2 b 7 2
Confirm the devices are listed.
ls -l /dev/loop*
brw-rw----. 1 root root 7, 1 Nov 27 08:10 /dev/loop1
brw-rw----. 1 root root 7, 2 Nov 27 08:10 /dev/loop2
crw-rw----. 1 root disk 10, 237 Nov 27 07:51 /dev/loop-control
Now transfer or download the DataPower and libgcrypt RPMs to this system using something like wget or WinSCP, depending on access. You can find libgcrypt here (https://rpmfind.net).
Once transferred, you may have to unpack the installation files.
tar -xf idg_lx10540.cd.ASL.prod.tar
Now we can install the packages.
sudo yum install idg_lx10540.image.x86_64.rpm idg_lx10540.common.x86_64.rpm
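To confirm the packages actually landed, a quick RPM query works (the package names are whatever the two RPMs above provide):
rpm -qa | grep -i idg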
Once installed, you’ll connect to the system via telnet on the system’s loopback address
telnet 127.0.0.1 2200
Initial login is: admin
Initial Password is: admin
Answer Y to all prompts, then create and confirm a new password.
You must restart the DataPower Gateway to make the Common Criteria policies effective.
idg# configure terminal;web-mgmt;admin-state enabled;local-address 0 9090;exit
Global mode
Modify Web management service configuration
Now you can go to the web console from your computer using the system's primary IP address. In our example:
https://ip-address:9090
You'll use the password you created in the telnet session. You'll have to create yet another new password.
Once the password is updated, you'll be able to log in and complete the setup by accepting the license agreement.
After accepting the license agreement, the system will need to reboot. Once it comes back up, reconnect via telnet and restart the web interface.
telnet 127.0.0.1 2200
admin
<password>
idg# configure terminal;web-mgmt;admin-state enabled;local-address 0 9090;exit
That's the complete installation process from start to finish. The last step would be to set up initialization of the DataPower service upon restart. I'll probably be working on this sometime this week so that the environment is fault tolerant.
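Since the RPM already drops a datapower.service unit onto the system, that will most likely just be the usual systemd enable; a sketch, not yet tested on my install:
sudo systemctl enable datapower.service
sudo systemctl start datapower.service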
@EddieJennings said in IBM Datapower on Linux:
I've never dealt with Datapower, but I suspect there's a configuration file related to
datapower-control
that may need some editing.
So there is a configuration file, but there is no reference at all within the conf file (/var/ibm/datapower/datapower.conf) regarding the LUKS partition.
@CCWTech I wish I had that monitor setup
As for getting everything to open on a separate monitor with the content you had open, I'm not sure offhand. I only use two monitors... and all of my content is constantly changing.
@CCWTech said in How to get Chrome to remember which monitors to open on:
I am using Windows 11. This worked fine on Ubuntu but I needed to switch to Windows in order to support other apps.
Maybe I am asking too much but I have 4 screens. I have Chrome open on each screen and about 8 tabs open on each screen.
But, when I reboot, Chrome remembers the tabs I have open, but puts them all on one screen and I have to then re-arrange each time. It's quite a pain, especially with how often Windows makes you reboot.
Is there anything I can do to get Chrome to open as I want it to?
Are your monitors set as primary and secondary? Right-click on the desktop and open Display Settings to check.
If you want your "right" monitor to be the primary, change it so it is, and then move Chrome to that screen. Close it and reopen it to see if the issue is fixed.
Does anyone have any experience with Datapower on Linux?
Simply put, it should be an installation through RPM, and I have all of the RPMs. What I'm getting hung up on is the LUKS partitions, which are apparently required, but it isn't specified what needs to be done to configure them.
From IBM:
Resource requirements on Linux hosts
To install the DataPower Gateway, the host must meet the following requirements.
To install the RPM packages, the host must be running a supported 64-bit version of Linux.
2 GiB of free storage must be available on /opt.
5 GiB of free storage must be available on /var.
At least two free loop devices are needed, with another loop device when RAID storage is used.
RAID storage, if used, must be configured in the datapower.conf file.
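Checking how many loop devices are free is just standard util-linux, nothing DataPower-specific:
losetup -a   # loop devices currently in use
losetup -f   # first free loop device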
I'm not using RAID. Here I'm showing the disk layout and the loop devices.
The installation itself is simply a yum install xxx.image.x86_64.rpm xxx.common.x86_64.rpm, after which I should have a stopped "datapower.service". Instead, the service keeps crashing because it's looking for these LUKS partitions.
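The crash loop is easy to watch with the usual systemd tooling, something like:
journalctl -xu datapower.service
The journal output I'm seeing is below.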
Nov 07 15:17:52 appconnect.localdomain systemd[1]: datapower.service: Scheduled restart job, restart counter is at 183.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Automatic restarting of the unit datapower.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Nov 07 15:17:52 appconnect.localdomain systemd[1]: Stopped DataPower Service.
-- Subject: Unit datapower.service has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit datapower.service has finished shutting down.
Nov 07 15:17:52 appconnect.localdomain systemd[1]: Starting DataPower Service...
-- Subject: Unit datapower.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit datapower.service has begun starting up.
Nov 07 15:17:53 appconnect.localdomain kernel: loop0: detected capacity change from 0 to 3774873600
Nov 07 15:17:55 appconnect.localdomain bash[105464]: Thu Nov 07 2024 15:17:55 ERR dpControl [pre-start][105464] Cannot unlock LUKS partition 'var_opt_ibm_datapower_datapower_img': Function not implemented (error 38)
Nov 07 15:17:57 appconnect.localdomain systemd[1]: datapower.service: Control process exited, code=exited status=38
Nov 07 15:17:57 appconnect.localdomain datapower-control[105506]: Thu Nov 07 2024 15:17:57 ERR dpControl [post-stop][105506] Cannot open lockfile '/var/opt/ibm/datapower/datapower.img.lck': No such file or directory
Nov 07 15:17:57 appconnect.localdomain datapower-control[105506]: Thu Nov 07 2024 15:17:57 ERR dpControl [post-stop][105506] Cannot close LUKS partition 'var_opt_ibm_datapower_datapower_img': No such device (error 19)
Nov 07 15:17:58 appconnect.localdomain datapower-control[105506]: Thu Nov 07 2024 15:17:58 ERR dpControl [post-stop][105506] No Datapower loop mounts were found. Please reboot the system and verify tha the Datapower service starts up co>
Nov 07 15:17:58 appconnect.localdomain systemd[1]: datapower.service: Control process exited, code=exited status=3
Nov 07 15:17:58 appconnect.localdomain systemd[1]: datapower.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit datapower.service has entered the 'failed' state with result 'exit-code'.
Nov 07 15:17:58 appconnect.localdomain systemd[1]: Failed to start DataPower Service.
-- Subject: Unit datapower.service has failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit datapower.service has failed.
--
-- The result is failed.
Nov 07 15:17:58 appconnect.localdomain systemd[1]: datapower.service: Service RestartSec=100ms expired, scheduling restart.
Nov 07 15:17:58 appconnect.localdomain systemd[1]: datapower.service: Scheduled restart job, restart counter is at 184.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Automatic restarting of the unit datapower.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Nov 07 15:17:58 appconnect.localdomain systemd[1]: Stopped DataPower Service.
-- Subject: Unit datapower.service has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit datapower.service has finished shutting down.
Nov 07 15:17:58 appconnect.localdomain systemd[1]: Starting DataPower Service...
-- Subject: Unit datapower.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit datapower.service has begun starting up.
Nov 07 15:17:59 appconnect.localdomain kernel: loop0: detected capacity change from 0 to 3774873600
Nov 07 15:18:01 appconnect.localdomain bash[105509]: Thu Nov 07 2024 15:18:01 ERR dpControl [pre-start][105509] Cannot unlock LUKS partition 'var_opt_ibm_datapower_datapower_img': Function not implemented (error 38)
Building out a VM for customer support work, nothing special.
@black3dynamite said in Miscellaneous Tech News:
I saw that and just had to laugh, because these people and governments don't understand what encryption means and is meant to do.
@Obsolesce said in CrowdStrike blames kernel level access on last month Microsoft outage, claims to:
@DustinB3403 said in CrowdStrike blames kernel level access on last month Microsoft outage, claims to:
@Obsolesce said in CrowdStrike blames kernel level access on last month Microsoft outage, claims to:
@DustinB3403 said in CrowdStrike blames kernel level access on last month Microsoft outage, claims to:
want to find a non-kernel based solution and that the EU is at fault.
I still say it could have been avoided if CrowdStrike had tested the change on a single device prior to releasing it publicly. It could have been a simple automated test as part of their release pipeline.
Even a better rollout strategy could have prevented it from going too far.
What's funny is that CS is now saying they have decided to start testing their releases: besides showing interest in working with Microsoft on the “kernel-level restrictions” development, they are also taking a new approach of certifying each new sensor release through the “Windows Hardware Quality Labs.”
What's also funny is that if you look at almost any open source software of similar caliber, they do all of that in their build and release pipelines or other workflows before public releases.
Exactly!