The xe-guest-utilities package in the official Fedora, CentOS, and RHEL repos isn't enabled by default. If you update and lose PV support, just run systemctl enable --now xe-linux-distribution
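For reference, a quick sketch of installing and enabling the agent (the exact package name can vary by distro and repo version):

```shell
# Install the guest agent from the distro repos (package name may vary;
# it is typically xe-guest-utilities or xe-guest-utilities-latest).
sudo dnf install -y xe-guest-utilities

# Enable the agent on every boot and start it immediately.
sudo systemctl enable --now xe-linux-distribution

# Verify it is actually running.
systemctl status xe-linux-distribution
```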
This version 4 feels snappy, like you are running on a "real" computer.
But storage speed is more like an old hard drive compared to a modern server with SSDs. Too bad they didn't put SATA or something on it.
Serving up some pages on Apache or nginx is no problem. Ansible installs fine. The packages in Raspbian seem to cover just about the entire Debian repository, for whatever odd thing you want.
It's for sure the best Raspberry Pi so far. Can be used for a lot more things than the old version 3.
I don't think bridging will cause any problems in this case. Traffic is intermittent and low speed, so even if there is more broadcast traffic sent over the VPN links compared to a routed solution, I don't think it will have any impact.
But I'll probably set up some kind of test to make sure before deploying.
Us too, as the one exception to the above "direct" piece. It's a highly secured jump box in a data center. And the customer systems are tied solely to it, not open in general.
So equipment that is not a PC (for instance switches, network appliances, printers) is managed through the computers on-site or through the jump box? Or perhaps not managed at all?
Either through tooling (i.e. not directly), or via an on-site machine (local jump station). In lots of cases for us, Ubiquiti gear can be managed through its own centralized consoles.
@Pete-S I bought the digital version a while ago, but never got around to reading it.
Well, I just plowed through it. I like to do that to quickly get up to speed and then I'll go back later.
I'm not sure how useful the book is going to be for someone already doing MSP work. But on the other hand, seeing it from another person's perspective is always useful.
I'd be curious to see if there's information that could apply to a one-man operation / side job kind of thing.
Someone needs to write a "Dummies Guide to..." kind of book.
I had a feeling that monitoring was "underutilized" compared to what is technically possible. But as always, it's the business needs and the effort (cost) that determines the service level.
Totally underutilized compared to what is possible. You are 100% spot on with that.
Edit: I'm going to put my Zabbix instance on it later and see how it does.
Databases should not be compressed!
Details as to why databases should not be compressed?
Basically because they are always open and written to incrementally. They aren't loaded and rewritten in full like most files are. And they tend to be very large, so it's a very intensive usage pattern.
True. But this compression is being done on the Host OS, not inside the Zabbix VM. I wonder what kind of strangeness this can cause. I don't have a lot of traffic on this particular server.
That doesn't affect anything. Compression is compression.
I'll find out what kind of performance hits I take with it on ZFS. So far, I'm seeing some nice space savings and no problems with anything else.
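For anyone who wants to try the same thing, here's a sketch of host-side compression on ZFS (the dataset name tank/vms is made up; substitute your own):

```shell
# Enable LZ4 compression on the dataset holding the VM disks.
# LZ4 is cheap enough on CPU that it's usually a safe default.
sudo zfs set compression=lz4 tank/vms

# Only data written after this point gets compressed; existing blocks
# stay as they are until rewritten.

# Later, check how much space compression is actually saving.
zfs get compression,compressratio tank/vms
```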
This write up sums it up well. It's a "it depends." It depends because it impacts you potentially and generally isn't very important. So therefore, there is no hard and fast guideline.
I had a client on RS through NTG - it was great for years, but then they started killing off features, like not being able to add ActiveSync to specific mailboxes anymore. Plus I think the price was increasing...
So they moved to Zoho - and they hate it.
But Scott will be quick to point out they hate it because they are using the wrong tools for the job.
On the phone - the phone's built-in email app, not the Zoho mobile app.
On the desktop - they use Outlook 2016 over IMAP - there is a constant sync issue between the Outlook client, web client, and phone client.
Other desktops - Zoho webmail - that interface looks like crap IMO. I personally like OWA, others like Gmail, etc., so this is generally a personal thing. Whatev's...
I really like their interface. Shows how people are different. I especially like the docs.
Yes, the Startech device is an industrial embedded server.
I am checking to make sure of the model of the ones that are in use now. But basically, the devices that I am thinking of, all they do is take whatever comes in over the serial port and send it to the server IP and port you set them for.
If your program is a special program that requires using a COM port, it can be set up for that too. (for instance, if your scale has to communicate over COM3 in Windows).
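As a sketch of both modes on a Linux box (all IPs and ports here are made up): raw TCP, where you just read whatever the device server forwards, and the virtual COM port style, which you can approximate with socat:

```shell
# Raw TCP mode: the device server has been configured to push
# everything it reads from the scale's serial port to this host
# on port 5001 (hypothetical port).
nc -l -p 5001 | tee scale-raw.log

# Virtual COM port style: bridge the device server's TCP stream to a
# local pty, for software that insists on opening a serial device.
# 192.168.1.50:4001 is the device server's address (hypothetical).
socat pty,link=/dev/ttyV0,raw tcp:192.168.1.50:4001
```

On Windows the equivalent is the vendor's virtual COM port driver, which makes the remote serial port show up as COM3 or whatever the software expects.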
I understand what you are saying. It's a fact though that every device that does this is a Linux/BSD computer of some kind. It doesn't take much processing power, but you need a complete TCP/IP stack inside. There are a bunch of manufacturers for these devices.
Also, if we talk about scales that you would normally use in some kind of production or quality control, nowadays they commonly have an Ethernet port either as standard or as an option. Still, any real-time processing will be on-prem. Results might be sent to the cloud, though, for presentation and final storage.
That's what I'm seeing... can't imagine when we'd want a server "somewhere" storing a bunch of random scale data.
Basically all appliance makers use SuperMicro. SM is the appliance chassis provider to the world.
Quanta has a decent run rate also. SuperMicro offers more form factors than anyone. Their T41/42 platforms were used for VxRAIL prior to Dell buying EMC.
In this case, I think Apollo and their hyper scaler stuff came from SGI who might have OEM'd SM.
Oh yeah, SM offers pretty much as many chassis as the rest of the industry combined! And they make great stuff, too.
Regarding OEMs, I saw just the other day that Oracle uses Supermicro in their hardware as well. And Citrix did too in their Netscaler, Wanscaler, etc. products.
They also have small IoT devices, so the whole range from low power embedded to hyper scale.