@AdamF said in Mesh Central:
if I don't want to put this behind any proxy
That doesn't do much anyway. There's really very little to do. It's a web page, so basically think of it like a bank website.
@AdamF said in Mesh Central:
@scottalanmiller I am missing the 2FA option in my account settings. I am missing something I suppose?
Because the name is dumb?
My Account >> Manage Authenticator App
@Pete-S said in Save shell session to disk?:
The problem is that I want to save the unix shell session on the server. Screen buffers, environment variables, history, current directory etc. So I can resume my work later from the same point.
So there are two ways to do this...
Work in an idempotent, stateless way, essentially doing functional programming at the shell. It's a huge pain and no one does this, but that is how true statelessness would be handled.
Live without the ability to survive a SERVER-side reboot and just use screen, which is designed to do exactly this (except for the reboot part). You disconnect your session and can pick it back up in situ from anywhere.
@AdamF said in Mesh Central:
@scottalanmiller said in Mesh Central:
@AdamF said in Mesh Central:
Well, this tool is amazing and just works. Nice job @Ylian !
Yeah, it's definitely the best tool for this on the market. It's blown past everyone else. We are doing the AMT integration now and rolling out vPro anywhere that we can. It's just amazing.
I know you use it for remote agents that are always installed (or at least I assume so), but are you also able to use it for "one off" remote sessions? For example, sometimes I will open a screen connect session for a quick support session. Then when finished, close the session, the end. Can we do that as well with MC?
Yes, it works fine for that. The end user just chooses "Run" instead of "Install" and it works that way.
@siringo said in New server q's:
My main question is what RAID level are people using these days & if I chose a server with spinning disks, would I look like an idiot who didn't know anything?
RAID is dependent on many factors. It's not chosen in a vacuum but in conjunction with the choice of type, controller, and disks. You don't lead with RAID; all of those choices form a singular whole.
And yes, in general, choosing spinning disks for a small system would be pretty crazy.
@siringo said in New server q's:
As an example of what I mean, the server had 32GB of RAM and I got that from 2 x 8GB and 1 x 16GB. From memory the advice was I should have used 4 x 8GB sticks.
Can anyone confirm that for me??
You generally want matching sticks and often they work in pairs or tuples. But, like the RAID, memory cannot be planned in a vacuum. You have to know your processor, motherboard and RAM options together. It's a singular choice.
@siringo said in New server q's:
Software RAID. Gee I'm outa touch, that used to be frowned upon.
It was "frowned upon" only as a myth in the Windows world. This came from the RAID in Windows being total crap and uselessly buggy. Many Windows admins, not knowing RAID, systems administration, or the broader world of computing, misassociated the problem with the concept rather than the implementation, and the resulting myth got repeated until no one ever questioned or evaluated the logic. Logically, how could software RAID be bad when hardware RAID is itself just software RAID running on a dedicated controller? If software RAID were bad, why did every enterprise storage system and server use it, always? All the big SAN systems that the same admins depended on almost universally use(d) software RAID. So in one breath people said it was bad, and in the next said it was the only thing they would use.
The issue was exacerbated by the FakeRAID market that preyed on Windows admins as well. Since storage and computing concepts were so poorly taught in the Windows world, an entire market arose for third-party software RAID products dressed up to look like hardware (though easily detectable as not), tricking admins into paying a lot for something that wasn't really a thing. So in the Windows world, FakeRAID also made admins who couldn't identify what they had blame software RAID instead of their own confusion.
@Pete-S said in SSH jump server access control?:
Or is there a possibility to limit network access depending on the user account as well? If that is the case, how is that done?
I bet you can, but we don't. So I'm not sure how. Generally you assume that access "to" the jump box means it is a trusted person already, then the additional access to the next device is limited to user access rather than network access. It's not that you trust them completely, but you don't limit their ability to launch a DoS attack or something at a network level.
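For context, the usual client-side jump setup looks something like this (a sketch only; the hostnames, usernames, and IPs are made-up examples):

```
# ~/.ssh/config on the admin's machine
Host jump
    HostName jump.example.com
    User alice

Host web1
    HostName 10.0.1.20
    User alice
    ProxyJump jump    # tunnel through the jump box transparently
```

With that in place, `ssh web1` hops through the jump box automatically, and the target server only ever sees connections arriving from the jump host.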
@Pete-S said in SSH jump server access control?:
When we use VPN for remote access, each user is assigned his own unique IP address. Network access is then controlled by network firewall rules.
So this is application level. Meaning, the port is open everywhere, access is blocked AFTER the connection. Which is how it would have to work, so that's fine. Anyone can attack your VPN, but access after getting into the VPN is limited by IP. So that's different than limiting at the SSH layer, it would be within the SSH transaction.
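The per-user-IP firewall model described above could be sketched as an nftables fragment like this (all addresses, names, and the ruleset itself are made-up examples, not anyone's actual config):

```
# On the VPN gateway: each user has a fixed tunnel IP, and rules key off it.
table inet vpn_acl {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # alice (10.8.0.10) may reach one app server, on SSH only:
        ip saddr 10.8.0.10 ip daddr 10.0.1.20 tcp dport 22 accept

        # everyone else on the VPN range gets nothing:
        ip saddr 10.8.0.0/24 drop
    }
}
```

Nothing here is visible to SSH itself; the filtering happens purely at the network layer after the VPN has already admitted the user.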
@siringo said in New server q's:
How important is the CPU? Would I need a blazingly fast one or something slower but with more cores?
We don't know. That totally depends on your workload. 99% of companies don't need blazing speed OR lots of cores.
@siringo said in New server q's:
Software RAID. Gee I'm outa touch, that used to be frowned upon. Are we talking software RAID as supplied by Windows OS or is it specialised by the OEM????
Software RAID has never been frowned on by anyone who knew anything about RAID. Software RAID was the only thing there was in the early days, and the highest-end enterprise systems have always been exclusively software RAID. Only in the Windows and VMware worlds did hardware RAID ever get a foothold, and only because they were deployed on smaller systems that lacked resources and because those platforms lacked (and still lack) viable software RAID. There has never been a time when software RAID was bad.
However, if you are considering Hyper-V, that rules software RAID out right there. Not because software RAID is bad in any way, but because Hyper-V never got it to a point where you'd put it into production. As long as you avoid Hyper-V and VMware (which you should do anyway), you have enterprise software RAID options and are good to use whatever makes sense for you.
All enterprise software RAID is part of the OS; it will never come from a third party. Not that it couldn't in theory, but market pressure says it won't. Never has, never will. The best RAID has always been built into every production OS platform except Windows and VMware, so there's never been any market for a third party to compete in. It just doesn't make sense.
I understand why you'd deploy Hyper-V: there's probably no benefit to doing a good job and a lot of risk in not doing what everyone else does. The sad state of politics over results. Education in the US is the same; they couldn't care less whether things are done well, only whether it makes someone else look bad or funnels money to wherever they are laundering it. So in your case, you aren't dealing with anything resembling IT best practices or standards or really anything you could consider production. Again, not that Hyper-V is bad, it's just... done. And done for years; the last release was three years ago and no more are coming. That's not ancient, but it's really, really old to be deploying something whose future came to a full stop years ago.
Hyper-V in your environment is technical debt. But likely they will run it long, long after it is safe because, really, who cares, and likely you will not be around to deal with any issues it causes. But it is technical debt that never should have existed (it was never a GREAT choice, only an acceptable one), and it should have been dropped as the "new" deployment choice as soon as the product was discontinued as a production release. So now it's nothing but debt: problems for their own sake without any benefit. Literally, zero.
But you probably need to do it. So you have to work within the confines of not deploying production-level systems. Hyper-V has no production-level software RAID, so since that is the choice, obviously you rule out software RAID because you are stuck with a system that lacks it. That software RAID is the better technology and costs a lot less is completely irrelevant, because your issues have nothing to do with RAID types but with the availability of implementations given your pre-chosen deployment systems.
Likewise, you used to have no option of hardware RAID on big RISC and EPIC systems; hardware RAID wasn't just considered not good there, it was never even offered. Giant systems have never had hardware RAID options, not ever. Hardware RAID was always limited to small x86 and AMD64 systems; even ARM-based systems have never had RAID hardware offered. So in the past, if you chose those big iron systems (and still today with mainframes), you ruled out hardware RAID because it didn't exist. And with Hyper-V, you rule out software RAID because, while it exists, it doesn't exist in a production-viable form.
All of that is to say...
Knowing that software RAID is excellent, that hardware RAID has hung on for the last two decades for questionable reasons, and that you were given bad info is all good to know. But it ultimately doesn't change what you are going to deploy.
You have to deploy hardware RAID on Hyper-V because those choices were made for you ahead of time not based on what is good, but on something else. It is what it is.
Your statement that software RAID was frowned upon was wrong (as far as actual storage engineers go); that it was bad was always a myth. Now you know the truth. But the truth isn't relevant here, because it's not part of your decision matrix, if you even have one.
Either you deploy what everyone else does and are stuck with their decisions (you can't rethink individual decisions without reconsidering the whole; nothing in a system can be changed in a vacuum), or you start over, follow best practices and good decision guidelines, and come up with systems bearing absolutely no resemblance to what they had before. I doubt you want to do that, so all your choices are already made for you, each depending on the last like dominoes.
@Pete-S said in SSH jump server access control?:
So someone could potentially move laterally after they have logged in to the target server. But other servers will probably only accept connections from jump servers so it would be hard. Which is on purpose of course.
If that's the limitation you/they are looking for (outside-edge IP detection gating network access as a whole), then it's a totally different game and I think it makes total sense. THAT you can control with SSH itself, no problem.
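Restricting it at the SSH layer is straightforward in sshd_config (a sketch only; the username and jump-host addresses are made-up examples):

```
# /etc/ssh/sshd_config on a target server
# alice may log in, but only from the jump hosts; everyone else is denied:
AllowUsers alice@10.0.0.10 alice@10.0.0.11

# Optionally tighten what the session can do:
Match User alice
    AllowTcpForwarding no
    X11Forwarding no
```

Unlike the firewall approach, this is enforced inside the SSH transaction itself: the TCP connection is accepted, but the login is refused unless both the user and the source address match.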
@pattonb said in ps2 to usb adapters:
Has anybody had success using ps2 to usb adapters ? ( specifically for keyboards)
It's been DECADES, but this is how they all used to be and it was 100% reliable. In the early USB days every computer was PS/2, and they just shipped these tiny USB adapters to make things work. I've probably used thousands of them. I can picture it in my mind so clearly. I know I have bins of the things in storage somewhere.
@bbigford said in Misc go-to FOSS options:
Server OS: I've bounced back and forth with CentOS before Stream (the split between 6 and 7 was weird), Ubuntu Server (seems to get a lot of hate, no idea why), Fedora Server (also seems to get some hate, not sure why), RHEL (only when the customer absolutely requires the support and can't convince them otherwise), Debian (not used a ton, not sure why, pretty barebones)
We moved to Ubuntu. The hate mostly comes from using the LTS release rather than the current one. Current is very good.
@bbigford said in Misc go-to FOSS options:
@scottalanmiller said in Misc go-to FOSS options:
@bbigford said in Misc go-to FOSS options:
TSQL: Defaulted to MySQL until some devs spun off concerned with the Oracle acquisition and started defaulting to MariaDB
Again, it's about workloads. Doing a website, MariaDB. Doing a robust application, PostgreSQL. Doing a traditional workload with only one application touching it, SQLite.
Where does MariaDB fall down with a more robust application compared to PostgreSQL? Wondering when you start to lean toward PostgreSQL.
Basically anytime I need to be doing anything other than a cached read of a site. Basic websites like WordPress are built around MariaDB. If I'm building my own code, it's always SQLite or PostgreSQL. PostgreSQL is faster, more robust, and has more features. MariaDB is targeted at read-heavy websites and blogs.
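As a sketch of the "one application touching it" case: SQLite is just a file with no server process at all (assumes the sqlite3 CLI is installed; the file and table names are arbitrary):

```shell
# Create a database file, a table, and some rows, then query it back:
sqlite3 app.db "CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT);"
sqlite3 app.db "INSERT INTO jobs (name) VALUES ('backup'), ('report');"
sqlite3 app.db "SELECT count(*) FROM jobs;"    # prints 2

rm app.db    # the entire database was just that one file
```

That zero-administration model is exactly why it fits a traditional workload with a single application, and exactly why it doesn't fit a website with many concurrent writers, where MariaDB or PostgreSQL belong.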
@siringo said in Another new server question:
So my question is, would you run the host OS instance and the VM OS instances on the SSDs (or VD1) and the storage for the VMs on spinning media?
There are VERY VERY VERY few cases where you'd use spinning media, ever. Consider that spinning disks are easily 1% OR LESS of the speed of a cheap laptop SSD. So when would you want your expensive server to be an itty, bitty fraction of the speed of a cheap laptop? Never, basically.
Spinning drives are ONLY for super low performance, archival storage and special cases like that. Backups, perhaps. But even then, super rarely.
@pmoncho said in Another new server question:
I second @notverypunny with separating the Hypervisor on its own RAID 1.
If using Dell servers, BOSS card is one possible option.
If that was free and didn't lose us storage, I'd agree. But it costs money and lowers usable storage. Except in extremely special cases where performance or reliability have to be absolutely maximized (and NO situation like that would ever, ever, ever consider Hyper-V, Windows, or making decisions based on politics over business value - so it cannot in any way apply here) I would not do that, even with spinning disks. That's why "OBR10" was something we talked about so much a decade ago. The need to split file systems just isn't a thing today.
https://smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-server-storage/
That was a full decade ago.
@notverypunny said in Another new server question:
Someone is no doubt going to chime in to say that Hyper V is basically a dead product at this point and suggest KVM, possibly xcp-ng or proxmox.
We covered that thoroughly in his first post on the subject. He's aware. There's no business or technical decision here, it's purely politics.