Managing Publicly hosted Linux Servers through Cockpit
-
@dustinb3403 said in Managing Publicly hosted Linux Servers through Cockpit:
@travisdh1 said in Managing Publicly hosted Linux Servers through Cockpit:
@dustinb3403 said in Managing Publicly hosted Linux Servers through Cockpit:
This is just something that came to mind while I was working on something else: if I had multiple Linux servers hosted in client environments (just pretend one each) that I had to manage, wouldn't using Cockpit to manage each of these servers be the best approach?
So my question for @scottalanmiller, @JaredBusch, etc. is: for these customers, do you simply access the client site through a remote tool or VPN and then do whatever? Or do you publicly host Cockpit from the client site (using a static IP) and then access Cockpit either directly on the host or through one master Cockpit administration server?
And if you are publicly hosting Cockpit, I assume you're doing so individually for each system and not tying them all together through a single administrative Cockpit interface.
#ProbablyInsane
Ansible or Salt would be my go-to instead of Cockpit, but sure, you could. I'd assume you already have a means to access the clients' networks, so just use that access.
Yeah, but Ansible or Salt would be for people who know how to use those tools; many MSPs/ITSPs still have service-desk-type folks who would be tasked with minor things like "reboot this server".
Are you really asking for those people, or for yourself?
That said, I don't know how to use Ansible or Salt, so I know my place there.
-
@dashrender said in Managing Publicly hosted Linux Servers through Cockpit:
Are you really asking for those people, or for yourself?
That said, I don't know how to use Ansible or Salt, so I know my place there.
I'm just asking in general. I'm doing some lab work and thought, damn, it sucks having to touch 1,000 Cockpit pages individually. Which of course would still occur with them all tied to a single Cockpit "server".
But then I thought, how is this being managed on a public-facing server? SSH with keypairs is the obvious answer, I assume, but even that seems tedious.
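For what it's worth, here is a minimal sketch of what "SSH with keypairs" looks like once it's scripted, assuming the paramiko library and placeholder host and key values (nothing here is specific to Cockpit):

```python
# Hedged sketch: run one command over SSH using a keypair instead of a password.
# The hostname, username, and key path are placeholders, not real values.
import paramiko

HOST = "client-a.example.com"           # placeholder public host
KEY = "/home/admin/.ssh/id_ed25519"     # placeholder private key path

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a lab; verify host keys in production
client.connect(HOST, username="admin", key_filename=KEY)

_, stdout, _ = client.exec_command("uptime")
print(stdout.read().decode().strip())
client.close()
```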
-
@dustinb3403 said in Managing Publicly hosted Linux Servers through Cockpit:
I'm just asking in general. I'm doing some lab work and thought, damn, it sucks having to touch 1,000 Cockpit pages individually. Which of course would still occur with them all tied to a single Cockpit "server".
But then I thought, how is this being managed on a public-facing server? SSH with keypairs is the obvious answer, I assume, but even that seems tedious.
How is that tedious? It should be a once-and-done thing...
-
@dashrender said in Managing Publicly hosted Linux Servers through Cockpit:
How is that tedious? It should be a once-and-done thing...
Yeah, but once and done means setting it up once for every possible system.
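The per-system setup can at least be scripted, though. A rough sketch, assuming a plain-text host list (hosts.txt is a placeholder) and the standard ssh-copy-id tool:

```python
# Hedged sketch: push an existing public key to every host in a list, once.
# hosts.txt and the key path are placeholders; ssh-copy-id asks for the
# password on the first run, and key-based auth takes over after that.
import subprocess

PUBKEY = "/home/admin/.ssh/id_ed25519.pub"  # placeholder public key

with open("hosts.txt") as f:
    hosts = [line.strip() for line in f if line.strip()]

for host in hosts:
    subprocess.run(["ssh-copy-id", "-i", PUBKEY, f"admin@{host}"], check=True)
```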
-
@dustinb3403 said in Managing Publicly hosted Linux Servers through Cockpit:
Yeah, but once and done means setting it up once for every possible system.
Ummm... yeah? I guess I'm missing something. Sure, it's a ton of work for someone who has lots of clients/client machines/endpoints, whatever... that's just the life of moving to a new tool.
Now if you deployed Salt/Ansible at the same time, you might be able to save a shit ton of work in the future when a tool change is made.
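As a rough illustration of where that pays off, here is a hedged sketch of the "reboot this server" task as an ad-hoc Ansible call driven from Python; the inventory file and host alias are placeholder names:

```python
# Hedged sketch: reboot one managed host with Ansible's ad-hoc mode.
# clients.ini and client_a_web01 are placeholders; -b escalates to root.
import subprocess

subprocess.run(
    ["ansible", "client_a_web01", "-i", "clients.ini", "-b", "-m", "reboot"],
    check=True,
)
```
-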
What most people seem to miss is that the SolarWinds attack was a supply chain attack.
That means that the tool itself was compromised, and that any tool, regardless of how you use it, is at risk for this kind of attack. It certainly doesn't have to be anything that is centrally hosted or administered.
Even SSH itself is at risk, but it's more likely to occur in tools where you have lots of source code from many sources, for instance Ansible or DevOps tooling.
-
Cockpit looks nice and all, but the version I tried didn't seem to have as many features or as much control as Webmin does.
-
@pete-s said in Managing Publicly hosted Linux Servers through Cockpit:
What most people seem to miss is that the SolarWinds attack was a supply chain attack.
That means that the tool itself was compromised, and that any tool, regardless of how you use it, is at risk for this kind of attack. It certainly doesn't have to be anything that is centrally hosted or administered.
Even SSH itself is at risk, but it's more likely to occur in tools where you have lots of source code from many sources, for instance Ansible or DevOps tooling.
Nobody was mentioning SolarWinds. They were referencing specific MSPs being breached and all of their clients being on the same networks.
The SolarWinds hack came from an injection during a build pipeline, where they modified the actual binary that was built. Ansible wouldn't be compromised that way since it's a Python package and you can just pull the Ansible source and run it. It doesn't need to be compiled.
SolarWinds is far from "DevOps tooling", and that feels like a weird thing to say since most DevOps tooling is open source and not built in private like SolarWinds.
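To make the "pull the source and run it" point concrete, here is a hedged sketch of checking a downloaded source tarball against a published checksum; the file name and digest below are placeholders, not real Ansible release values:

```python
# Hedged sketch: verify a source tarball before using it.
# Both the tarball name and the expected digest are placeholders.
import hashlib

TARBALL = "ansible-core-X.Y.Z.tar.gz"                 # placeholder file name
EXPECTED = "replace-with-the-published-sha256-value"  # placeholder checksum

with open(TARBALL, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

if digest != EXPECTED:
    raise SystemExit(f"checksum mismatch: {digest}")
print("checksum OK")
```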
-
There's a big movement now around SBOMs, with tools like in-toto, SPIFFE/SPIRE, TUF, and a lot more. We are working with gov't clients and they are headed towards requiring SBOM information for each release.
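For anyone unfamiliar with what SBOM information boils down to, here is a rough sketch that lists component names and versions from a local Python environment; it's only an illustration, not a compliant SPDX or CycloneDX document:

```python
# Hedged sketch: the core of an SBOM is simply "which components, at which
# versions, ship in this release". Here we list the local Python packages.
import json
from importlib import metadata

components = [
    {"type": "library", "name": dist.metadata["Name"], "version": dist.version}
    for dist in metadata.distributions()
]
print(json.dumps({"components": components}, indent=2))
```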
-
@stacksofplates said in Managing Publicly hosted Linux Servers through Cockpit:
There's a big movement now around SBOMs, with tools like in-toto, SPIFFE/SPIRE, TUF, and a lot more. We are working with gov't clients and they are headed towards requiring SBOM information for each release.
It's been mandated that software now include an SBOM (see my recent post in IT news).
-
@dustinb3403 said in Managing Publicly hosted Linux Servers through Cockpit:
It's been mandated that software now include an SBOM (see my recent post in IT news).
Yeah but that mandate is only for open source (for whatever dumb reason). I'm all for SBOMs for open source software, but it's ignoring the fact that the issue has historically come from closed source software. An SBOM is much less effective when you already have access to 99% of what's included in the product.
-
We are working with Platform One and some others and they want to require it for everything. Hopefully that gets more traction.
-
@stacksofplates said in Managing Publicly hosted Linux Servers through Cockpit:
Yeah but that mandate is only for open source (for whatever dumb reason). I'm all for SBOMs for open source software, but it's ignoring the fact that the issue has historically come from closed source software. An SBOM is much less effective when you already have access to 99% of what's included in the product.
Well, it mentions open source specifically, but it also targets closed source.
-
@dustinb3403 said in Managing Publicly hosted Linux Servers through Cockpit:
Well, it mentions open source specifically, but it also targets closed source.
Ah, I read the first part. It made it sound like it was only open source.
-
@stacksofplates said in Managing Publicly hosted Linux Servers through Cockpit:
Ah, I read the first part. It made it sound like it was only open source.
Not that anyone but the US Government will know what is actually included in any specific closed source software.
-
@dustinb3403 said in Managing Publicly hosted Linux Servers through Cockpit:
Not that anyone but the US Government will know what is actually included in any specific closed source software.
If enterprises are smart, they will require it too. And at that point it would hopefully just be publicly available.
-
@stacksofplates said in Managing Publicly hosted Linux Servers through Cockpit:
If enterprises are smart, they will require it too. And at that point it would hopefully just be publicly available.
While I would agree, the reality is that so many software companies are in business solely because their software is closed source.
The RHELs of the world are few and far between.
-
We use Cockpit very limitedly. It's only accessible internally, and machines are not grouped together even at clients with multiple Cockpit installs. It's nice and all, but it's not as fast as SSH and there's no real need for a GUI like this, so... why bother?
-
@scottalanmiller said in Managing Publicly hosted Linux Servers through Cockpit:
We use Cockpit very limitedly. It's only accessible internally, and machines are not grouped together even at clients with multiple Cockpit installs. It's nice and all, but it's not as fast as SSH and there's no real need for a GUI like this, so... why bother?
Completely agree.
-
@stacksofplates said in Managing Publicly hosted Linux Servers through Cockpit:
The SolarWinds hack came from an injection during a build pipeline, where they modified the actual binary that was built. Ansible wouldn't be compromised that way since it's a Python package and you can just pull the Ansible source and run it. It doesn't need to be compiled.
A supply chain attack doesn't have to modify binaries; you could modify anything. In Ansible's case, they say the weak link is the community-developed modules. That it's built on Python changes nothing.