    • Profile
    • Following 0
    • Followers 0
    • Topics 273
    • Posts 3,519
    • Groups 0

    Posts

    • RE: RAID 6 in my backup VM host on spinning rust?

      @phlipelder said in RAID 6 in my backup VM host on spinning rust?:

      @pete-s said in RAID 6 in my backup VM host on spinning rust?:

      @phlipelder said in RAID 6 in my backup VM host on spinning rust?:

      Nah, in my mind KISS applies here.
      Add the drives, expand the array, and call it a day.

      Might not be so simple. Not every PERC controller/version can grow a RAID 10 array.

      I believe the entire array needs to be restriped when doing that.

      Yeah, brain skipped after a speed bump. ;0)

      Verify the current PERC can indeed expand that array and do so keeping things as they are.

      You could copy out the contents of the SOBR, blow away the array, add the drives, set up the RAID 6 array, format, copy the data back, and finally get Veeam set up and the backups imported but that kills KISS big time.

      Mention of "replacing the server" is there, so keep the time cost ($150/hour minimum here) in mind for any changes to be made, relative to the budget for the new rig in the not too distant future.

      That's why it's faster to just put in two SSDs (we know the server has two bays free) and set up a new array.

      You have all the time in the world since both the new and the old arrays are up and running.

      As a stopgap measure you could potentially do it with two smaller SSDs and keep both arrays in use.

      posted in IT Discussion
    • RE: Time Tracking

      @gjacobse said in Time Tracking:

      It's been discussed before, and as I read back over the threads - I wonder if there is anything new and more recommended than previous solutions.

      Even with a note pad next to the phone / keyboard I have a poor habit of not managing my time logging. As it's being audited, I would like to see if there is a tool that is better suited.

      Any new thoughts on the matter as I re-read previous threads (going back to 2015).

      If you are working on shorter "issues" -> Ticketing system that has automatic time tracking.
      If you are working on longer projects -> Project management tool with time sheets.

      posted in IT Discussion
    • RE: KVM or VMWare

      @notverypunny said in KVM or VMWare:

      @dbeato said in KVM or VMWare:

      @jaredbusch It is supported; you can either pay for support or run the open source version.
      https://xcp-ng.com/

      It has been super stable compared to Xenserver/Citrix XenServer.

      Not looking to take over or diverge too much, but what stability issues did you have on Citrix? We're a 95% Citrix shop and rarely have issues with the hypervisor (knock wood). Just wondering if we're lucky or if there's something else at play.

      We have both XenServer and XCP-ng servers but don't notice any difference. It's the same code base after all.

      Never had any stability issues with either. If we did, we would have looked for something else right away.

      posted in IT Discussion
    • RE: AD/AAD and VPN integration

      @stacksofplates said in AD/AAD and VPN integration:

      @dashrender said in AD/AAD and VPN integration:

      @stacksofplates said in AD/AAD and VPN integration:

      @dashrender said in AD/AAD and VPN integration:

      @irj said in AD/AAD and VPN integration:

      @dashrender said in AD/AAD and VPN integration:

      @scottalanmiller said in AD/AAD and VPN integration:

      Ask it another way.... so you want to expose your AD infrastructure and fragility directly to the Internet? AD isn't meant to ever see light of day, the entire design of AD is that it is protected inside the LAN. If you do this, you are disabling the foundation of AD's security.

      I can understand where you're coming from - I'll even go so far as to say I agree, at least to some point.

      But the extra onus on end users is what we are trying to avoid. I guess your answer to that is - tough, suck it up, this is security we're talking about here, and security is basically the antithesis of convenience?

      The thing is, you're not exposing your AD with SAML authentication. Worst case scenario, a malicious user can spoof a session. MFA does a lot to alleviate this concern, but even MFA isn't perfect.

      There are plenty of other ways to secure SAML or verify your IdP and service provider; Azure, for example, has them in place.

      https://cheatsheetseries.owasp.org/cheatsheets/SAML_Security_Cheat_Sheet.html

      Even really basic stuff like IP filtering is helpful when authenticating SAML to a SaaS service. The attacker would have to know the IP range of the SaaS application. Again, it's not a cure-all security measure, but it helps more than you'd think.

      Also, short authentication timeouts that force you to re-authenticate after 15 or 30 minutes of inactivity are a huge help.

      I don't understand how SAML isn't exposing your AD/AAD authentication?

      Isn't it the same username/password for SAML as it is for AD/AAD?

      So let's assume a logon to M365 with MFA, let's also assume there is federation between your local AD and AAD.... So you log into M365 and it shows you on the screen that it's waiting for MFA verification - when you see that you KNOW you have the correct username and password for AD/AAD... right?

      If you're concerned with SAML, then use OpenID Connect with the authorization code flow. The user's creds are never passed through the portal and an access token is generated. Then apps can verify user authorization through a JWT token.

      I have literally zero clue what you just said.
      How does what you just said apply to a user getting on their home laptop and logging into M365? or nearly any web portal?

      (image: authorization code flow diagram)

      User creds are never passed to the system with the authorization code flow.

      OpenID Connect uses the same model as SAML, so there is no difference there. It's called HTTP redirect binding in SAML. SAML can be set up in other ways too, but that is what is commonly used.

      Either way, the user's password is never sent to or even known by the service you're connecting to.
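
      To make the flow concrete, here is a minimal PHP sketch of the token-exchange step of the authorization code flow. The endpoint URL, client ID/secret and redirect URI are placeholders, not anything from a specific IdP; the point is that only the one-time code travels through the browser and the password never reaches the application.

      ```php
      <?php
      // Minimal sketch of the OAuth2/OIDC authorization code exchange.
      // All endpoint URLs and credentials below are placeholders for illustration.

      // Step 1: the user was redirected back to our app with ?code=...
      $code = $_GET['code'] ?? '';

      // Step 2: the app (not the browser) redeems the code at the IdP's token endpoint.
      $ch = curl_init('https://idp.example.com/oauth2/token');   // placeholder endpoint
      curl_setopt_array($ch, [
          CURLOPT_POST           => true,
          CURLOPT_RETURNTRANSFER => true,
          CURLOPT_POSTFIELDS     => http_build_query([
              'grant_type'    => 'authorization_code',
              'code'          => $code,
              'redirect_uri'  => 'https://app.example.com/callback', // must match the registered URI
              'client_id'     => 'my-client-id',                     // placeholder
              'client_secret' => 'my-client-secret',                 // placeholder
          ]),
      ]);
      $tokens = json_decode(curl_exec($ch), true);
      curl_close($ch);

      // Step 3: the response contains an access token (and usually an id_token JWT).
      // The user's password is never seen by this application at any point.
      echo $tokens['access_token'] ?? 'no token returned';
      ```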

      posted in IT Discussion
    • RE: Laptops versus desktops and roaming users

      @dashrender said in Laptops versus desktops and roaming users:

      @scottalanmiller said in Laptops versus desktops and roaming users:

      @dashrender said in Laptops versus desktops and roaming users:

      @scottalanmiller said in Laptops versus desktops and roaming users:

      @pete-s said in Laptops versus desktops and roaming users:

      For the same money you get more power in the desktop.

      The enterprises I know have a mix of both. Those that may have a need for a laptop have one. The rest are predominantly desktop based. Especially if they are not office workers.

      My bigger concerns are always durability and usability. My desktop setups tend to be faster, sure, but they also don't get dropped, banged around, hinges broken, filled with coffee, etc.

      I love laptops, I'm on one now, but generally I like to have desktops for the desk and laptops on the go rather than docking stations. More money, but I think in many cases, especially more "advanced" users, it's the better way when you need to provide mobility. The laptop gets used much less, giving it more lifespan (less chance to be dropped) while also giving users a backup device.

      While I get it - damn, that's a lot of spend.

      But we get great laptops typically for $650 and desktops for like $900. So $1550 not including monitors and accoutrements. Spendy, yes, outrageous, no.

      What laptops are you getting for $650 that are worth using?

      JB posted a pic of a Ryzen 5 for $900.

      I picked up an HP home user unit from Costco in early 2020 for $600 and it was OK.
      I'm also not putting Linux on it, so I have to pay the MS tax for Windows Pro.

      Define "worth using." A quick search on Amazon showed 63 different models in the $500 to $600 range that have i3, i5, Ryzen 3 or Ryzen 5 CPUs, with 8GB or more RAM and a 128GB or larger SSD.

      posted in IT Discussion
    • RE: Need to split this string in PHP

      @dafyre said in Need to split this string in PHP:

      @pete-s said in Need to split this string in PHP:

      @dafyre said in Need to split this string in PHP:

      If the preg_match stuff is too aggravating, I have a way that might work.

      It's ugly and hacky, but I tested it with two random strings and it seems to format like you want it...

      It returns an array.

      I'm impressed by the effort!

      Some of us do not get along with regex, lol.

      I cheat...always. I try it out with something like https://regex101.com/
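
      Since the string in question isn't shown here, this is only a generic sketch of the preg_split / preg_match approach being discussed; the sample input and the patterns are made up for illustration.

      ```php
      <?php
      // Hypothetical example: split "key=value;key=value" pairs into an associative array.
      // The input string and the regular expressions are illustrative only.
      $input = 'host=server01;ip=10.0.0.5;role=backup';

      // Break the string on every semicolon, allowing surrounding whitespace.
      $pairs = preg_split('/\s*;\s*/', $input);

      $result = [];
      foreach ($pairs as $pair) {
          // Capture the key and the value on each side of the '='.
          if (preg_match('/^(\w+)=(.*)$/', $pair, $m)) {
              $result[$m[1]] = $m[2];
          }
      }

      print_r($result);
      // [host] => server01, [ip] => 10.0.0.5, [role] => backup
      ```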

      posted in IT Discussion
    • RE: SAS 10k 600GB Drive RAID Adapter

      @gjacobse said in SAS 10k 600GB Drive RAID Adapter:

      A friend has more than 30 SAS 10k 600GB drives that he'd like to see about testing for use. The only thing is that he's having some trouble finding an appropriate controller.

      If he's going to use all those 30 drives, he's going to need server class hardware to put them in. No desktop computer has that many drive bays.

      I've used some Supermicro servers that wouldn't even be half full with 30 x 2.5" drives.


      Looks like this:
      (image: Supermicro 4U chassis)
      4U with 72 x 2.5" bays


      And if you want 3.5" drive bays (fits both 2.5" and 3.5" drives):
      (image: Supermicro 4U chassis)
      4U with 36 x 3.5" bays

      posted in IT Discussion
    • RE: ProxMox eating SSDs?

      @scottalanmiller said in ProxMox eating SSDs?:

      @dashrender said in ProxMox eating SSDs?:

      Anyone run into this issue on enterprise hardware?

      There is no "issue". Even for those who claim they are running into it, it's consumer drives with HA logging going to those drives. It's nothing to do with Proxmox, it's just standard, everyday Corosync logging. The people saying "this is system administration basics" are correct.

      Or just understanding what hardware you need for the job.

      All VM guest OSes will write to the same drive as well, so 10 guests will generate 10 times as many writes, plus whatever the hypervisor itself is generating.

      I just checked and the Crucial MX500 has 0.2 DWPD, which is not bad for a consumer drive.
      But compare that to enterprise drives, which usually start at the levels below (a rough endurance comparison follows the list):

      • 1 DWPD (read-intensive)
      • 3 DWPD (mixed use)
      • 10 to 100 DWPD (write intensive)
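
      Rough endurance math behind those ratings. Only the DWPD figures come from above; the 1 TB capacity and 5-year warranty are assumed example values.

      ```php
      <?php
      // Total terabytes written (TBW) you can expect over the warranty period:
      // DWPD x capacity x 365 days x warranty years.
      function endurance_tbw(float $dwpd, float $capacity_tb, int $warranty_years): float
      {
          return $dwpd * $capacity_tb * 365 * $warranty_years;
      }

      printf("Consumer   0.2 DWPD, 1 TB, 5 yr: %.0f TBW\n", endurance_tbw(0.2, 1.0, 5)); // ~365 TBW
      printf("Enterprise 3.0 DWPD, 1 TB, 5 yr: %.0f TBW\n", endurance_tbw(3.0, 1.0, 5)); // ~5475 TBW
      ```
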
      posted in IT Discussion
    • RE: Cloudflare Spectrum alternative

      @jimmy9008 said in Cloudflare Spectrum alternative:

      One option we are considering is to make the storefront internal only. You can only get to it once you have the SSL VPN active, but that won't help remote contractors who do not have our machines/certificates to get onto the VPN.

      It's very common for global companies to use VPN for contractors to access internal systems. You need to set up some kind of on/offboarding process though.

      Having been on the contractor side, we usually get NDAs, a list of security compliance requirements that need to be fulfilled, and then VPN client software, credentials, MFA, hardware tokens, etc. But I've also seen complete VMs delivered and even ready-to-use laptops for remote system access.

      Most contractors I know run a VM for each customer, for example using VirtualBox or VMware Workstation. Then you have a clean OS and whatever software is needed for remote system access. It's usually the easiest way to handle many customers with different requirements.

      posted in IT Discussion
    • RE: New customer - greenfield setup

      @scottalanmiller said in New customer - greenfield setup:

      @dashrender said in New customer - greenfield setup:

      So the long and the short of it is - Scott is saying - no filtering is worth it, either on the employee side or the guest side.

      i.e. the firewall is not a place to provide filtering (via either IP blocking or DNS website blocking) - there is not enough value if it has any cost.

      Doing something simplish like Cloudflare's DNS filtering is worthwhile because there's no cost.

      Yeah, I think that something simple like CloudFlare or even PiHole (or combine the two) can have good value because the cost is low and the value is basic.

      You don't need a PiHole. You can set up DNS filtering policies on your free Cloudflare account.

      Just block all external DNS queries in the firewall/router. Set the router to forward DNS to Cloudflare's 1.1.1.1. Cloudflare will detect your IP and filter your DNS results based on your policies.

      https://developers.cloudflare.com/cloudflare-one/tutorials/secure-dns-network

      I haven't played with it yet but there seems to be a lot of filtering options.
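
      Not part of the Gateway setup itself, but a quick way to eyeball what Cloudflare hands back for a given name is its public DNS-over-HTTPS JSON endpoint. A small sketch; the test domain is arbitrary.

      ```php
      <?php
      // Ask Cloudflare's DNS-over-HTTPS JSON endpoint how a name resolves.
      // The domain below is an arbitrary example.
      $name = 'example.com';
      $url  = 'https://cloudflare-dns.com/dns-query?' . http_build_query([
          'name' => $name,
          'type' => 'A',
      ]);

      $ch = curl_init($url);
      curl_setopt_array($ch, [
          CURLOPT_RETURNTRANSFER => true,
          CURLOPT_HTTPHEADER     => ['accept: application/dns-json'],
      ]);
      $response = json_decode(curl_exec($ch), true);
      curl_close($ch);

      // Print each A record Cloudflare handed back.
      foreach ($response['Answer'] ?? [] as $record) {
          echo $name . ' -> ' . $record['data'] . PHP_EOL;
      }
      ```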

      posted in IT Discussion
    • RE: New customer - greenfield setup

      @travisdh1 said in New customer - greenfield setup:

      @pete-s said in New customer - greenfield setup:

      @scottalanmiller said in New customer - greenfield setup:

      @dashrender said in New customer - greenfield setup:

      So the long and the short of it is - Scott is saying - no filtering is worth it, either on the employee side or the guest side.

      i.e. the firewall is not a place to provide filtering (via either IP blocking or DNS website blocking) - there is not enough value if it has any cost.

      Doing something simplish like Cloudflare's DNS filtering is worthwhile because there's no cost.

      Yeah, I think that something simple like CloudFlare or even PiHole (or combine the two) can have good value because the cost is low and the value is basic.

      You don't need a PiHole. You can set up DNS filtering policies on your free Cloudflare account.

      Just block all external DNS queries in the firewall/router. Set the router to forward DNS to Cloudflare's 1.1.1.1. Cloudflare will detect your IP and filter your DNS results based on your policies.

      https://developers.cloudflare.com/cloudflare-one/tutorials/secure-dns-network

      I haven't played with it yet but there seems to be a lot of filtering options.

      Custom filtering without cost? That's news to me. I've known about the 1.1.1.2/1.0.0.2 and 1.1.1.3/1.0.0.3 options of course.

      Yes, they have a lot of new stuff beginning in 2020. For instance a VPN solution, a web application firewall, etc. Some things you need to pay for, some are free, depending on how many users and so on.

      They want to be everywhere on the edge for all traffic, not just be a DNS provider and a CDN solution.

      posted in IT Discussion
    • RE: New customer - greenfield setup

      @dave247 said in New customer - greenfield setup:

      @scottalanmiller said in New customer - greenfield setup:

      @dashrender said in New customer - greenfield setup:

      Of course it's really only worthwhile where we can do SSL inspection (can this be done without installing certs on the clients to allow MiTM inspection?)

      Nope, that's physically impossible. These types of devices I see as reckless because they are often poorly maintained, often made by questionable vendors (Sophos is fine, but many others are less respectable) and provide a single point of total egress of your data with nearly all assumed protections removed.

      Hey Scott, can you elaborate a bit more on that - I'm talking about the recklessness of SSL inspection. I ask because my company has a Sonicwall NSA appliance and in the past I have attempted using the "DPI-SSL" feature (deep packet inspection) which required installing the Sonicwall cert on all systems and then the traffic would be intercepted and inspected. Despite me following their guide and applying the correct settings and site exceptions, I still had some issues and ended up scrapping the effort for now. I already know your opinion on Sonicwall but I just wanted to get more insight into the whole deep packet inspection effort.

      There was a big study a couple of years ago:
      https://www.thesslstore.com/blog/https-interception-harming-security/

      Basically it's what Scott said.

      posted in IT Discussion
    • RE: sending custom CDR from FreePBX

      @travisdh1 said in sending custom CDR from FreePBX:

      @pete-s said in sending custom CDR from FreePBX:

      @jaredbusch said in sending custom CDR from FreePBX:

      @pete-s said in sending custom CDR from FreePBX:

      Long time since I saw that one 🙂
      It had a name but I have forgotten it. What was it called?

      (image: Clippy)

      I was serious this time.

      I looked it up - it was called Clippy (or officially Clippit).
      https://en.wikipedia.org/wiki/Office_Assistant

      You're too young to remember the horror of Clippy?

      1. Get off my lawn!
      2. Consider yourself lucky!

      I am lucky! Not because I'm too young but because I'm too old - too old to remember every irritating thing Microsoft managed to come up with...

      posted in IT Discussion
    • RE: Looking for simplest/secure setup for connecting a domain joined computer to corporate network when remote

      @dave247 said in Looking for simplest/secure setup for connecting a domain joined computer to corporate network when remote:

      @voip_n00b said in Looking for simplest/secure setup for connecting a domain joined computer to corporate network when remote:

      @dave247 I use certificates to only allow company owned and managed devices to connect.

      Interesting, can you elaborate more on how you achieve that?

      It's common to have certificates with VPN.

      An OpenVPN client without any MFA, for example, is usually set up so that it needs a client certificate, a username, and a password, as well as the connection info. The same goes for Cisco AnyConnect and others.

      The VPN connection uses mutual authentication, so the client authenticates that the server is who it is supposed to be, and the server authenticates that the client is who it says it is.

      If you only install the certificate on your company devices, you can't connect to the VPN just by downloading and installing the client on another computer and entering the credentials, because you don't have the certificate.

      So that's how you can control what device is allowed to connect. For more security the certificates can also be stored on smart cards, hardware devices or even the TPM module inside the computer.

      You should have something similar on NetExtender. Look for client certificate or client authentication.

      Another thing with certificates is that you can revoke a client's certificate to cut off its VPN access. Certificates also expire, so you can give someone short-term access if you like.
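
      The VPN client handles the certificate inside its own config, but the same mutual-TLS idea can be shown with any TLS client. Here is a sketch of an ordinary HTTPS request that presents a client certificate; the URL and file paths are placeholders.

      ```php
      <?php
      // Illustration of client-certificate (mutual TLS) authentication using a plain
      // HTTPS request. A VPN client performs the same kind of handshake via its own
      // config; the URL and certificate paths here are placeholders.
      $ch = curl_init('https://vpn-portal.example.com/status');     // placeholder URL
      curl_setopt_array($ch, [
          CURLOPT_RETURNTRANSFER => true,
          CURLOPT_SSLCERT        => '/etc/pki/client/laptop01.crt', // client certificate (placeholder path)
          CURLOPT_SSLKEY         => '/etc/pki/client/laptop01.key', // matching private key
          CURLOPT_CAINFO         => '/etc/pki/ca/company-ca.crt',   // CA used to verify the server
      ]);
      $body = curl_exec($ch);

      if ($body === false) {
          // Without a valid, unrevoked client certificate the TLS handshake fails here.
          echo 'TLS handshake failed: ' . curl_error($ch) . PHP_EOL;
      } else {
          echo $body . PHP_EOL;
      }
      curl_close($ch);
      ```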

      posted in IT Discussion
    • RE: Launching Windows settings, screen shot etc from URI

      @gjacobse said in Launching Windows settings, screen shot etc from URI:

      Interesting - I created a batch file that launches all of my daily applications in the office. It'll be interesting to see what I can move to this method...

      You can look at which URIs are registered to which applications by searching for "protocol", where you'll find "Choose default application by protocol".

      That's how Windows knows what program to launch when it finds something like mailto:

      You can also add your own URI scheme to launch whatever app you want. That's done in the registry.

      BTW, Ubuntu and others have the same capability to handle URIs.

      posted in IT Discussion
    • RE: Centralized Log Management

      @scottalanmiller said in Centralized Log Management:

      @braswelljay said in Centralized Log Management:

      Does not collect server, application and network logs sufficiently to respond to and investigate a cybersecurity incident

      This is not a bad thing. Collecting logs is good, centrally is best. But only if you have a team that can use them. If you had that, likely you'd already be doing this. So the question is... before doing this, do you have a team ready to leverage it? Or is this just a way to potentially spend more money with the "cyber security" guys because there's no better way to make money than getting paid to read logs.

      Standards such as ISO 27001 have requirements that logs be protected. If an intruder gains root privileges on a server, the only way to protect the logs is to have them stored somewhere else. So central logging might be a compliance issue in some cases.
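
      As an aside, shipping a copy of each log line off-box is cheap even from application code. Real deployments would normally use rsyslog/syslog-ng/journald forwarding; this is just a minimal sketch of the idea, with a placeholder host name.

      ```php
      <?php
      // Send one log line to a central syslog host over UDP so a copy exists off-box
      // even if the local machine is later compromised. Host name is a placeholder.
      $host = 'logs.example.internal';
      $sock = @fsockopen("udp://$host", 514, $errno, $errstr, 1);
      if ($sock) {
          // <134> = facility local0 (16 * 8) + severity informational (6), RFC 3164 style.
          $msg = '<134>' . date('M j H:i:s ') . gethostname() . ' myapp: user login accepted';
          fwrite($sock, $msg);
          fclose($sock);
      }
      ```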

      posted in IT Discussion
    • RE: CentOS - What is the current opinion here?

      @scottalanmiller said in CentOS - What is the current opinion here?:

      @pete-s said in CentOS - What is the current opinion here?:

      I'm curious about what workloads you are thinking about.
      I try, but I can't think of any major application that doesn't run on both Debian and Red Hat based distros.

      Zimbra is one that always gets me. RHEL / CentOS/ Ubuntu LTS only. And they've tried to block CentOS in the past, but gave up on that.

      OK, yeah, that seems to be one that is particularly sensitive.

      IMHO if an application needs heavy integration into the OS and depends on specific package versions, then it's better to turn the whole thing into a turn-key Linux appliance, like Proxmox, XCP-ng, VyOS, pfSense, FreePBX, 3CX and others have done.

      My guess is that Zimbra is getting by on mostly legacy installations though. Self-hosted email is hard to justify nowadays.

      posted in IT Discussion
    • RE: VDI Options - Modernization

      @jimmy9008

      What I've seen large corporations do is to retire their VDI solutions and find other ways to fulfill whatever they were trying to accomplish with VDI.

      So it makes sense to ask what you're trying to accomplish with VDI and to look at other ways to accomplish it.

      Any centralized solution will have limited scalability by its very nature of being centralized. That goes for your VDI solution too.

      posted in IT Discussion
    • RE: VDI Options - Modernization

      @jimmy9008 said in VDI Options - Modernization:

      @jt1001001 said in VDI Options - Modernization:

      @jimmy9008 We have a use case involving a legacy client/server app that we've determined we're going to have to go VDI for in order to secure it. One lousy app for approx 5 users that I hope we eventually move away from. We are currently reviewing Azure VDI for this and so far it will fit the bill, though we had to go through a lot of "hoops" to configure networking, VPN back into our infrastructure, etc. We have not yet presented budget numbers to the bean counters, but I'm hoping when we do they will see the $$$$$ wasted for 5 users and will force them to a new product.

      What other products do you plan to look at? Still VDI or something else? Any experience of VMWare Horizon?

      We have around 600 - 1000 users globally (mostly developers) on the VDI I need to replace. The company dictates that the VDI must be in the same datacenter as the rest of the developers' environments, so I don't think Azure VDI would work for us because of that mandate.

      If you have a solution that works, and at the moment VDI is a must, then it makes no sense to change the fundamentals of what you already have. That's just an unwarranted risk.

      So keep Citrix and VMware as is. Just replace the hardware and consolidate it. You are only averaging 16 cores and 370GB of RAM per physical server, if my math is correct. You could easily cram 3 to 8 times as much into each server. 128 cores per server is nothing special today, and neither is several TB of RAM. AMD is the leader and the way to go.

      You could replace your 20 servers and have 384 cores and up to 12TB of RAM with only three Dell R6525 or R7525 dual CPU servers. You might want 4 or more though. But no need to go to blades when you only need a couple of servers. No need for complex hypervisor management solutions either when you only have a couple of servers.
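
      The consolidation math spelled out, using the per-server figures quoted in this thread; the 4 TB of RAM per new server is just one possible configuration.

      ```php
      <?php
      // Back-of-the-envelope consolidation math with the figures quoted above.
      $current  = ['servers' => 20, 'cores_each' => 16,  'ram_gb_each' => 370];
      $proposed = ['servers' => 3,  'cores_each' => 128, 'ram_gb_each' => 4096]; // dual 64-core EPYC, 4 TB each

      printf("Current:  %d cores, %.1f TB RAM\n",
          $current['servers'] * $current['cores_each'],
          $current['servers'] * $current['ram_gb_each'] / 1024);
      printf("Proposed: %d cores, %.1f TB RAM\n",
          $proposed['servers'] * $proposed['cores_each'],
          $proposed['servers'] * $proposed['ram_gb_each'] / 1024);
      // Current:  320 cores, 7.2 TB RAM
      // Proposed: 384 cores, 12.0 TB RAM
      ```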

      Use vSAN instead of a SAN for the VDI. With the proper drives these servers are certified for ESXi and vSAN. You should use U.2 NVMe drives and avoid SAS. It will outperform your old SAN - by a lot.

      Since you have 1 PB of data, storage for non-VDI workloads needs to be researched. I think I would want to separate VDI from the rest. Gut feeling would be to have completely separate physical environments for everything VDI related and the rest. Consolidation is good but overconsolidation can be too risky.

      posted in IT Discussion
    • RE: VDI Options - Modernization

      @scottalanmiller said in VDI Options - Modernization:

      @pete-s said in VDI Options - Modernization:

      I'm not talking about cached files here but client-side databases and local storage as defined in HTML5. Another reason you might insert VDI into the chain.

      Worth pointing out that this "should be" a configuration thing and not something you need heavy VDI to work around. But here in the real world, it isn't always configurable and VDI can be used to deal with that.

      Yeah, it depends entirely on what the HTML/JavaScript code looks like, which in most cases depends on what framework was used.

      It was easier to keep track of the data when an HTML browser was as dumb as a VT100 terminal.

      posted in IT Discussion