    • Following 0
    • Followers 3
    • Topics 153
    • Posts 9,424
    • Groups 0

    Posts

    • RE: Why Virtualize?

      @RojoLoco said in Why Virtualize?:

      @Obsolesce said in Why Virtualize?:

      @IRJ said in Why Virtualize?:

      @RojoLoco said in Why Virtualize?:

      The simple answer to "why virtualize" is that if you don't, everyone here will make fun of you (ask me how I know).

      It is difficult to have a valid reason to run a physical server anymore.

      There's always a snowflake reason, but you should always virtualize unless you have super specific reasons not to.

      When I first got my current job, those reasons were mostly "the boss says 1:1 physical systems". I later found out that like 10+ years ago they got SCREWED by a poorly implemented VM Ware setup for production systems. Cost them some customers and a bunch of money, so it kinda made sense. I've been working really hard to break them out of their late 90s mindset.

      Yeah, that's not your fault and totally out of your hands. It's extremely hard to change the mind of older generations no matter how right you are and no matter how grossly wrong they are. There's usually a lot of emotion in their reasoning as well.

      posted in IT Discussion
      Obsolesce
    • RE: Microsoft Fail - SQL Server on Linux does not log successful logins

      It's not just about threats. Successful logins are also about audit trails, traceability, accountability, etc. In many places, policy dictates that all logins are recorded as well.

      You always want to know who is logging into a system, even more so than who is failing.

      posted in IT Discussion
      Obsolesce
    • RE: Windows Server - average RAM, vCPU allocation?

      @Pete-S said in Windows Server - average RAM, vCPU allocation?:

      rather those mundane servers that constitute maybe 80% of all VMs

      These are pretty much always Linux VMs. Otherwise, they are the crazy high-requirements Windows Server VMs.

      Well, except the Windows infrastructure servers like AD/DNS/DHCP/etc. Then yeah, as others said, 2-4 GB RAM, 2 vCPU.

      posted in IT Discussion
      Obsolesce
    • RE: Hyper V Tape passthrough possible?

      @Pete-S said in Hyper V Tape passthrough possible?:

      @Obsolesce said in Hyper V Tape passthrough possible?:

      @JaredBusch said in Hyper V Tape passthrough possible?:

      @Obsolesce said in Hyper V Tape passthrough possible?:

      @Pete-S said in Hyper V Tape passthrough possible?:

      @Donahue said in Hyper V Tape passthrough possible?:

      I'm not currently using tapes yet. I bought some LTO-7 tapes, but they've been on my shelf for like a year because other projects came up. I had been planning like 1 a week or some similar interval.

      LTO-7 is 6TB native / 15TB compressed so still a lot.

      Do you have a lot of VMs running on it? You might want to consider going bare metal. Solves your problem without any hassle.

      Or as mentioned, just get a second controller for the tape and pass that through the hypervisor. You only need a simple HBA SAS-2 (6Gbps) for a tape drive.

      Keep in mind the data going to the tape is likely already compressed, so expect to get the native amount of "backup data" onto the tape.

      No, tape compression is way better.

      Wasn't the case for me when I was using Veeam, with tape compression after Veeam's compression. I got like no tape compression at all.

      The tape drive is designed to not compress already compressed data. It makes this determination in real-time as the data comes in.

      Thing is that the tape drive compresses the data on the fly. So there is no point wasting CPU resources trying to compress files before sending it to tape. The tape drive will take the 900 MByte/sec of data that you send to it and compress it down to 360 MByte/sec that gets written to tape. The 360 MByte/sec is the real limit, while the 900 MByte/sec is actually variable and depending on the compression ratio.

      And then when you want to read from the tape it will do the opposite, read 360 MByte/sec and decompress it for you on the fly and deliver 900 MByte/sec back to you (if the data is compressed by the drive).

      If you send data that can't be compressed it will only take 360MByte/sec and read 360 MByte/sec. So you gain nothing by using compression in the backup software. Unless there is some other reason to do it of course. Like sending the backup over the WAN before it goes to tape.

      The thing is, the production data wasn't going directly to tape. Otherwise, yes, it'd make sense to not have the backup software do the compression and let the tape drive do it.

      Production data was going to a backup repository first (on-prem backup storage). That was super fast DAS; Veeam could back it up quickly and compress it very well, so there was more room for backups. We needed on-prem backups so we could restore data much faster than if it had been off-site on a tape, plus it would cost money to get the tape back.

      Then, what went to tape and was rotated off-site was the backup repo. So that's why it was compressed first.

      It was an actual backup plan, not the typical ad-hoc style backup.

      X amount of daily or weekly backups were held on-prem depending on the data. Then that was thrown on to tape once a month and rotated back eventually. I think there were 3 sets, so restores could go back about 4 months depending on when something were to go sour.

      Restores were great coming from on-prem. Everything tested well.
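      Pete-S's throughput numbers above can be turned into a quick back-of-envelope model. This is an illustrative sketch (the function name and the 2.5:1 ratio are assumptions, not vendor specs), written in Python for brevity:

```python
def tape_write_time_hours(data_gb, already_compressed, native_mb_s=360, ratio=2.5):
    """Rough model of the drive behaviour described above: the drive always
    writes at its native rate; compressible data is shrunk ~2.5:1 on the fly
    (an assumed nominal LTO ratio), so data that is already compressed by the
    backup software gains nothing at the tape layer."""
    effective_mb_s = native_mb_s if already_compressed else native_mb_s * ratio
    return (data_gb * 1024) / effective_mb_s / 3600

# 6 TB (LTO-7 native capacity): raw compressible data streams in much
# faster than data already compressed by the backup software.
print(round(tape_write_time_hours(6144, already_compressed=False), 1))  # ~1.9 hours
print(round(tape_write_time_hours(6144, already_compressed=True), 1))   # ~4.9 hours
```

With the post's 900/360 MByte/sec figures, compressible data streams in roughly 2.5x faster than pre-compressed data, which is why compressing in Veeam first gains nothing at the tape drive itself.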

      posted in IT Discussion
      Obsolesce
    • RE: Remove-Item cannot remove crap in Documents folder

      @JaredBusch
      I had a little bit of fun... whether useful to you or not.

      You can run this script as a regular user that has permissions to create and run scheduled tasks and to create a file in the specified directory.

      This will create a PowerShell script and a scheduled task to run the script as the SYSTEM account. Then it will delete the script and the scheduled task.

      I could test most of it, but not some of it for obvious reasons.

      <#---- CHANGE THESE VARS: ----#>
      
      # Users to exclude from profile manipulation script, separated by pipe:
      $excludedKnownUsers = "Administrator|SpecialUser1"
      
      # New Script:
      $newLocalScriptPath = "$ENV:SystemDrive\scripts"
      $newLocalScriptFile = "testScript.ps1"
      
      # SID ending: (likely 21 if domain users)
      $sidEnd = 21
      
      # Scheduled Task Name:
      $TaskName = "_Test Task 1"
      
      # Scheduled Task Description:
      $Description = "This is a test scheduled task that runs as the SYSTEM account and will be run and then deleted at the end of this script."
      
      <#-------- END CHANGE --------#>
      
      # New Script:
      $newLocalScript = "$newLocalScriptPath\$newLocalScriptFile"
      
      # Gathers list of user profile paths:
      $userPaths = Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\*" -ErrorAction SilentlyContinue | Where-Object {($_.PSChildName -split '-')[3] -eq $sidEnd -and ($_.ProfileImagePath -split "\\")[2] -notmatch $excludedKnownUsers}
      
      # Creates a 'script in memory':
      $testScript = $null
      foreach ($userPath in $userPaths.ProfileImagePath) {
          $testScript += "Remove-Item -Path '$userPath\Documents' -Force -Recurse`n"
          $testScript += "New-Item -ItemType Junction -Path '$userPath' -Name 'Documents' -Target '$userPath\Nextcloud\Documents' -Force`n"
      }
      
      # Create a PowerShell script and save it as specified in vars:
      if (-not(Test-Path $newLocalScript)) {New-Item -Force $newLocalScript}
      $testScript | Out-File $newLocalScript -NoNewline -Force
      
      # Task Action:
      $Action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File $newLocalScript"
      
      # Task Trigger: (task will be manually run immediately and then deleted, so keep 1 year out)
      $Trigger = New-ScheduledTaskTrigger -Once -At (Get-Date).AddYears(1)
      
      # Task Compatibility: 
      $Compatibility = "Win8" # 'Win8' is 'Windows 10' in the GUI
      
      # Task Settings:
      $Settings = New-ScheduledTaskSettingsSet -Compatibility $Compatibility -StartWhenAvailable -AllowStartIfOnBatteries
      
      # Run task as local SYSTEM account with highest privileges:
      $Principal = New-ScheduledTaskPrincipal -UserId 'S-1-5-18' -RunLevel Highest
      
      # Create the scheduled task:
      Register-ScheduledTask -TaskName $TaskName -Description $Description -Action $Action -Trigger $Trigger -Settings $Settings -Principal $Principal -Force
      
      <#--------------------------#>
      
      # Run the scheduled task and wait for it to finish before cleaning up:
      Get-ScheduledTask -TaskName $TaskName | Start-ScheduledTask
      while ((Get-ScheduledTask -TaskName $TaskName).State -eq 'Running') { Start-Sleep -Seconds 1 }
      
      # Remove the created script:
      Remove-Item $newLocalScript -Force
      
      # Delete the scheduled task:
      Get-ScheduledTask -TaskName $TaskName | Unregister-ScheduledTask -Confirm:$false
      
      
      posted in IT Discussion
      Obsolesce
    • RE: Trying to find a good, on-premises, multi-department help desk application

      @dave247 said in Trying to find a good, on-premises, multi-department help desk application:

      @Obsolesce said in Trying to find a good, on-premises, multi-department help desk application:

      Curious about the on-prem requirement? Seems like an odd requirement.

      On-prem because we have a lot of PII information that gets put into our help desks, plus we just like having some control over things. Not everything has to be cloud hosted.

      Oh I see. I didn't realize it would be set up for the general public to have read access if it were cloud hosted.

      I suppose all these multi-billion dollar enterprises using hosted services like Service Now have no idea!

      posted in IT Discussion
      Obsolesce
    • RE: Remote management of employees personal cell phones ...

      @Emad-R said in Remote management of employees personal cell phones ...:

      @JaredBusch said in Remote management of employees personal cell phones ...:

      While I agree with all the arguments above, it is also true that there are things like selective wipe possible. But as stated it comes down to how much you wanna pay for the product to do something like that. As an employee I would be perfectly comfortable with allowing control of my device to a limited sandbox like that.

      Of course she wants to have to trust your employer when they say that’s all they can do with the solution they are using.

      Well, guess what: I will just get the cheapest smartphone, like a Nokia 2.1, and that is my "personal" work phone. I think this is the only way to manage that kind of crap. I'm sure management will be happy, and this is what they want: for employees to PurchaseYOD. Which is fine, I will be handing them a freaking 512 MB RAM Android phone. Let us see what kind of app will be installed there? Hell, it will crash every 10 seconds.

      maybe this

      (image attachment)

      or this

      https://www.amazon.ca/❤Unlocked-Smartphone-Screen-Android-Dual-Core/dp/B07RKMS7BZ/ref=sr_1_5?keywords=cheapest+android+phone&qid=1573852976&sr=8-5

      What a freaking shame. I can't believe I had more freedom in my previous workplace than I have in Canada, and I lived in what you guys call third-world developing countries; hell, we even made more progress. Where I work now everything is blocked; even SSH to servers that are not company servers is blocked. That mentality is so stupid, and basically tells you "we don't trust you". You should worry about hiring good people and that's it. Why do you do all the reference checks and job checks, then limit your employees and constantly monitor them?

      If it wasn't for certain family conditions, I would go back.

      It's about way more than the employee.

      Nothing in a background check will protect the company against some user installing an infected fake Angry Birds game on their Android phone, which ends up being a gateway for a hacker into private company data, or a way to get any other kind of information making it easier for an attacker to phish... or a million other things that make it sensible to secure access to company data in ways you don't understand.

      Don't be so damn narrow-sighted and quick to compare countries that actually try to secure their data from all aspects with ones that don't know what they are doing.

      posted in IT Discussion
      Obsolesce
    • RE: How M$ shakedown stupid corporations

      @Emad-R said in How M$ shakedown stupid corporations:

      @Dashrender

      You have not seen much of "real business" then. I cannot disclose info, but I think this corp is multi-million revenue.

      That's how it is in the real world: they get bloated and move slower. That's what happens when corps grow. If you keep a startup-ish vibe and "move fast and break things" you will be running the latest, but not everyone is like that.

      Besides, Windows' painful upgrading process helps you stick with what's running.

      And no, on the client side it's all Win10... sadly we use Win10 to manage Linux machines 😞
      I hate that mRemote/PuTTY shit.

      This is false.

      Big business makes quite an effort to stay current in the Windows world, especially if they are multi-billion $$ company. They HAVE to. It's not a choice.

      There's constant change going on, all the time. 2019 is current, when a server is needed at all. Most are really going serverless when possible, lots of SaaS, Cloud, etc.

      You might be thinking of U.S. defense companies. I mean they run old shit and pay millions and billions to maintain OAF software support.

      posted in IT Discussion
      Obsolesce
    • RE: Windows Server licensing for HA?

      @Pete-S said in Windows Server licensing for HA?:

      If you have two servers and run HA, does that mean that you have to license Windows Server standard for the maximum number of VMs running when you have a failure?

      So for example,
      Server A: 16 cores, runs 6 VMs normally
      Server B: 16 cores, runs 6 VMs normally

      So each server has to be licensed for all 12 VMs running on 16 cores - so 6 x Windows Server Standard licenses for each server, total of 12 licenses?

      But if you didn't run HA, you would only license each server for 6 VMs, with 3 x Windows Server Standard, a total of 6 licenses?

      Is this correct?

      Yup.

      If you're running a HA setup of Server Standard, all physical servers must be licensed for all Windows Server VMs that can run on them. This means each physical server in your HA cluster must be licensed for 12 Windows Server VMs.

      So yes, you are correct in that to license 12 Windows Server VMs on both of your physical servers, you'll need 6x Windows Server Standard licenses for each server, 12 "licenses" total as you said.
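      A quick sketch of that arithmetic, in Python (the helper name is made up, and the 16-core/2-VM terms reflect the simplified Standard licensing rules discussed in this thread, so verify against current Microsoft licensing terms):

```python
import math

def std_licenses_needed(cores_per_host, max_vms_per_host):
    """Windows Server Standard, simplified: each license covers up to 16 cores
    on one host and grants rights to run 2 VMs; stacking additional licenses
    on the same host adds 2 more VMs each. For HA, size for the worst-case
    VM count after failover."""
    core_packs = max(math.ceil(cores_per_host / 16), 1)  # 16-core minimum
    vm_packs = math.ceil(max_vms_per_host / 2)           # 2 VMs per license
    return max(core_packs, vm_packs)

# Two 16-core hosts in an HA pair; either host may run all 12 VMs after failover:
per_host = std_licenses_needed(cores_per_host=16, max_vms_per_host=12)
print(per_host)      # 6 licenses per host
print(per_host * 2)  # 12 licenses for the two-node cluster
```

Without HA, each host only ever runs its own 6 VMs, so the same function gives 3 licenses per host, 6 total, matching the comparison in the question.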

      posted in IT Discussion
      Obsolesce
    • RE: PowerShell - Add-ADGroupMember Script - Improvements?

      You could create a function that adds the specified user to the groups you specify as switch parameters.

      That's the next best thing besides grabbing a list of groups automagically based off of a list HR provides in a CSV or something of that sort.... or a GUI with check-marks or list selection.

      function Invoke-GroupLightning {
          Param(
              [Parameter (Mandatory)]
              [string]$sAMAccountName,
              [Parameter ()]
              [Switch]$Office365Users,
              [Parameter ()]
              [Switch]$SmartsheetUsers,
              [Parameter ()]
              [Switch]$OfficeUsers,
              [Parameter ()]
              [Switch]$SlackUsers,
              [Parameter ()]
              [Switch]$AccountingUsers,
              [Parameter ()]
              [Switch]$SouthWareUsers,
              [Parameter ()]
              [Switch]$VPNUsers,
              [Parameter ()]
              [Switch]$OpenVPNUsers,
              [Parameter ()]
              [Switch]$TerminalServiceUsers
          )
          foreach ($key in $PSBoundParameters.Keys | Where-Object {$_ -ne "sAMAccountName"}) {
              # Map switch names to AD group names that contain spaces:
              if ($key -eq "OfficeUsers") {
                  $key = "Office Users"
              }
              if ($key -eq "AccountingUsers") {
                  $key = "Accounting Users"
              }
              Add-ADGroupMember -Identity "$key" -Members "$sAMAccountName"
          }
      }
      
      Invoke-GroupLightning -sAMAccountName "test.user" -Office365Users -SlackUsers -AccountingUsers -OfficeUsers -SouthWareUsers
      
      

      Looks dirty, but I didn't spend much time on it. There are likely some better ways.

      Of course, looking at this, it seems like more work to add the switches.

      But you could turn it into a PowerShell module that automatically loads when you open PowerShell, that way the function is ready to type, and autocomplete will help you with the switches.

      Again, as I said earlier, it's best to have a script automatically grab the username and groups from something. This could be 100% automated if HR could stick them in a CSV somewhere on a share, or anywhere really. You could grab the data from almost anything via an API.
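      As a hedged sketch of that fully automated approach (the file layout, column names, and the semicolon-separated group convention are all hypothetical), here is a small Python helper that turns an HR CSV into the corresponding Add-ADGroupMember calls:

```python
import csv
import io

def parse_hr_csv(text):
    """Return {username: [groups]} from CSV text with 'Username' and 'Groups'
    columns, where 'Groups' is a semicolon-separated list (assumed convention)."""
    result = {}
    for row in csv.DictReader(io.StringIO(text)):
        groups = [g.strip() for g in row["Groups"].split(";") if g.strip()]
        result[row["Username"].strip()] = groups
    return result

def to_powershell(memberships):
    """Emit one Add-ADGroupMember command per (group, user) pair."""
    lines = []
    for user, groups in memberships.items():
        for group in groups:
            lines.append(f"Add-ADGroupMember -Identity '{group}' -Members '{user}'")
    return lines

# Hypothetical HR-provided data:
sample = "Username,Groups\ntest.user,Office 365 Users;Slack Users\n"
for line in to_powershell(parse_hr_csv(sample)):
    print(line)
```

In practice the generated commands (or a direct PowerShell equivalent of this loop) would run on a schedule against whatever share or API HR writes to.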

      posted in IT Discussion
      Obsolesce
    • RE: (Air Gapped) Data Storage and security

      @gjacobse said in (Air Gapped) Data Storage and security:

      Can you (how do you) Air gap and secure data

      Air-gapped from what? The internet? The LAN? Specific LAN subnets? No network connectivity whatsoever?

      It depends on above. If air-gapped from the LAN the users are on, obviously they can't access it from their system and will have to use something that is not air-gapped from it.

      posted in IT Discussion
      Obsolesce
    • RE: How M$ shakedown stupid corporations

      @StorageNinja said in How M$ shakedown stupid corporations:

      @Obsolesce said in How M$ shakedown stupid corporations:

      It runs on a highly-customized extremely hardened and stripped-down version of Hyper-V basically, but that is where all similarities end. The management layer on top of that is ARM.

      ARM isn't a management layer, it's a processor architecture. They might use an ARM processor for an out of band controller (I suspect that is what most out of band controllers run with the exception of whatever the hell is the custom silicon used for AWS Nitro).

      Wth, I'm not talking about the processor architecture. I was referring to Azure Resource Manager. The proper context was there, no reason at all to think I was referring to processor architecture.

      https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview

      posted in IT Discussion
      Obsolesce
    • RE: Looking For Alternate IT roles

      @scottalanmiller said in Looking For Alternate IT roles:

      @jmoore said in Looking For Alternate IT roles:

      So I just need to keep learning skills in different areas until I find a position in one of those areas at a larger company. I should specifically look for an engineering role because it is not as senior as admin then?

      Engineer roles will pay upwards of $350K. Admin roles will pay higher in the most demanding companies. Unless you feel constrained by $350K, I'd not worry about one being more or less senior.

      Keep in mind these are the hidden jobs you and I will never find or hear about other than from Scott.

      I only come across the opposite when talking to recruiters, headhunters, and hiring managers. My personal experience has been that IT-related Administrator roles offer less than engineering roles, with architect roles at the top of the three.

      I wanted to add that it's not that I don't believe Scott, it's just in my experience and everywhere I've seen, it was the opposite.

      posted in IT Discussion
      Obsolesce
    • RE: One Time, Non-Image, Windows Backup Client

      Another option is robocopying off the data needed to rebuild the server. That's basically all a VSS-less backup will do anyway, except it will more easily copy ALL files instead of just the minimum needed.

      posted in IT Discussion
      Obsolesce
    • RE: One Time, Non-Image, Windows Backup Client

      @IRJ said in One Time, Non-Image, Windows Backup Client:

      @Obsolesce said in One Time, Non-Image, Windows Backup Client:

      @IRJ said in One Time, Non-Image, Windows Backup Client:

      @Obsolesce said in One Time, Non-Image, Windows Backup Client:

      Another option is robocopying off the data needed to rebuild the server. That's basically all a VSS-less backup will do anyway, except it will more easily copy ALL files instead of just the minimum needed.

      Syncing to blob or S3 storage is extremely easy as well. Not to mention cheap.

      I'd agree, until I saw 8TB of capacity heading there. Assuming they have a "normal" internet pipe, it's going to take a really, really long time to back up and restore that much data over the internet.

      Yeah, I'm so used to it being fast as hell from an AWS instance.

      Well, if he backs the data up to S3, and rebuilds the server in AWS.... no restore time 😉
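      The "really long time" above is easy to quantify. A rough Python sketch (the function name and the 80% link-efficiency factor are assumptions, not measurements):

```python
def transfer_days(data_tb, link_mbps, efficiency=0.8):
    """Back-of-envelope time to push data over an internet link.
    'efficiency' is an assumed real-world throughput factor covering
    protocol overhead and competing traffic; tune it for your own pipe."""
    bits = data_tb * 1e12 * 8                       # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)  # usable bits per second
    return seconds / 86400

# 8 TB over a 100 Mbps "normal" pipe vs. a 10 Gbps in-cloud link:
print(round(transfer_days(8, 100), 1))     # ~9.3 days
print(round(transfer_days(8, 10_000), 2))  # ~0.09 days (a couple of hours)
```

That week-plus restore window is exactly why the on-prem repository (or rebuilding in AWS next to the S3 data) matters for 8TB-scale backups.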

      posted in IT Discussion
      Obsolesce
    • RE: Looking For Alternate IT roles

      @RamblingBiped said in Looking For Alternate IT roles:

      @Obsolesce If you're doing DevOps and you're not engineering software, aren't you just doing Ops then?

      Literally zero software engineering, zero software programming. I am currently spending a lot of time in Azure DevOps, as well as Azure Automation and other Azure serverless technologies.

      posted in IT Discussion
      Obsolesce
    • RE: Looking For Alternate IT roles

      @IRJ said in Looking For Alternate IT roles:

      CloudFormation is just JSON files. Nearly every system admin deals with JSON files

      ...not because we like JSON.

      posted in IT Discussion
      Obsolesce
    • RE: Looking For Alternate IT roles

      @RamblingBiped said in Looking For Alternate IT roles:

      @Obsolesce

      One of the most recent examples was a python-based serverless application composed of multiple lambdas that interfaced with AWS Organizations to provide automation around creating, configuring, and deploying AWS accounts and automated governance. It is exposed to end users through a self-service portal as a Service Catalog Product.

      All written in python, deployed and managed via SAM templates, and maintained via a CI/CD pipeline.

      This is (IT-related) automation (DevOps), not software development/engineering/programming.

      posted in IT Discussion
      Obsolesce
    • RE: Looking For Alternate IT roles

      @scottalanmiller said in Looking For Alternate IT roles:

      @Obsolesce said in Looking For Alternate IT roles:

      @scottalanmiller said in Looking For Alternate IT roles:

      @Obsolesce said in Looking For Alternate IT roles:

      What exactly would an F500 company pay someone $400k to administer, who isn't a management title? Because whatever system they are administering to be paid that much, I need to learn it ASAP!

      Linux primarily is what pays in that range. I was literally consulting for a hedge fund two weeks ago talking about them setting their admin scale to $450K for the more senior roles. A manager admining something is crazy, totally different skills. Places that give manager titles to tech roles are the ones that will never pay well.

      Windows will almost never top $300K, regardless of the role.

      So what is it that a Linux systems admin does in a F500 to get $450k that the same role gets for 1/4 that in a non F500?

      A lot, actually. So just using myself as one example.... here are some things that make Linux administration different in a high demands environment...

      1. Outages can be worth in excess of a million dollars a minute, just being the guy who doesn't need to pee before fixing a problem can make the difference between a $100K salary and a $300K salary. Literally, that one thing, once during a critical outage, is all it takes.

      2. Loads of servers. A single snowflake admin might have six hundred unique, critical servers. That's just a lot of work. A DevOps guy might have tens of thousands. For myself, I had 650 snowflakes that I directly managed, 8,000 snowflakes for which I was the "buck stops here" guy (L5 admin), and 10,000 in a DevOps environment that I was directly in charge of. No SMB has that many servers across all staff, let alone for a single person.

      3. Unique Issues. Solving nanosecond kernel tuning issues "never" happens in the SMB. You just don't run into those problems. In the F500, you can end up being the only shop in the world hitting a specific bug or adjusting a kernel in a specific way. Having done administration in both realms, the day to day differences are actually pretty big. One is very "by the book" and essentially trivial. The other there is no book and you have to know the hardware and software inside and out and do things that you can't research in Google.

      4. Automation. An SMB can automate, an F500 has to. This isn't the big difference it might sound like, but it's a difference. Just one of many factors where something is cool and good in the SMB, but a foregone conclusion in the F500.

      5. Advisory. Even if you are the best SMB admin ever and you are asked to sit on the SMB's board of directors (this happens to me, for example), it's trivial compared to what you do in the F500. Being the IT guy on the board of directors for a company making $50m a year is nothing compared to being the business advisor to the managing director of a "product" that moves a trillion dollars (yes, for real) a day. The order of magnitude of responsibility, risk, and expectations is unbelievable.

      6. Currency. In the SMB people hope that you "keep current", but running an OS from five years ago isn't going to get you any looks as long as you patch it. Run Windows Server 2012 R2 today and people go "haha, time to upgrade", but they assume you are still doing your job. In the F500, we are dealing with the OS and hardware vendors constantly testing things expected to release one or two years out. You are not just on the bleeding edge (in what you know, not necessarily what you deploy), you are past the edge! The research and information you have to keep up on is completely different.

      7. Security and Responsibility. For me, my hot seat position meant I was the last line of defense. Not just did I have to secure the systems, but I was the judgement call for people accessing or trying to access our systems. I've come down to a single call away from having the FBI walk people out in handcuffs (employees making big money because they pushed their luck too far.) My meetings would include the head of the CIA or the head of the Federal Reserve Bank. I've been followed home by private investigators (and spies.) I've had my phone tapped. I've had to work in sealed buildings. I've been in meetings where risk is assessed at a political and nuclear war level.

      8. Performance. Being the end-of-the-line support means you are on call 24/7/365. For me, that was ten years with only one real break. It can be intense. Few SMBs actually demand minute-to-minute response at that level. They might act like they want that, but you actually get to sleep most days.

      9. Access. I was a key holder to the global exchange trading floors. I had that responsibility to get in places where even senior managers could not go.

      10. Personal Judgement Calls. At my level in banking, one of my tasks was to "breach protocol." Technically not allowed, and no Senior VP could do it, but my personal role was the one guy allowed to "ask forgiveness" rather than permission. If an MD called me, and I concurred on a change, break, shut down, or whatever, I was the sole person able to override bank policy and ignore my managers and do a change or whatever. Boy do you have to defend those decisions, but it was my job to either stand up to policy and break the rules to protect the bank, or to stand up to the division heads and not let them make a change to their own systems. I had no overriding rules, other than SEC of course, and always had to make the final call. And a bank MD is like going toe to toe with the personal owner of a multi-billion dollar multi-national company. A division might be worth $10bn USD or more.

      Just some of the ways that high end big administration is different from what you typically see in the SMB or in the F500 trenches.

      That's a good bit of information there, more than expected and really answered my question. Thanks for that.

      Knowing this now, I can definitely see how those types of Admins would be of more senior level and make more money. It takes a special type of person to do that. Unfortunately, that isn't the kind of thing I would "enjoy" per se, and I don't have the kind of life to allow for such dedication either (anymore).

      But yeah, it makes a lot of sense and I'm glad you took the time to write that out.

      Personally, I'm happy where I'm at and going (engineer -> architect), and don't need $400k+. I can live more than comfortably at half that and be just as happy, or happier as I'd have a lot more freedom it seems.

      posted in IT Discussion
      Obsolesce
    • RE: Centrally Controlled Local Backup System Options

      @scottalanmiller said in Centrally Controlled Local Backup System Options:

      @Obsolesce said in Centrally Controlled Local Backup System Options:

      @scottalanmiller said in Centrally Controlled Local Backup System Options:

      Tiny customer with a single server, no other hardware. They need to take a backup of their data and be able to restore it in a reasonable amount of time for most problems, primarily hard drive failure.

      This is where built-in OS backup, scripts, and email come in real handy... for businesses that have data they didn't plan for and can't afford to support. They can easily schedule a backup script to back up to a local device, send email alerts, and even have some free serverless app in Azure or AWS watch for things and also send out alerts if something fails.

      I haven't seen what OSs need backed up, but I think it doesn't matter.

      Right, but the email system means it doesn't do what's needed.

      What's that? A fancy management portal some non-IT person can take care of? That's just too bad then. Windows and Ubuntu make it easy enough to back up a system.

      posted in IT Discussion
      Obsolesce