Best posts made by PhlipElder
-
PowerShell: Function to test for pending reboot reason
function Test-PendingReboot
{
    $reboot = $false
    # Component Based Servicing (CBS) has flagged a pending reboot
    if (Get-ChildItem "HKLM:\Software\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending" -EA Ignore) {
        Write-Host 'Component Based Servicing\RebootPending'
        $reboot = $true
    }
    # Windows Update requires a reboot
    if (Get-Item "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired" -EA Ignore) {
        Write-Host 'WindowsUpdate\Auto Update\RebootRequired'
        $reboot = $true
    }
    # File rename operations are queued for the next restart
    if (Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager" -Name PendingFileRenameOperations -EA Ignore) {
        Write-Host 'CurrentControlSet\Control\Session Manager'
        $reboot = $true
    }
    # Configuration Manager (SCCM) client reports a pending reboot
    try {
        $util   = [wmiclass]"\\.\root\ccm\clientsdk:CCM_ClientUtilities"
        $status = $util.DetermineIfRebootPending()
        if (($null -ne $status) -and $status.RebootPending) {
            Write-Host 'Configuration Manager'
            $reboot = $true
        }
    } catch {}
    if (-not $reboot) { Write-Host 'no reboot required' }
}
Test-PendingReboot
-
RE: Random Thread - Anything Goes
@nadnerB said in Random Thread - Anything Goes:
Looks more like a built-in shower to me. After playing in the mud that would be da'bomb so no trouble walking in the door after a mudfest.
-
RE: Backup strategy for customer data?
We've worked with a variety of hosting solution providers. Most start with a base of one backup done per 24 hours with a fee to restore if required.
Some have a built-in backup feature that we can then set up for the VMs we have our cloud desktop clients running in. It can be set up to run relatively often. They charge a fee for that one.
Start with once per day.
As far as the "how" what is the underlying virtualization platform?
Our hosting solutions are set up to use Veeam at the host level.
StarWind's Virtual Tape Library (VTL) can be used to augment the backup in another DC with Veeam's Cloud Connect being another option to tie in to get the backup data out of the production DC.
As far as expectations go, we're in the process of setting up a BaaS and DRaaS service based on Veeam. Backups and DR will be multi-site, with one goal being a two- to four-week no-delete option.
In our investigations of BaaS/DRaaS providers, none were able, or wanted, to answer the question, "How do you back up our backup data to protect against failures in your system?"
-
RE: Random Thread - Anything Goes
@scottalanmiller said in Random Thread - Anything Goes:
Which was preceded by the "stare at your book" nature trail.
I remember something a comedian once said about, "the shallow end of the gene pool" that is probably applicable here.
-
RE: Backup strategy for customer data?
@Pete-S said in Backup strategy for customer data?:
@PhlipElder said in Backup strategy for customer data?:
How many tapes in the library?
How many briefcases to take off-premises for rotations?
Where is the brain trust to manage the tapes, their backup windows, and whether the correct tape set is in the drives?
If the tape libraries are elsewhere then the above goes away to some degree (distance comes into play).
A 2U high autoloader will have two magazines with 12 tape slots in each. With LTO-8 tapes that means 720TB of data (2.5:1 compression) in one batch without switching any tapes. 24 tapes will fit in one briefcase so not much of a logistical problem. If you go up to a 3U unit it will hold 40 tapes and I think that might fit in one briefcase as well.
Tapes have barcodes that the autoloader will scan so that's how the machine knows which tape is the right one.
If you are going to swap several tapes at once, you can get additional magazines that hold the tapes and just swap the entire magazine. For daily incremental backups you can swap one tape at a time - if you have less than 30 TB of data change per day.
You can also monitor that tapes have been replaced so you could set that up as a prerequisite for starting the next daily backup. We'll just have to see how long things take and how much data we need to back up on average before putting procedures in place.
I haven't actually used tape since the late 90s so it will be exciting testing this. For off-line storage and archival storage the specs are just so much better than hard drives. Bit error is 1 in 10^19 bits (enterprise HDDs are 1 in 10^15). That's actually 10,000 times better than HDDs. And 30 years of archival properties.
We used to manage HP based tape libraries and their rotation process. It was a bear to manage.
We have one company we are working with that has a grand total of 124 tapes that they need to work with for one rotation.
GFS, that is Grandfather, Father, and Son, is an important factor in any backup regimen. Air-gap is super critical.
Having software that manages it all for you is all fine and dandy until the software fails. BTDT and what a freaking mess that was when the servers hit a hard-stop.
Ultimately, it does not matter what medium is used as GFS takes care of one HDD or tape dying due to bit rot (BTDT for both HDD and tape).
The critical element in a DR plan is air-gap. No access. Total loss recovery.
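For reference, here is a minimal PowerShell sketch of the capacity and error-rate arithmetic quoted above; the 12 TB native LTO-8 capacity is an assumption on my part, while the 2.5:1 compression ratio, the 24-slot count, and the bit-error rates are the figures Pete-S cites.
# Rough arithmetic behind the quoted LTO-8 autoloader figures (assumed values).
$nativeTB    = 12     # assumed LTO-8 native capacity per tape, in TB
$compression = 2.5    # quoted compression ratio
$tapes       = 24     # two 12-slot magazines in a 2U autoloader

$perTapeTB = $nativeTB * $compression      # 30 TB per tape, compressed
$batchTB   = $perTapeTB * $tapes           # 720 TB in one batch
Write-Host ("One full batch: {0} TB" -f $batchTB)

# Uncorrectable bit-error rate: tape 1 in 10^19 vs. enterprise HDD 1 in 10^15
$improvement = [math]::Pow(10, 19 - 15)    # = 10,000x better
Write-Host ("Tape bit-error rate is {0:N0}x better than enterprise HDD" -f $improvement)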
-
RE: Why have mass shootings increased - you thoughts?
@Dashrender said in Why have mass shootings increased - you thoughts?:
I'm curious what people think the reason is that mass shootings have supposedly increased?
I don't have any hard numbers to know that they really have - only that the media is making a bigger and bigger deal out of it, it seems.
The FBI publishes statistics every year.
And every year the media ignores the fact that the most violent places to live in the US of A are the ones with the most restrictive gun laws while the safest ones to live in are where concealed and open carry are the norm. Castle Laws greatly improve those statistics too.
Another statistic that gets ignored is the number of times someone with a firearm uses it to defeat a perp with a firearm or firearms. But then, that doesn't fit the narrative, does it?
As a Canuck, we have self-defense with equal force written into our Rule of Law but our "law enforcement" agencies and "legal system" go all-out lawfare on anyone that defends themselves reasonably with equal force. The case will get thrown out ... eventually but it will cost the defender $250K to get there.
Cherish the Second Amendment. It's the only thing standing between We the People and Tyranny. The Founding Fathers put it in there for a very specific reason.
EDIT: As far as "mass shootings" go, why is the perp's mental illness background never mentioned, or mentioned only in brief: "they were a loner," "they kept to themselves," and so on?
Guns don't kill people. People kill people and with all manner of devices.
Joker: "I'm going to make this pencil disappear." SLAM
EDIT 2: Switzerland. Every household has a gun. It's mandatory service there. Where are the mass shootings?
We have one of the highest per capita firearms ownership up here and yet where are the mass shootings?
Why is that? Why would the focus be on disarming the US as a nation? What could the possible motive be for removing over 300M firearms from We the People's hands?
-
RE: AWS Catastrophic Data Loss
@wrx7m said in AWS Catastrophic Data Loss:
This was one AZ, right? If so, you need to design your environment to span multiple AZs, if not regions. This is beginner AWS design theory.
A few things come to mind:
1: Just how many folks know how to architect a highly available solution in any cloud?
2: What cost, over and above the indicated method, does the HA setup incur?
3: It does not matter where the data is, it should be backed up.
Microsoft's central US DC failure, I think it was last year or early this year, caused a substantial amount of data loss as well. Not sure if any HA setup could have saved them from what I recall.
-
RE: Random Thread - Anything Goes
@nadnerB said in Random Thread - Anything Goes:
Meh ... a Red Eye (some call it a Shot in the Dark) would be a shortcut to this place.
A Black Eye (Double Shot in the Dark?) would bring one to this place and quickly. :0)
-
RE: AWS Catastrophic Data Loss
@PhlipElder said in AWS Catastrophic Data Loss:
@dafyre said in AWS Catastrophic Data Loss:
@Pete-S said in AWS Catastrophic Data Loss:
Update August 28, 2019 JST:
That is how a post-mortem write up should look. It's got details, and they know within reasonable doubt what actually happened...
It reads like Lemony Snicket's Series of Unfortunate Events, though, lol.
It's amazing: a data centre touted as highly available (cloud only, according to some marketing folks) has so many different single points of failure that can bring things down.
I can't count the number of times HVAC "redundant" systems have been the source of, or blamed for, system-wide outages or outright hardware failures.
Oh, and ATS (Automatic Transfer Switch) systems blowing out A/B/C even though the systems are supposed to be redundant.
A/B/C failure from one power provider causing a cascade failure.
Generator failures as mentioned here in the first article.
Storms.
The moral of this story is: Back Up. Back Up. Back the eff up.
Oh, and one more thing: Thinking a distributed system, whether storage or region or whatever, is a "backup" is like saying RAID is a backup. It is not. Period.
-
RE: Random Thread - Anything Goes
@nadnerB said in Random Thread - Anything Goes:
Heh ... we had one of those.
Note the "had" in the above sentence.
It was years ago. We sent them an ultimatum: Get legit or we're out. We'd deployed a robust Small Business Server solution that was tailored to their needs. Their productivity skyrocketed.
We got them to start moving on their licensing then they dug in and decided we were no longer needed.
It was a bit of a messy divorce but it only reinforced that we'll never work with a company that rips off other companies.
-
RE: Server with multiple backplane / Drive Configuration
@CCWTech said in Server with multiple backplane / Drive Configuration:
I have a server that will have 6 SSD drives (RAID5) and 4 HDD's (RAID 10)
There are two backplanes with a capacity of 8 drives each.
Does it matter if drives are split between backplanes?
For example:
Backplane 1: 6 SSD's and 2 HDD's
Backplane 2: 2 HDD's
Or is it better to keep all like drives (or drives in same array) on the same backplane?
Example:
Backplane 1: 6 SSD's and 2 empty spots
Backplane 2: 4 HDD's with 4 empty spots
I'm thinking it doesn't matter as they are all on the same RAID card, but wanted to verify.
If the RAID card is reading the drives correctly as far as slot ID across both backplanes then things should be okay.
You should be able to run through the server's management interface and blink each drive to verify.
We tend to split-up high bandwidth/IOPS drives across cables.
In this case, we'd put half of the SSDs on one backplane and half on the other and set up the RAID array that way. This gives us a lot more bandwidth to play with.
HDDs don't really care so much about that.
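Where the operating system can see the individual drives, a quick way to double-check the slot mapping is a minimal sketch like the one below. This is a sketch with assumptions: behind many hardware RAID cards Get-PhysicalDisk only shows the virtual disks, so the controller's own management utility (and its blink/identify function) remains the authoritative view.
# List the physical disks Windows can see, with type, bus, size, and reported location.
Get-PhysicalDisk |
    Sort-Object DeviceId |
    Select-Object DeviceId, FriendlyName, MediaType, BusType,
                  @{Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB) }},
                  PhysicalLocation |
    Format-Table -AutoSize

# On systems that expose it, the Storage module can blink a bay LED to confirm a slot
# (otherwise use the RAID card's utility to blink the drive):
# Get-PhysicalDisk -FriendlyName 'SSD 0' | Enable-PhysicalDiskIndication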
-
RE: Server with multiple backplane / Drive Configuration
@Dashrender said in Server with multiple backplane / Drive Configuration:
@PhlipElder said in Server with multiple backplane / Drive Configuration:
We tend to split-up high bandwidth/IOPS drives across cables.
In this case, we'd put half of the SSDs on one backplane and half on the other and set up the RAID array that way. This gives us a lot more bandwidth to play with.
So you're saying the backplane cable is a bottleneck?
Each cable has four SAS/SATA paths in it. So yes, with SSDs it is possible to saturate a single cable set.
A single 6Gbps SAS cable saturates around 377K IOPS. A single 12Gbps cable saturates around 750K IOPS. At least, that's what's happened in our own in-house storage thrashing tests.
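For rough context on those numbers, here is a minimal sketch of the theoretical ceiling of a four-lane mini-SAS cable, assuming 8b/10b encoding and 4 KiB I/O; the 377K and 750K figures above are empirical and sit below these ceilings once protocol and controller overhead are accounted for.
# Rough theoretical ceiling for a 4-lane SAS cable (assumes 8b/10b encoding, 4 KiB I/O).
function Get-SasCableCeiling {
    param(
        [double]$LaneGbps,        # 6 or 12
        [int]$Lanes = 4,          # lanes per mini-SAS cable
        [int]$IoSizeKB = 4        # assumed I/O size
    )
    $laneMBps    = $LaneGbps * 1000 / 10      # 8b/10b: 6 Gbps is roughly 600 MB/s per lane
    $cableMBps   = $laneMBps * $Lanes
    $ceilingIOPS = [math]::Round(($cableMBps * 1e6) / ($IoSizeKB * 1024))
    [pscustomobject]@{
        LaneGbps    = $LaneGbps
        CableMBps   = $cableMBps
        CeilingIOPS = $ceilingIOPS
    }
}

Get-SasCableCeiling -LaneGbps 6     # ~2,400 MB/s, ~586K IOPS ceiling
Get-SasCableCeiling -LaneGbps 12    # ~4,800 MB/s, ~1.17M IOPS ceiling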
-
RE: ANU hacked by phishing email through the preview pane
@Nic said in ANU hacked by phishing email through the preview pane:
No clicking on links or downloading attachments required - the payload got executed just by being previewed. No mention of what email client they were using yet.
Highly suspect. No details, no original e-mail mentioned, no analysis.
I call bunk.
Someone clicked on something and didn't fess up.
-
RE: RDS 2019 Setup and RDS License Role
@wrx7m said in RDS 2019 Setup and RDS License Role:
@PhlipElder said in RDS 2019 Setup and RDS License Role:
Archiving is simpler for users that leave the org. Archive the .VHDX file.
Profile choke fix: Rename the .VHDX file to .OLD, log the user on, migrate their data. Done.
Is this specific to Hyper-V or is that even related to the way this works?
The User Profile Disk is a dynamic .VHDX file that gets created in the designated storage location.
It can be set up with a storage limit. 5GB, 10GB, or more. Whatever maximum user GB size may be needed.
^^^ This is another reason to use UPDs/FSLogix. Storage sprawl.
UPD TIP: Once the RDS setup is complete and the TEMPLATE.VHDX is created in the designated location, mount the TEMPLATE.VHDX file, shrink the partition down to a "starter size" GB, and dismount it.
Example: We have a setup where we deployed 30GB maximum UPDs.
We edit the template to shrink the partition to 10GB. That's all a new user gets when they log on the first time. If they hit a warning for low storage down the road, we can do one of two things:
1: Clean-up your mess
2: Log them off, mount the .VHDX, expand the partition by 5GB or more, dismount the .VHDX, and have the user log on. They get an instant storage increase. A rough sketch of this step is below.
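A minimal sketch of that expand step, assuming the Hyper-V and Storage PowerShell modules are available on the box doing the maintenance and the user is logged off so the UPD isn't locked; the share path is a hypothetical placeholder:
# Grow the partition inside a User Profile Disk by 5 GB (path is a placeholder).
$updPath = '\\FS01\UPD$\UVHD-<UserSID>.vhdx'    # hypothetical UPD location

$disk = Mount-VHD -Path $updPath -Passthru | Get-Disk
$part = $disk | Get-Partition | Sort-Object Size -Descending | Select-Object -First 1

# Expand the data partition; the .VHDX maximum size is the upper bound.
Resize-Partition -DiskNumber $disk.Number -PartitionNumber $part.PartitionNumber `
                 -Size ($part.Size + 5GB)

Dismount-VHD -Path $updPath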
-
ConnectWise Zero Day?
Not sure how this has played out yet, but it's looking not so good for those using ConnectWise.
-
Data Breach: PDL "Enrichment" Company 1.2B Peeps Impacted ... yeah, BILLION
https://www.dataviper.io/blog/2019/pdl-data-exposure-billion-people/
I didn't even know these kinds of things existed.
Getting pretty sick and tired of these kinds of hidden aggregators.
-
RE: Data Breach: PDL "Enrichment" Company 1.2B Peeps Impacted ... yeah, BILLION
There are many words in my vocabulary spanning rail crews, construction crews, and a couple decades as a mechanic, in several languages, that are still too polite for what I think of this and the peeps behind aggregating.