@pattonb GetDataBack by Runtime Software. Used it recently to recover data from an Apple MacBook Air SSD.
The most recent recovery was done with EaseUS recovery software after GetDataBack failed, for some reason, to find the deleted checkpoint file (.AVHDX).
@wrx7m FSLogix is now included with Remote Desktop Services CALs. Its profile container setup is a step up from native User Profile Disks (UPDs).
We've been using UPDs since they were first included in RDS. They make the need for roaming profiles of some sort in larger farms moot. We locate the UPDs on a decent-performing file server and make sure to set up the defaults with data growth in mind.
Let's say we set a default size of 30GB for the UPD. We can then mount the template file and shrink the partition inside it down to, say, 5GB. Then, when a user comes close to running out of space, we can very easily increase the partition size within their UPD.
We still redirect Desktop and My Documents plus the subfolders to provide some security to the user's data.
We have yet to deploy an RDS farm using FSLogix. That's next on our RDS to-do list so we can thrash the setup.
EDIT: When migrating a user to a new farm they get a new UPD, so a bit of post-logon configuration is needed. We usually do this when setting the user account up in the new collection.
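The shrink-then-grow sizing above can be sketched as a simple policy. This is a hypothetical helper (the 30GB container and 5GB starting partition come from the post; the 5GB step and 20% free-space threshold are my assumptions), and the actual resize would of course be done against the mounted UPD, not in Python:

```python
def next_partition_size_gb(used_gb, current_gb, container_gb=30,
                           step_gb=5, free_threshold=0.2):
    """Return the new partition size for a user's UPD.

    Hypothetical policy: grow in step_gb increments whenever free
    space drops below free_threshold of the current partition,
    never exceeding the fixed UPD container size.
    """
    free_fraction = (current_gb - used_gb) / current_gb
    if free_fraction >= free_threshold:
        return current_gb  # plenty of room, leave it alone
    return min(current_gb + step_gb, container_gb)
```

With the post's numbers, a user at 4.5GB used on a 5GB partition would be bumped to 10GB, and growth stops at the 30GB container ceiling.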
Not so sure about wear and tear since the components are built to last short of a nuclear EMP.
Fragmentation was always the biggest problem we had, for as long as I can remember. Even large arrays would degrade over time due to seek times.
We would set up a single host partition for the virtual machines, then give each virtual machine a fixed VHDX for its operating system "partition" and a fixed VHDX for its data, so long as the data was around 250GB or less. For the big ones we'd use a dynamically expanding VHDX. This kept everything nice and contiguous.
For clusters in smaller settings we'd set up dedicated LUNs for each of the above components per virtual machine. In larger settings we'd set up a LUN for the operating systems, another for the smaller data VHDX files, and a few for the big ones. The smaller VHDX files would still be fixed while the large ones would be dynamic, but giving them their own LUN to grow in limited the fragmentation problem.
All-Flash pretty much renders the whole conversation moot. We're getting to the point where the only place we deploy rust is in 60-bay and 102-bay shared SAS JBODs for archival or backup storage on clustered ReFS repositories.
EDIT: FYI: Fixed VHDX creation on a ReFS Clustered Shared Volume is virtually instantaneous no matter the file size.
@travisdh1 said in Random Thread - Anything Goes:
@PhlipElder said in Random Thread - Anything Goes:
@DustinB3403 said in Random Thread - Anything Goes:
Oh man, this is so freaking true it's not funny.
VMQ enabled in-driver on Broadcom gigabit controllers under Hyper-V would kill network performance for the guests. Disable it, and the next driver update would turn it back on again.
Ah, straight up fail then. I knew I preferred Intel NICs for a reason.
What blows my mind is that the VMQ specification makes it clear that 10GbE ports and silicon for tying into the CPU cores are required.
Despite years of requests to remove that enable/re-enable behaviour, Broadcom just ignored them.
@Dashrender I get pop-ups from sites (Firefox here) asking to push live updates. I've absentmindedly clicked YES, and then those kinds of things started happening.
Now, it's been quite a while, so I don't remember how to turn that off. :S
@Emad-R said in Trying my luck in Toronto, Ontario:
@PhlipElder said in Trying my luck in Toronto, Ontario:
I suggest looking into community groups that are from the same geographical area. There should be a few around since TO is one of the largest cities in Canada.
But I'm running away from my people here and the culture, and you are telling me to go back and live next to them. :smiling_face_with_open_mouth_cold_sweat:
I've been quite close to the Polish, Filipino, and French communities most of my life. They tend to build communities together, hence my assumption of the same.
No worries. Hakuna Matata.
@manxam said in Trying my luck in Toronto, Ontario:
It's definitely Torono...
As for quay, that's "key" in the "Queen's English"
https://www.merriam-webster.com/dictionary/quay
I've never heard it pronounced that way ... not that that's saying much.
The first is theirs, the last is ours, and there's a "kay" in the middle. Go figure. I stand corrected.
@manxam said in Trying my luck in Toronto, Ontario:
@NashBrydges said in Trying my luck in Toronto, Ontario:
Welcome to Canada eh!
Here's your first tip...pronounce it like "Trono" instead of "Toronto" and you'll fit right in
Omg, thank you @NashBrydges. I'm originally from the GTA (Greater Toronto Area) but finally ended up in Alberta. Everyone here pronounces the second T (Tor-on-to) and it drives me nuts
My wife and I took a break in Vancouver from last Thursday to Sunday evening.
We stayed at the Lonsdale Quay Hotel.
We flatlanders pronounce the second word: "Qway"
The locals there looked at us funny, with some snickers, and pronounced it: Kee/Key
Huh?!?
So much for the Queen's English.
EDIT: Oh, and it was "Torana", with a twist on the "ah" at the end, when I used to hang out there. ;0)
@scottalanmiller said in Random Thread - Anything Goes:
There is a bakery here in St. Albert called Grandin Bakery.
They make JamBusters (called Bismarks here in Alberta) stuffed with raspberry jam, so good they remind me of my childhood trips into Winnipeg with Gram and Gramps on a weekend to get some.
I gotta tell you, I put a serious dent in a box of a dozen before they make it home.
5x Raspberry JamBusters (rarely make it home)
3x Custard creams with thick chocolate fondant (for my wife)
4x Chocolate covered donuts or honey glazed (a few make it home)
Toronto can get pretty cold during the winter, though definitely not as cold as Manitoba (I grew up there; the coldest temp I've experienced was -56C).
I suggest looking into community groups that are from the same geographical area. There should be a few around since TO is one of the largest cities in Canada.
It's a big place. My preference would be to find a place to live that's close to a GO (commuter train) or subway line to avoid driving. The 401 can be really fast, but mostly it moves at a snail's pace.
Since *NIX is it, look into the various cities' job boards (TO is a megalopolis), as *NIX is not that uncommon there. Even the smallish city of St. Albert, where our business is, runs a lot on *NIX. Mississauga is one place that's fairly heavy in tech; Guelph is another. Canadian distributors that may be hiring are SYNNEX Canada, Ingram Micro Canada, Tech Data Canada, and maybe ASI Canada.
If any recruiters reach out make sure to vet them first. Be cautious around big promises.
Things to do: Visit the Tower, catch a Blue Jays game, catch a Raptors game, catch a Maple Leafs game. Check out the various districts like arts and music.
One of my favourite times in TO was spending an entire day going about barefoot (the city is that clean) with my sandals strapped to my belt. They'd go on when entering a building, but otherwise they were off. Subway to trolley to city bus, from park to park and area to area, my buddy and I had a great adventure.
It's a great city.
EDIT: MeetUp is a great resource.
@Obsolesce I've not seen Wasabi yet.
Backblaze integrates at several levels with Veeam and/or StarWind, hence my suggestion to go in that direction.
Plus, in my mind the big point in Backblaze's corner is their push-back against the drive manufacturers' NDAs on publishing reliability statistics. They are one of the only companies I know of that publish those stats every year, and it's a report I read as soon as it's available.
Veeam set up to back up to a Synology RAID 1 NAS with enough storage to meet your recovery point objectives.
The backup file destination should be password protected, with read-only access for all accounts except admin and the Veeam user.
A pair of NAS devices set up identically could be used to provide an air gap for the backups.
A small single-drive NAS or external enclosure could provide a similar setup.
StarWind Virtual Tape Library, set up as a backup destination that can then trickle the backups up to Backblaze, would be one of the least expensive off-site/cloud solutions.
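As a rough way to put numbers on "enough storage to meet your recovery point objectives", here is a back-of-the-envelope repository sizing sketch. The change rate and retention figures are illustrative assumptions, and dedupe/compression are not modelled:

```python
def repo_size_gb(full_backup_gb, daily_change_rate, fulls_kept, incrementals_kept):
    """Rough backup repository sizing: retained full backups plus
    retained incrementals, where each incremental is approximated
    as the daily change rate applied to the full backup size.
    Illustrative estimate only; compression/dedupe not modelled.
    """
    incremental_gb = full_backup_gb * daily_change_rate
    return full_backup_gb * fulls_kept + incremental_gb * incrementals_kept
```

For example, a 500GB full with a 5% daily change rate, keeping two fulls and fourteen incrementals, lands around 1,350GB before compression.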
@DustinB3403 said in Random Thread - Anything Goes:
We homeschool. This will be a hit with our kids!
@DustinB3403 said in Random Thread - Anything Goes:
@scottalanmiller said in Random Thread - Anything Goes:
A is positively the correct answer.
Kids in our family know what Eye-Dee-Ten-Tee (ID10T) and PEBKAC (Peb-Cack) mean.
It looks bad, à la Maersk:
https://www.accountingtoday.com/news/the-wolters-kluwer-cch-outage-what-happened
https://twitter.com/WKTAAUS
http://www.removeddit.com/r/sysadmin/comments/blcswm/wolters_kluwer_cch_axcess_outage/
EDIT: Oh, and they remembered to unplug the possibly-not-backed-up domain controller(s)!
@JaredBusch said in Email server options:
($17,000 (Exchange on-premises) + $5,760 (spam filter for 4 years)) / $5,760 per year (O365 EO Plan 1) = 3.95, aka a 4-year break-even
Sorry, missed the line about 120 users. :S
One option for the on-premises portion would be to license the server and Exchange via SPLA. That would break it down to a more palatable monthly cost.
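The quoted break-even arithmetic is easy to verify:

```python
exchange_on_prem = 17_000  # up-front on-premises Exchange cost (from the quote)
spam_filter_4yr = 5_760    # four years of spam filtering (from the quote)
o365_per_year = 5_760      # O365 Exchange Online Plan 1, per year (from the quote)

break_even_years = (exchange_on_prem + spam_filter_4yr) / o365_per_year
print(round(break_even_years, 2))  # 3.95, i.e. roughly a four-year break-even
```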
We run with RAID 6.
Modern RAID controllers have the horsepower, and flash-backed cache RAM, to overcome any real parity performance cost.
Rebuild times will be the killer, plus there's some performance cost while running with a failed disk.
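For context on the RAID 6 trade-off above, a quick sketch of usable capacity and a rebuild-time floor. The 100 MB/s sustained rebuild rate is an illustrative assumption; real controllers throttle rebuilds under production load, so treat the result as a lower bound:

```python
def raid6_usable_tb(disks, disk_tb):
    """RAID 6 spends two disks' worth of space on parity,
    so usable capacity is (n - 2) disks."""
    assert disks >= 4, "RAID 6 needs at least four disks"
    return (disks - 2) * disk_tb

def rebuild_hours(disk_tb, rebuild_mbps=100):
    """Rough rebuild time: the whole replacement disk must be
    rewritten. 100 MB/s sustained is an assumed rate, so this
    is a floor, not a promise."""
    disk_mb = disk_tb * 1_000_000  # decimal TB -> MB
    return disk_mb / rebuild_mbps / 3600
```

An 8x 4TB RAID 6 set gives 24TB usable, and rebuilding a single 8TB disk at 100 MB/s already takes over 22 hours.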
I'd stick with the Exchange based back end.
As far as migrating goes, there are plenty of third-party products out there; or, if there aren't a lot of users, a simple export to PST and import on the new setup would work.
Thread: https://twitter.com/theek/status/1117895531563372544
Now that's long-term disaster recovery planning!