XenServer Export Performance Seems Poor
-
Where are the logs going now?
-
If I recall correctly @Dashrender said the files on this server aren't the critical point for it, IE they are used when they are created and then put away into the storage on the VM.
If that's the case, why not limit the size of the VM, allowing for a faster recovery of the VM, and then piecemeal restore the data as it's needed from something like a Synology NAS.
My point is, the VM as described is only 700GB because it was allowed to grow to this size, but it could be a meager 150GB.
-
@scottalanmiller said:
Where are the logs going now?
Into the SQL DB on the server.
Same place where the EHR data lives.
-
@DustinB3403 said:
If I recall correctly @Dashrender said the files on this server aren't the critical point for it, IE they are used when they are created and then put away into the storage on the VM.
If that's the case, why not limit the size of the VM, allowing for a faster recovery of the VM, and then piecemeal restore the data as it's needed from something like a Synology NAS.
My point is, the VM as described is only 700GB because it was allowed to grow to this size, but it could be a meager 150GB.
This is not correct - I guess there was a misunderstanding somewhere.
-
@Dashrender said:
@scottalanmiller said:
Where are the logs going now?
Into the SQL DB on the server.
Same place where the EHR data lives.
A developer could very quickly make a little component that takes those logs and outputs to a text file. I mean, realistically, you could do this with a one-line script - just one SQL query going out to file. ELK will grab the file and boom, all done.
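A minimal sketch of that "one SQL query going out to file" idea. The real system is SQL Server; sqlite3 stands in here purely so the pattern is runnable, and the table and column names (`access_log`, `ts`, `username`, `action`) are assumptions, not the actual schema:

```python
import sqlite3

# Stand-in database: the real system is SQL Server, sqlite3 is used only
# to make the pattern runnable. Schema names below are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access_log (ts TEXT, username TEXT, action TEXT)")
conn.execute(
    "INSERT INTO access_log VALUES "
    "('2016-01-04T09:00:00', 'jdoe', 'searched chart 1234')")
conn.commit()

# Dump the log rows as tab-separated lines; a file-based shipper
# (e.g. Filebeat/Logstash feeding ELK) can then tail and index the file.
with open("access_log.txt", "w") as f:
    for ts, user, action in conn.execute(
            "SELECT ts, username, action FROM access_log ORDER BY ts"):
        f.write(f"{ts}\t{user}\t{action}\n")
```

On the real box this would just be a scheduled `sqlcmd`/query dumping to a path ELK watches; the point is that no application change is needed, only read access to the log table.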
-
This server has a 60 GB SQL db, 500+ GB of TIF (scanned in paper documents) and another 100+ of application and other files associated with the old EHR.
At this point in time, the only thing changing on this system should be the access logs - who's logging in, who they are searching for, etc. The data in the DB and the TIF files, etc should all remain static.
The system (other than the log growth) should not be growing. It has around 50 GB of free space currently. This should be a lifetime of space since the main data isn't growing anymore.
-
So @Dashrender do you need the static data on the VM including everything that makes up the 700GB to function?
Or can all of the extra stuff get pushed off to something else?
If the goal is to ensure the VM boots, and the database is accessible, then you should reduce the size of the VM as much as possible.
Anything static that can be moved out of it should be, I would imagine, so you could recover from a faulty OS update that much more quickly.
-
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
Where are the logs going now?
Into the SQL DB on the server.
Same place where the EHR data lives.
A developer could very quickly make a little component that takes those logs and outputs to a text file. I mean, realistically, you could do this with a one-line script - just one SQL query going out to file. ELK will grab the file and boom, all done.
I'm guessing that you're assuming that all of the logs are in a single table - and assuming that's true, then I agree with you.
-
@DustinB3403 said:
So @Dashrender do you need the static data on the VM including everything that makes up the 700GB to function?
Yes - if anything on there is removed (or not mapped into it), the whole thing doesn't function as it should.
-
I should also add - 30 hours of downtime on this system would not be a huge deal.
-
@Dashrender said:
I should also add - 30 hours of downtime on this system would not be a huge deal.
If we have to go to a paper chart (yes, we still have tens of thousands of them in storage) it would take at least 24 hours to get it... this "old" system is now in that ballpark.
-
@Dashrender said:
I should also add - 30 hours of downtime on this system would not be a huge deal.
But again, that is assuming the import and your backup are in good working condition. If either fails it could be down for multiple days.
-
@DustinB3403 said:
@Dashrender said:
I should also add - 30 hours of downtime on this system would not be a huge deal.
But again, that is assuming the import and your backup are in good working condition. If either fails it could be down for multiple days.
and it would be down for multiple days if the data VM dies and doesn't restore correctly either.
-
But with the data you could have multiple known good copies, while with the VM you have only your individual backups.
Those all need to be tested on a regular basis to confirm they function, and testing a single import would take ~30 hours at best.
-
@DustinB3403 said:
But with the data you could have multiple known good copies, while with the VM you have only your individual backups.
Those all need to be tested on a regular basis to confirm they function, and testing a single import would take ~30 hours at best.
Multiple known good copies? huh? Why would I have multiple copies of that non changing data?
-
The very same reason you keep multiple copies of anything critical... so you have another to recover from.
Even if all 700GB are in this VM, you don't keep just 1 backup of it.
-
@DustinB3403 said:
The very same reason you keep multiple copies of anything critical... so you have another to recover from.
Even if all 700GB are in this VM, you don't keep just 1 backup of it.
You have a point here.
-
Dustin, you still haven't told me what makes my application VM more vulnerable than a data Samba share or a NAS, though, to warrant splitting it.
-
So my point with reducing the size of your VM is multi-fold.
It'll reduce backup time (unless you're doing deltas, in which case only the roll-over will take a while).
It'll speed up import time (less to transfer into XS).
It'll be less to keep stored as a backup.
If you put the data onto a separate medium (and chime in, folks, if you think I'm wrong here), you'd simply update the pathing in the database to access the primary remote store.
This remote store would get backed up to (let's just say) a 4-bay Synology, which then gets pushed off to (again, let's just say) Backblaze B2.
You'd have multiple copies of the data the VM needs, off host, which can then be restored from separate mediums should something go belly up.
-
@DustinB3403
Would you please STFU and take a moment to try and understand the scenario here.
FFS man, this is a really simple thing here. @Dashrender has an old EHR system that he needs to be powered on for historical lookup purposes. It is virtualized and thus hardware agnostic.
It is a legacy system. There are no developers available for it as the original developers sold it and the new owners EoL'd it 3 years ago.
Given all of that, how would you go in and break out all the static data (the TIF scans) without breaking the entire f[moderated]ing system?
Let me clue you in.
This is a SQL Server & IIS based application as noted previously.
This means it will be very safe to assume that when things were scanned in and saved, the application wrote the file path to the document into the database records for each file.
So you will then need to go spend time writing custom SQL to pull all of these references out of the database and then verify the structure so you can then update everything outside of the application itself.
Once you know how it is all mapped, you could easily move the TIF files over and map a share, use a UNC path, or even a symlinked folder (depending on the limits of the application).
Then you have to mass-update the records to match the new path.
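The steps above (pull the path references out, verify how they're stored, then mass-update them) can be sketched roughly like this. To be clear about assumptions: the real application is SQL Server based, sqlite3 is used here only so the sketch runs, and the table name (`documents`), column name (`file_path`), and every path (including the `\\nas01\scans` UNC share) are made up for illustration:

```python
import sqlite3

# Stand-in for the real SQL Server database; schema and paths are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, file_path TEXT)")
conn.executemany(
    "INSERT INTO documents (file_path) VALUES (?)",
    [("D:\\Scans\\2014\\chart001.tif",), ("D:\\Scans\\2015\\chart002.tif",)])
conn.commit()

# Step 1: the "custom SQL to pull references out" - inspect how the paths
# are actually stored before changing anything.
before = [r[0] for r in conn.execute(
    "SELECT file_path FROM documents ORDER BY id")]

# Step 2: the mass update - swap the local prefix for the new remote share.
old_prefix = "D:\\Scans\\"
new_prefix = "\\\\nas01\\scans\\"   # hypothetical UNC path
conn.execute(
    "UPDATE documents SET file_path = ? || substr(file_path, ?) "
    "WHERE file_path LIKE ? || '%'",
    (new_prefix, len(old_prefix) + 1, old_prefix))
conn.commit()

after = [r[0] for r in conn.execute(
    "SELECT file_path FROM documents ORDER BY id")]
```

Even on this toy schema, the order matters: copy the TIF files to the new location and confirm they open there before running the UPDATE, and take a full backup before either step, which is exactly the point being made about the real system.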
All of this on a legacy and unsupported system.
So prior to [moderated]ing an entire system up, what would you need to do? Oh yeah, make a F**** [moderated] backup. Which is exactly what @Dashrender is doing right now.