PSX_Defector

Posts
    • RE: Just spit-balling here....

      @Dashrender said:

      @PSX_Defector said:

      We just bought four new blades with 1TB of RAM. And we buy tons of equipment all the time. Ain't nobody got time to put in RAM!

They pay you too much to even install the blade, let alone the RAM on the blade.

      Well, someone gotta put it in. And that's the smarthands in the DC.

      posted in SAM-SD
    • RE: Just spit-balling here....

      @Dashrender said:

      @JaredBusch said:

      @Dashrender said:

      I buy the server and install the HDDs and RAM, and if needed, the second processor. Doesn't that qualify for what you are saying?

      Honestly, it damn near does. Why are you not buying this server built to spec? How much are you saving doing it yourself? How much time are you spending researching things and choosing components, etc.

Hold the phone - who specs your servers? Don't you do that? Meaning you need to do all that research to know what parts you want in the box?

      Now sure, I could pay CDW to put all of the components into the chassis, but frankly I enjoy that bit of down time in the mental process.

      We just bought four new blades with 1TB of RAM. And we buy tons of equipment all the time. Ain't nobody got time to put in RAM!

      posted in SAM-SD
    • RE: Just spit-balling here....

      @Dashrender said:

      @scottalanmiller said:

      The HP Proliant DL585 G2, the first machine that McAlvin and I designed to take on NetApp in a 10,000 node compute cluster for NFS performance in 2007. Used RHEL 5 and NFS 3. Crushed a half million dollar NetApp tuned by the NetApp team directly for the test. This is the SAM-SD 0, it wasn't called a SAM-SD until years later.

      what about your setup do you think allowed this box to crush them?

NetApp was pretty weak in performance back then. They were designed around massive scale-out, which ate into the performance.

As someone who works in various "cloud" providers, using this kind of method would not really be worthwhile for us. I've used 3PAR and NetApp, and now Pure and Compellent for storage. But for SMB, this is absolutely perfect. I used a Dell PE2950 stacked with a bunch of SATA drives for a SAM-SD once. I needed file server space and backup destinations, not SQL IOPS. That's the beauty of it, stack it with SSDs, you got something close to what Pure can supply. Stack it with SATA, you have a "NAS" that rivals anything out there. It's flexible and customizable. You just have to decide what is most important for your organization.

      posted in SAM-SD
    • RE: Google voice numbers

      @RojoLoco said:

      One was very irate and said "your rude behavior will be reported to Level 3 Communications..."

      You just answered your own question.

      L3 provides numbers for Google Voice, as they do for many different vendors. Odds are it was a pool number that was sold to Google sometime recently.

      posted in Water Closet
    • RE: Client system overhaul

      @Dashrender said:

      @PSX_Defector said:

      @Dashrender said:

      @PSX_Defector said:

      First, take the first machine and P2V it into the second machine. No point leaving it bare metal. Then take the first machine, nuke and pave then install Hyper-V or ESXi stand alone. Move your three VMs over to the first machine, nuke and pave the second machine with Hyper-V or ESXi, setup Veeam replication between them, then map the NAS through whatever way you need to for it to keep data onsite and off.

      WOW, this ends up with 4 copies of the data, probably overkill for them.

I'm guessing they only have two servers because the first one ran out of resources and storage slots, so they bought a second one. I have no idea how old the servers are, or what brand (though I'd guess Dell knowing my friend), etc.

      Nah, it's more along the lines of one copy. The NASes would only have critical data, not VM level replication. The two systems doing the VM shuffle would be their own "backups". I would take the NAS and have it mount a drive on the file server, be it NFS or SMB, to facilitate that. File sharing is a low intensity service, and it doesn't require much more than the network not be chatting to hell and back.

      The secondary server is for failover of the critical systems. Although if you wanted to you could use it also in production, it would be awful crowded on that one machine if the other one popped off though.

      How do you have full site recovery if the replicated NASs only have the data you're talking about? I think (not positive) that the purpose of the sync'ed NASs is for full site lost recovery - sure it would be slow, they'd have to get a new server, but they could pull the full VM images/backups/whatever from the remote NAS onto a new server and be up and running in less than a day once the server arrived.

      Risk versus cost. To do it right, you would need to replicate VHDs over to the second box as a warm standby, then to the NAS as a cold standby, which is then mirrored across to the other NAS. Yes, it can be done, but why bother? I don't need the bare VHDs to recover a system, I just want my data back. To bring back up Exchange from scratch would be trivial, and not to mention I would have to perform all kinds of stuff anyways to restore the deltas with backups and such. And odds are you are never gonna get a catastrophic failure of all of your drives at once in order to count on this. I almost never keep bare metal restores of VMs. As long as my critical data is backed up, e.g. MDFs, BAKs, and the main Exchange datastore, then I really don't care about the underlying OS.

If BOTH servers blow up, you got bigger problems. But there is a risk versus cost issue. As with all things holy, it's done in threes. You need an active/passive/DR setup if you want to cover all your bases. And in that case it might be more prudent to ship your VMs over to a cloud provider who would get you a DR point in place.

      posted in IT Discussion
    • RE: Client system overhaul

      @Dashrender said:

      @PSX_Defector said:

      First, take the first machine and P2V it into the second machine. No point leaving it bare metal. Then take the first machine, nuke and pave then install Hyper-V or ESXi stand alone. Move your three VMs over to the first machine, nuke and pave the second machine with Hyper-V or ESXi, setup Veeam replication between them, then map the NAS through whatever way you need to for it to keep data onsite and off.

      WOW, this ends up with 4 copies of the data, probably overkill for them.

I'm guessing they only have two servers because the first one ran out of resources and storage slots, so they bought a second one. I have no idea how old the servers are, or what brand (though I'd guess Dell knowing my friend), etc.

      Nah, it's more along the lines of one copy. The NASes would only have critical data, not VM level replication. The two systems doing the VM shuffle would be their own "backups". I would take the NAS and have it mount a drive on the file server, be it NFS or SMB, to facilitate that. File sharing is a low intensity service, and it doesn't require much more than the network not be chatting to hell and back.

      The secondary server is for failover of the critical systems. Although if you wanted to you could use it also in production, it would be awful crowded on that one machine if the other one popped off though.

      posted in IT Discussion
    • RE: i put myself in a big problem

      @scottalanmiller said:

      They don't treat the "business" that you work for as seriously as American or European (or Nicaraguan in my case)

That's the first time anyone ever referred to Latin America as "serious" in business. The work ethic in Latin America, outside of the lowest level of worker, is horrible. It was like pulling teeth to get things done in Costa Rica when I was running Vegas Club Room. And then we start mentioning South America, especially Brazil, holy f[moderated]!

      posted in IT Discussion
    • RE: Client system overhaul

      @Dashrender said:

      @PSX_Defector said:

      Didn't say what NAS they have, but NetApp has the ever so useful Snapmirror, which will replicate all the data to another device automagically.

      http://www.netapp.com/us/products/protection-software/snapmirror.aspx

      Performing replication is gonna depend on how fast they want to recover. Using things like Veeam to send data back and forth is fine, but the delta would be kind of a problem. Using snapmirror would replicate in real time and recovery would be within seconds.

      I would beef up the two servers, slap all of the VMs on one, run Veeam to clone across to the secondary for local redundancy, keep critical data on the NAS and shuffle the data over to the offsite backup with the other NAS.

      This would require a significant storage purchase at minimum, but not a bad idea, assuming the system will hold enough disk that is.

      Considering what they are probably using, I bet it wouldn't cost much. We ain't talking about my Cisco UCS blades with NetApp SANs. I would bet the "server" is some off the shelf junk from Fry's and the NASes are some kind of Buffalo device. Don't bother with PCI-E SSDs and fancy Fibre Channel SANs, this is fairly simple in the grand scheme of things. Some high quality SATA would do them just fine.

      First, take the first machine and P2V it into the second machine. No point leaving it bare metal. Then take the first machine, nuke and pave then install Hyper-V or ESXi stand alone. Move your three VMs over to the first machine, nuke and pave the second machine with Hyper-V or ESXi, setup Veeam replication between them, then map the NAS through whatever way you need to for it to keep data onsite and off.

      posted in IT Discussion
    • RE: i put myself in a big problem

      @IT-ADMIN said:

      you know guys, i thinking of promoting the server application again hhhhhhhhh
      i know some of you will insult me looooool
      what do you think guys

      posted in IT Discussion
    • RE: i put myself in a big problem

      @scottalanmiller said:

      @PSX_Defector said:

One of the things I tell lots of people is not to rely on luck to get you through a gambling session. But in this situation, as a gambler, you need to know how to hedge your bets. You're gonna reach another problem and go headstrong into it, like laying down $100 on a table without knowing what the game is. You need to think through the entire scenario: what will happen if you do this, what is your fallback position, what is your backout procedure, how do you know it's done and satisfactory.

      One more thing that has to be considered - the reward. How big is the payoff? In this scenario, the payoff, had everything worked perfectly, was effectively zero. There was risk without potential reward. That's a big deal too. He would not really have benefited here, even if things had not gone poorly.

      Agreed. This is betting on every horse in the race. Yeah, you win, but what do you get out of it? Maybe a dollar or two assuming that the longshot came in. If it was the 2-1, you are out money on the process.

See the road ahead, understand what is next, what your next move is, and it will usually work out. Something some folks don't understand. 🙂
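To put numbers on the every-horse bet (odds here are made up, purely to illustrate why covering every outcome loses money):

```python
# Hypothetical 20-horse field, $1 on each horse (fractional 2-1 pays 3.0 per $1 staked).
decimal_odds = [3.0, 4.0, 5.0] + [9.0] * 16 + [22.0]   # favorite ... 21-1 longshot
total_staked = float(len(decimal_odds))                 # $20 across the board

# Only the winning ticket pays out; every other dollar is gone.
profit_by_winner = [payout - total_staked for payout in decimal_odds]

assert profit_by_winner[-1] == 2.0    # longshot comes in: up a couple of bucks
assert profit_by_winner[0] == -17.0   # the 2-1 favorite wins: you're out money
```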

      posted in IT Discussion
    • RE: Client system overhaul

      Didn't say what NAS they have, but NetApp has the ever so useful Snapmirror, which will replicate all the data to another device automagically.

      http://www.netapp.com/us/products/protection-software/snapmirror.aspx

      Performing replication is gonna depend on how fast they want to recover. Using things like Veeam to send data back and forth is fine, but the delta would be kind of a problem. Using snapmirror would replicate in real time and recovery would be within seconds.
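The delta problem is just arithmetic: with interval-based replication, your worst-case data loss (RPO) is roughly the change rate times the replication interval, while real-time mirroring drives it toward zero. A rough sketch with made-up churn numbers:

```python
# Worst-case data loss (RPO) for interval-based replication vs. real-time mirroring.
# The churn rate is an illustrative assumption, not a measurement.

churn_mb_per_min = 50.0        # assumed write rate on the protected VMs

def worst_case_loss_mb(replication_interval_min: float) -> float:
    """Everything written since the last replica shipped is at risk."""
    return churn_mb_per_min * replication_interval_min

veeam_hourly = worst_case_loss_mb(60)    # e.g. an hourly Veeam replication job
realtime_mirror = worst_case_loss_mb(0)  # real-time replication: interval ~0

assert veeam_hourly == 3000.0    # up to ~3 GB of changes at risk per cycle
assert realtime_mirror == 0.0    # the delta effectively disappears
```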

      I would beef up the two servers, slap all of the VMs on one, run Veeam to clone across to the secondary for local redundancy, keep critical data on the NAS and shuffle the data over to the offsite backup with the other NAS.

      posted in IT Discussion
    • RE: i put myself in a big problem

      @IT-ADMIN said:

      @Dashrender said:

      You had an entire day of downtime on this and no one noticed?

      only the one who was working on the payroll software was having a connection error, i told him that we have a problem in the server, so he stop working on it until this morning when things come back to life
      fortunately the issue occur at about 6 PM and we finish the shift at 7 PM therefor the employee didn't complain because he was about to finish his shift

      People who know me know I'm a gambler. I go to Vegas all the time, I blow money at various casinos throughout the country.

      You just rolled a quick point on the craps table, you hit a blackjack on your first hand, you laid down $5USD on 21 red and it came up. You got seriously lucky. By pure chance you got out without anyone being the wiser.

One of the things I tell lots of people is not to rely on luck to get you through a gambling session. But in this situation, as a gambler, you need to know how to hedge your bets. You're gonna reach another problem and go headstrong into it, like laying down $100 on a table without knowing what the game is. You need to think through the entire scenario: what will happen if you do this, what is your fallback position, what is your backout procedure, how do you know it's done and satisfactory.

      Remember, the worst thing you can do as a gambler is go and do things without thinking. When I sit down and play blackjack, I have a good idea about the cards, the odds, where we are in the deck, what is available, what is not available. I calculate the risks, rewards, and make my decisions on that. As one should do when working on anyone else's machines.

      posted in IT Discussion
    • RE: Anyone Use a SCSI to iSCSI Bridge?

      Might just be easier to slap on a cheap-o machine with a SCSI card and access it that way.

      If this was a drive array, it might be different. Although tape libraries usually support the standard SCSI commands, once you obscure the process using devices rather than talk to it directly, it might not work as expected, e.g. cycle tapes when requested. I used to work on 42U tape libraries and I would never try to jerry rig one to work.

      Depending on your backup solution, and especially since you have a SCSI tape library, I would go ahead and grab me as many disks as I can and make me a disk to disk to tape system. The clients stream straight to disk, your local machine streams to tape. No muss, no fuss, and can be done easily with even the Windows backup client. And it's usually faster and offers a bit of retention from waiting for tapes to return from the vault.

      posted in IT Discussion
    • RE: i put myself in a big problem

      @IT-ADMIN said:

      actually this server application is very important but we don't backup the system image since it is a physical server , we just backup SQL databases

      That's all I ever do, so don't worry.

      Are you just having problems RUNNING SQL or is SQL running but you can't get anyone to access it? You should be able to fire it up by giving it a new account, but the systems that connect to it will need to know the new account. That might require lots of netstat searching.

      If no one can get access, you will need to do the same thing as above, but by getting into SQL and setting up user accounts. This will require the SA account or dropping into single user mode and jackin' with the info:

      https://msdn.microsoft.com/en-us/library/ms188236(v=sql.105).aspx
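Once you're in via sa or single-user mode, the recovery is mostly generating and running a handful of T-SQL statements per lost account. A throwaway sketch that spits out that T-SQL; the login, password, and database names are placeholders, pull the real ones from the SQL error log:

```python
# Throwaway sketch: generate the T-SQL to recreate SQL logins and re-map
# database users after the Windows accounts were wiped. All names here are
# hypothetical examples, not anything from the original thread.

def rebuild_login_sql(login: str, password: str, database: str) -> list[str]:
    return [
        f"CREATE LOGIN [{login}] WITH PASSWORD = N'{password}';",
        f"USE [{database}];",
        f"CREATE USER [{login}] FOR LOGIN [{login}];",
        f"ALTER ROLE [db_datareader] ADD MEMBER [{login}];",  # least privilege first
    ]

stmts = rebuild_login_sql("payroll_app", "ChangeMe!123", "PayrollDB")
assert stmts[0].startswith("CREATE LOGIN [payroll_app]")
assert "CREATE USER [payroll_app]" in stmts[2]
```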

      posted in IT Discussion
    • RE: i put myself in a big problem

      @IT-ADMIN said:

      i'm sure if i speak with the management about this, they will said to me no since everything is OK why are you looking for trouble,,,for this reason i act by myself and do it without telling them anything

      Why does this sound so familiar?

      Nah, it's just in my head.

      posted in IT Discussion
    • RE: i put myself in a big problem

      @scottalanmiller said:

      @IT-ADMIN said:

      since i have a connection error, it means that the connection use local account, because all local acconts were deleted (when i go to users and groups i found only 2 account : administrator and guest)

      I am not aware of using local accounts for SQL Server. The SQL Server runs on the box that you put the Domain Controller on or on a separate server?

There are two, very misleadingly named, types of accounts with SQL: Local and Windows Authentication. Local means SQL only, stored in the master security table. Windows Authentication means that it's set up to read the GUIDs of IDs within Windows, be it local or domain. You have to add them in separately.

      IT-ADMIN, if you have the sa account, you might be able to pull yourself out of the fire. Get the logs, find out what needs to be recreated, then you will have to rebuild the accounts by hand and reset everyone who might have been accessing it. Certainly better than the current hands in the air pants on fire situation.
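The distinction shows up right in the connection string an application uses (server and account names below are made-up examples):

```python
# The two SQL Server auth modes as an application sees them.
# Server, database, and credential values are hypothetical.
sql_auth = (
    "Server=SQLBOX01;Database=PayrollDB;"
    "User Id=payroll_app;Password=ChangeMe!123;"   # login lives inside SQL itself
)
windows_auth = (
    "Server=SQLBOX01;Database=PayrollDB;"
    "Trusted_Connection=True;"                     # maps the Windows/domain identity
)
assert "Trusted_Connection" not in sql_auth
assert "Trusted_Connection" in windows_auth
```

Wiping the local Windows accounts breaks only the second kind; the first keeps working as long as SQL's own security tables are intact.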

      posted in IT Discussion
    • RE: Time to try my hand at programming as a career

      @scottalanmiller said:

      @JaredBusch said:

      C# is a good language but pretty limited in scope. I would save that as it is really the language of Windows only development, not where most development, especially the good stuff, happens.

      I would disagree that it is limited in scope. It is the language of millions of Windows desktop applications. It will be decades before it is no longer used.

      That's pretty limited in scope, is it not? Making "Windows desktop apps" is extremely limited compared to making "broadly used server apps", "Windows and other desktop apps", embedded apps, etc. C#'s primary focus is in a single use case and a single platform (actually two use cases, but its server side use has gone down a lot because of the limitations.)

      What?

I've got tons of folks using C# for IIS applications. Not to mention that SharePoint is built on it, which dominates the market. Although the C# work for SharePoint is a lot less than the SQL work needed.

      Plus the code is very portable, as long as you have access to the right .NET Framework, it will run. Soon to any platform:

      http://techcrunch.com/2015/04/29/microsoft-launches-its-net-distribution-for-linux-and-mac/#.q2qqlx:sy3J

      posted in Developer Discussion
    • RE: Self Hosted FTP

      @scottalanmiller said:

      If you are on Windows I would stick with IIS.

      This is why you don't let a Unix admin do a Windows admin's job. 🙂

IIS FTP, be it 6, 7, or 8, sucks ass. Securing it is a pain in the ass, it eats resources badly, and only offers FTPS for secure transfer. If all you need is FTP, FileZilla Server does a better job, with fewer resources and higher scaling. It doesn't do it all, e.g. SFTP, but it's certainly better than IIS FTP. Just having the autoban feature is worth not using IIS FTP.

      Once you get into paid FTP daemons, you get some real options. Ipswitch WS_FTP Server can do everything and anything. You want AD integration, restricting directory access by the hour and by the user? That's what you get with better applications.

      posted in IT Discussion
    • RE: OSPF and BMG Usage in Networking

      @Dashrender said:

It's truly inconceivable to me that it would be possible to host a server in a DC for less than you can in house. Of course this makes a few assumptions.

      1. in-house I don't pay a fee for the server location
      2. I don't need specialized heating/cooling
      3. Not concerned with redundant ISP links
      4. Not concerned with generator backup power
        etc

      Do I need to pull out the old Out of the Closet and into the datacenter document? You ain't doing it right in the closet.

Yeah, if you skimp on most things and ignore everything, odds are it's gonna be "cheaper" to host in-house. Most of the time, if you get more than two "servers", using cloud-based boxes instead would come out cheaper in the long run.
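A back-of-envelope comparison, with entirely made-up monthly numbers, shows why the closet only looks cheaper while you're skimping: once you're past a couple of boxes and have to buy the redundant ISP links, real cooling, and generator you were ignoring, the fixed overhead swamps the per-server savings.

```python
# Illustrative monthly costs only -- every figure here is an assumption,
# not a quote from any provider.

def inhouse_monthly(n: int) -> float:
    if n <= 2:
        return 60.0 * n           # shelf in the closet, consumer UPS, no redundancy
    # past a couple of boxes you can't skimp: redundant ISP, cooling, generator
    return 800.0 + 60.0 * n

def cloud_monthly(n: int) -> float:
    return 160.0 * n              # assumed per-instance price, redundancy included

assert inhouse_monthly(2) < cloud_monthly(2)   # the skimped closet looks cheaper
assert inhouse_monthly(3) > cloud_monthly(3)   # done right, cloud wins past ~2 boxes
```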

      posted in IT Discussion
    • 1 / 1