Please post what you find.
I'm interested in this too, because I need to do the exact same thing on a couple of servers.
@momurda Don't know the answer but everything is more complicated in linux - by design. It's called "flexibility" and "choice". Windows is simpler - also by design.
Anyway, an SNMP trap is when the agent (the server) initiates the communication and sends information to the manager (the client) - Observium/OpenNMS in your case. The trap destination is set on the agent side in snmpd.conf, and the receiving daemon on the manager side is configured in snmptrapd.conf, so those are the logical places to look.
PS. Obvious I know, but check any firewall settings too.
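If it helps, here's a minimal sketch of that split, assuming net-snmp on both ends. The hostname and the community string are just placeholders:

```
# /etc/snmp/snmpd.conf on the monitored server (the agent):
# send SNMPv2c traps to the manager
trap2sink  observium.example.com  public

# /etc/snmp/snmptrapd.conf on the manager (the trap receiver):
# accept and log traps sent with this community
authCommunity  log  public
```

Restart snmpd and snmptrapd after changing either file.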
@romo said in Make MS SQL Server 2014 Log Every Query:
@pete-s said in Make MS SQL Server 2014 Log Every Query:
Extended Events
@Pete-S happen to know where the default location of the event files are saved?
I created a new session and I believe it is properly showing the queries that ran. But if I try to change the place where the file is logged to, it doesn't start - and if I leave the default set, I can't find the file!
I'm not sure the default is actually a file at all; it may be memory buffers.
However, it sounds like you have a user rights issue. Make sure the SQL Server service account is allowed to write to the location where you put the file.
@scottalanmiller Check out Extended Events. It can log everything that happens.
The fastest way to learn more is probably to search YouTube for "sql server extended events".
Extended Events are supposed to be the replacement for SQL traces/Profiler.
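As a rough sketch, a session that captures every completed batch and RPC call to a file target might look like this. The session name and file path are just examples - pick a folder the SQL Server service account can write to:

```sql
-- Create a session that logs all completed statements to .xel files
CREATE EVENT SESSION [LogAllQueries] ON SERVER
    ADD EVENT sqlserver.sql_batch_completed,
    ADD EVENT sqlserver.rpc_completed
    ADD TARGET package0.event_file (SET filename = N'C:\XELogs\LogAllQueries.xel')
    WITH (STARTUP_STATE = ON);
GO

-- Start it now (STARTUP_STATE only covers future restarts)
ALTER EVENT SESSION [LogAllQueries] ON SERVER STATE = START;
```

You can then browse the .xel files from "Management > Extended Events" in SSMS or read them with sys.fn_xe_file_target_read_file.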
Doesn't sound like you have a mobile workforce. Maybe just put a small NAS in the office and call it a day. Just sayin'
@fateknollogee said in Small colo infrastructure - rack layout feedback:
@pete-s The fs.com website is what every decent supplier should strive for.
Quantity in stock is shown clearly, no guessing needed! They tell you when the stuff will ship!!
Yes, that's the way it should be.
I saw that they also do custom orders so if I need 200 purple power cables that are exactly 5.5 ft long they can do that.
I'm already putting together my first order from them.
@fateknollogee said in Small colo infrastructure - rack layout feedback:
Great source w great pricing on Slim Cat6 patch cables: https://www.fs.com/c/28awg-slim-patch-cables-613
Also a great source for Fiber cables.
That's awesome. Just what I was looking for.
A global supplier with great prices and lots of items in stock. Fiber patch cables, SFP+ DAC twinax and optical modules are all well priced, they show what each item is compatible with - and there's a lifetime warranty.
@nerdydad said in Small colo infrastructure - rack layout feedback:
Are the switches logically stacked together? If so, don't forget to distribute your connections between the servers across both switches for added redundancy. Otherwise, looks good to me, as far as the hardware layout goes.
Yes, it's a real stack not just configuration stack. It will be connected like the picture below, but with two firewalls for redundancy.

We had some discussion on this here:
https://mangolassi.it/topic/18052/questions-on-redundant-switch-setup
@phlipelder said in Small colo infrastructure - rack layout feedback:
@pete-s If the runs are not fibre look into 10GbE certified ultra-thin patch cables. We've started using them for all of our data centre deployments as they save a huge amount of space. There's some really good but expensive VELCRO rolls for tying things up. We've picked up a box or two of VELCRO thin and wide plant ties each. Same stuff as the computer ones in black but a tenth of the price. So what if they're green. ;0)
PDU cables rated for 240V are freaking huge and a bear to manage. I'd bundle and run them straight down the middle, then out to the sides and up to their positions on the PDUs left/right. That's a bit more cabling to deal with, but it would keep the sides clear so the nodes can be pulled without messing around with getting the PDU cables out of the way. Think of a W shape for the cable bundles, one left and one right.
EDIT: Make sure the PDU cables support a native locking mechanism at the PDU side at the very least.
Great info! I didn't know about the thin patch cables but they look great.
I'll think about what to do with the PDU cables though. Power supply is actually redundant for both nodes so it's not a big problem to unplug one cable to pull out a node and then put it back.
@scottalanmiller said in Small colo infrastructure - rack layout feedback:
I think you have the physical design down. Only question really is "switch on top" or "switch in the middle". And I think at 14U, on top is better.
If it were a full-sized rack with, say, twice as many servers - would it still be switch on top?
@scottalanmiller said in Small colo infrastructure - rack layout feedback:
@pete-s said in Small colo infrastructure - rack layout feedback:
DB servers will maybe run on bare metal. They have NVMe drives, and both Xen and KVM cause severe performance drops in IOPS and throughput.
LXC would be the more obvious choice.
Thanks, I'll look into LXC. Did you mean LXC specifically or containers in general?
Right now I'm mostly focused on getting the hardware set up in a practical and standardized way. Then it will be configuration of switches and firewalls. Then I can start setting up the servers with hypervisors, raid arrays etc. And then start the work on setting up VMs, containers maybe, installing OS, software etc, etc.
@emad-r said in Small colo infrastructure - rack layout feedback:
@pete-s
Interesting, is this Visio?
Yes, Visio Pro.
secondly, what is the reasoning for the DB host 1+2, do you mean database or data backup?
No, I mean database servers. Backup will be on the fileservers and off site.
The reasoning was to set up the databases with replication / cluster between the DB server 1 & 2 - and just do it once. This would also give good performance and be scalable to more servers. The VMs running the application code are on the other servers.
DB servers will maybe run on bare metal. They have NVMe drives, and both Xen and KVM cause severe performance drops in IOPS and throughput. I'll test both bare metal and virtualized once I have everything set up, to see how much difference there is in real life.
@jimmy9008 @gjacobse
Yes, the datacenter is fully redundant with UPS and generators, and also on internet connections, network infrastructure, cooling etc. Good point though!
After some thought, this is what I'm thinking on cabling.

Servers are Supermicro twins, so it should be possible to pull the nodes out backwards. Power supplies are in the middle. The front has only drive bays.

I need some feedback on how to place servers in a colo rack and how to wire them up so it is practical.
14U (1/3 rack) is what I have. The real rack cabinet is of course full height, but I've drawn it as a 14U rack to show the available space.
I've placed some 1U cable management but I'm not sure I need it. I have about 20 CAT6 connections per switch to hook up, plus stacking cables between them. They are 48-port switches.

No need to wipe all the disks. It's a waste of time.
To remove just the RAID superblock:

mdadm --zero-superblock /dev/sdX

You could also use wipefs -a /dev/sdX to remove just partition tables and RAID signatures - if you have it installed.
And as soon as you put together a RAID-10, it will start synchronizing the disks and write the same zeros one more time.
If you really, really want to wipe the drives so no residual data remains on them, wipe just what is needed to build a RAID-10 array out of them, then wipe the array with zeros as a block device. It will save a lot of time, as the synchronization process will write the zeros to the other drives for you.
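As a sketch, assuming four blank disks sdb through sde - the device names are placeholders, so double-check yours before running anything destructive:

```
# clear old partition tables and RAID signatures only (fast)
wipefs -a /dev/sd[b-e]

# build the RAID-10 array across the four disks
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]

# zero the array once as a block device; the mirroring writes
# the zeros to the redundant copies for you
dd if=/dev/zero of=/dev/md0 bs=1M status=progress
```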
Do you have drive bays to put disks in?
If you have, preferably 3.5", put two enterprise SATA or SAS drives in there. Dedicate those two drives to a VM, run software raid on them, and use it as a file server. Or make a raid 1 array and give that to the VM, if you have hardware raid on the VM host.
Absolutely no need for anything faster than 2 x 7200 rpm drives unless you are on a 10 GbE network. Then you need SSD storage.
Get something like two Seagate Exos X10 10TB or 12TB. SATA or SAS version are almost the same price. https://www.amazon.com/Seagate-256MB-Cache-Enterprise-ST10000NM0086/dp/B01LXXV880
You don't need anything more than raid 1 for small requirements like what you have. If you said 50TB or something like that, then it's time to look at raid 6.
PS. Also do the math on how long it takes to move files. 1 GbE is roughly 100 MB/sec. That's for sustained sequential writes of large files.
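For example, a rough back-of-the-envelope for filling one of those 10 TB drives over 1 GbE at ~100 MB/s (decimal units, so 10 TB = 10,000,000 MB):

```shell
# seconds to move 10,000,000 MB at 100 MB/s
echo $(( 10000000 / 100 ))         # 100000 seconds
# and in whole hours
echo $(( 10000000 / 100 / 3600 ))  # about 27 hours
```

So a full initial copy or restore is an overnight-plus job, which is worth knowing before you need it.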
@scottalanmiller said in Battery Backup with SSD raid:
@pete-s said in Battery Backup with SSD raid:
@scottalanmiller said in Battery Backup with SSD raid:
@pete-s said in Battery Backup with SSD raid:
@scottalanmiller said in Battery Backup with SSD raid:
SSD NV protection is to allow the SSD's cache to flush safely should power be lost. RAID NV / battery protection is to allow the RAID's cache to flush safely should power be lost. Each is important on its own, neither covers for the other one.
That's technically slightly incorrect.
The non-volatile cache memory on the raid controller is there to preserve the data that has not yet been written to the drives, until power is restored again.
On the SSD, the capacitors hold enough charge so that the drive can write the remaining data in the cache memory to the actual flash memory after the power is gone. The cache is DRAM, so it will lose its contents after a few seconds.
The only time details like this matter is in a case like removing the battery from a raid card - then your data might be lost.
I'm missing how that is different than what I said. What you said is correct, but I feel like you just reworded what I said, with the added detail that the RAID card flush is not until power is restored, which one hopes is obvious.
Sorry Scott, you're right. I was just thrown off because you said "SSD NV protection" and because you worded both things the same. Obviously both are there to protect against data loss at power failures.
OIC, you are saying that the SSD is volatile, but has a battery in most cases? makes sense.
Almost, let me explain. Below is a picture of a Samsung enterprise SSD, the SM863.
The SSD controller (yellow) is the brain. The flash memory (green cross) is non-volatile so it will not suffer data loss without power. There are also more flash memory on the backside.
The cache memory, however, is the blue ring, and it will lose its contents as soon as the power is removed. It's the same type of memory as in your computer, DRAM. That would cause immediate data loss, which is not good, and that is why enterprise drives have a lot of capacitors (red circles).
The capacitors (red) act like small rechargeable batteries. When the drive loses its external power, these small capacitors work as a power reserve for the entire drive. The controller (yellow) knows that it has lost external power, so it will quickly write the data from the cache memory (blue) to the flash memory (green) before the reserve power from the capacitors (red) is exhausted. That way data loss is prevented. This only takes a couple of seconds at most.

@animal said in Another resume review:
Looks good. Personally I would move the IT certs to the bottom, just above your technical education. Also, I really don't think it's necessary to have your CompTIA ID listed - just the cert and the date (and whether it's current or not). You can also put them in a table side by side to get rid of some of the white space.
Another thing... I just posted in another discussion about the fact that you should tailor your resume to the job you're looking for. What type of job are you looking to get with this resume?
Some certs, like CISSP, are a prereq for some jobs so it makes sense to show it up front.