ML

    Posts

    • RE: Raspberry Pi-based KVM over IP

      @Dashrender said in Raspberry Pi-based KVM over IP:

      @travisdh1 said in Raspberry Pi-based KVM over IP:

      @dafyre said in Raspberry Pi-based KVM over IP:

      I like the idea of something like this. I admit that some of the points (such as powering the device) are good ones.

      I think something like this could be huge for the team I work on. We do have to physically visit the servers every now and again to fix a botched VMware upgrade (rare, but it happens) or what-have-you. Having something like this would be great. Connect the PiKVM to wireless so we don't have to dig out a monitor, mouse, keyboard, and power cable and find something to sit it on.

      We then could go back to the admin machines (or back to our office!) and connect to the PiKVM over wireless. No muss, no fuss.

      Couldn't you power the PiKVM with a battery, or via the USB connection it already uses for the keyboard & mouse? Or via a second USB connection for power instead?

      If available, you can run a Pi with a PoE-to-5V USB adapter. I had an RPi 3 B+ that I used that way for a long time. That's assuming you have a network port to provide the power.

      The disadvantage to using iDRAC, iLO, et al., in our case is that we just don't have enough network ports to do that. Plus iLO & iDRAC are all 1-gig connections and all of our switches are 10-gig.

      10-gig network switches should all work at 1-gig as well. Now, not having enough network ports is at least understandable.

      Is it though? Presumably critical infrastructure... and you can't buy an additional switch to get iLO online?

      One single 48 port 1 gigabit L2 switch will cover a full rack. Not only for server OOB management but also for other devices that might have a management port, such as switches and firewalls. Minuscule cost compared to the servers and the rest of the rack.

      posted in IT Discussion
    • RE: Another RDS server?

      @Pete-S said in Another RDS server?:

      @siringo said in Another RDS server?:

      @Pete-S yes, there's one other VM. It has 8 vCPUs, 16 GB of startup RAM and uses dynamic memory.

      Unfortunately I didn't have time to look at the problem too much today so I'm not too sure how busy the host was.

      You have 48 GB RAM and one 8-core CPU on the Hyper-V host. 10K HDDs.

      On that host you have:

      • 1 VM, 16GB-36GB RAM, 8 vCPU, running 2016 RDS with 15 users @ 70% CPU
      • 1 VM, 16GB+ RAM, 8 vCPU.

      I think your hardware is just not up to the task. Not enough RAM and not enough cores.

      Also, the server's memory config is puzzling to me because E5-2600 v4 CPUs have 4 channels of memory.
      So for maximum performance you should use 4, 8 or 12 DIMMs. The only way I get to 48 GB RAM is with 12 x 4 GB DIMMs. That's very unusual for a server of that generation.

      Or you might have 6 x 8 GB DIMMs, which in that case is bad. It's called an unbalanced memory configuration. It works, but it's low performance. You're only getting 60% of the memory bandwidth.

      posted in IT Discussion
    • RE: Nextcloud experience

      @stacksofplates said in Nextcloud experience:

      @Pete-S said in Nextcloud experience:

      @Dashrender said in Nextcloud experience:

      @Mario-Jakovina said in Nextcloud experience:

      We mostly use NC offline (with Linux and Windows clients), not online .

      huh - you sync 96 GB of data to 40 users for mainly offline use? And you don't have conflict issues (more than one person editing a file while offline)?
      Plus - MAN, that's a lot of data to replicate everywhere.
      OK OK OK - before JB jumps on me - that's not exactly what you said, but the details are pretty light.

      On 4 of our sites, we have fileservers that are registered as NC users, and their NC folders are shared in local LANs.

      Is the expectation of these 4 sites to use the files in an online state? Is the setup this way because of need/desire for faster access?

      The amount of data and the number of users doesn't really matter. It's the bandwidth that matters. How much data is changed every day, how many people need access to that data, how fast do they need that access and how big is the pipe?

      40 people are not all sharing the same files with each other and are not all working on the same thing.

      For instance, the average file size in Mario's case is 0.5 MB. Assume 40 users each change 40 files every day and every file is shared by 10 people on average. Using a cloud service, that means you have 40x40x0.5 = 800 MB to upload every day and 40x40x10x0.5 = 8 GB to download every day. Spread over an 8-hour working day, 8 GB of downloads is 1 GB/hour or about 300 kB/sec. So you need about 3 Mbit/sec of download speed to dedicate to file transfer and about 0.3 Mbit/sec for uploads.

      Most companies could easily support that amount of data traffic from one site. And in this case it's several.

      And even with this low bandwidth requirement it would only take a few seconds to upload your average 0.5MB file. And a few seconds before it's downloaded to another computer.

      Pretty sure this isn't what happens with cloud services. For instance, from what I've seen, Dropbox will dedupe files and store metadata locally.

      Then if it's already deduped they can just copy the diff.

      I was calculating worst case without any regard to how the files are synchronized.

      You could have peer-to-peer sync within the LAN, you could have block-based transfer and only send the blocks that have changed, as you mentioned, and you could also have files on demand, which means they are only synced when needed. In any case it will lead to less information being transferred.

      The point of my post was to show that even without any fancy schmancy features the amount of data and bandwidth requirements are pretty low.
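
      In code, the back-of-the-envelope estimate above looks like this (a sketch using the same assumed inputs: 40 users, 40 changed files per user per day, 0.5 MB average file shared with 10 people, spread over an 8-hour day):

      <?php
      // Back-of-the-envelope sync bandwidth estimate, same assumptions as above
      $users        = 40;
      $filesPerUser = 40;   // changed files per user per day
      $fileSizeMB   = 0.5;  // average file size
      $sharedWith   = 10;   // people each changed file is shared with
      $workHours    = 8;    // assumed working day

      $uploadMBPerDay   = $users * $filesPerUser * $fileSizeMB;               // 800 MB
      $downloadMBPerDay = $users * $filesPerUser * $sharedWith * $fileSizeMB; // 8000 MB

      $seconds      = $workHours * 3600;
      $uploadMbps   = $uploadMBPerDay   * 8 / $seconds; // ~0.2 Mbit/s (the post rounds up to 0.3)
      $downloadMbps = $downloadMBPerDay * 8 / $seconds; // ~2.2 Mbit/s (the post rounds up to 3)

      printf("Upload:   %d MB/day, ~%.1f Mbit/s\n", $uploadMBPerDay, $uploadMbps);
      printf("Download: %d MB/day, ~%.1f Mbit/s\n", $downloadMBPerDay, $downloadMbps);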

      posted in IT Discussion
    • RE: Any pfSense users? Are upgrades smooth?

      @pete-s said in Any pfSense users? Are upgrades smooth?:

      Major version upgrades can cause issues. So 2.4.x to 2.5.x has to be looked at first.

      Minor version upgrades are usually problem free.

      The problem with major upgrades is usually related to the new FreeBSD version and to any extra packages you have installed. And sometimes to deprecated functionality.

      I wouldn't upgrade without doing the homework first.

      posted in IT Discussion
    • RE: New IT update 60TB / 60 mil files / 20 people - HP Equipment

      @jim9500 said in New IT update 60TB / 60 mil files / 20 people - HP Equipment:

      Hey guys, thought I'd get your input. Our org has grown and I'm needing to double our internal production server capacity. We are currently running an HP DL360 G8 / Windows Server 2012 Standard / P822 / 3 x D2600 + 36 HP 3TB SAS drives (MB3000FBUCN) configured in RAID 10 ADM (3 drives per set).

      I'm getting horrible benchmarks on the random read MB/s access with CrystalDiskMark RND4K Q1T1 (3.73 vs 2780 sequential). This server stores about 30 million files ranging from 50kb - 10MB and the full storage capacity rotates probably 2 - 3 times a year as we process these jobs then move them to cold storage.

      Forget 2.5" HDDs. SSDs have killed that market with huge performance improvements for the same or lower price. And much higher reliability.

      Where magnetic media reigns supreme is large storage arrays using 10+TB 3.5" HDDs. Best price per TB for this technology.

      SSDs have moved on to the NVMe interface for performance reasons but there are still legacy applications for SAS and SATA SSDs.

      Don't know how much storage you need at what speed, but you'd get the best of both worlds with a mix of 3.5" HDDs and SSDs (preferably NVMe).

      If you're using an external SAS enclosure, put the SSDs in the server and the HDDs in the enclosure. That way you'll minimize the performance drop. SSDs will otherwise saturate the SAS links.

      If you want to go all SSDs, you might want to go for SATA if you want lots of them and want to put them on a RAID card. Current pricing is about $170 per TB for Samsung's value enterprise SATA and also their NVMe drives. There are few SAS SSDs available and they cost more. If you buy from HP (or Dell), expect to pay 2 to 3 times as much.

      As a comparison 3.5" enterprise HDDs are below $30 per TB.

      As for drive sizes, SSDs will usually get cheaper per TB as you go bigger up to about 8TB or so. Fewer drives are better than more drives so go as big as you need.

      For 3.5" drives the point of diminishing return is 16TB. There are larger drives but they are not as cost effective and can be hard to find.

      posted in IT Discussion
    • RE: New IT update 60TB / 60 mil files / 20 people - HP Equipment

      @jim9500 said in New IT update 60TB / 60 mil files / 20 people - HP Equipment:

      @scottalanmiller Interesting - so the SSD tech got rid of the parity calc corruption issue on RAID 5 / RAID 6 rebuilds?

      Nope, it's the same issue. People forgot about it because when the SSDs started to appear on servers, they were small drives like 100GB. Nowadays they are the same size as magnetic media.

      The factor was the probability of unrecoverable bit errors during the array rebuild process. The only redeeming factor for enterprise SSDs is that they are better in this regard. The JEDEC standard for enterprise SSDs is an unrecoverable bit error rate of 1 in 10^16 bits or better, while 1 in 10^15 is common for enterprise HDDs.

      So it's still a question about the probability of failure during a rebuild. And the same rules apply, just with different numbers.
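
      To put rough numbers on that probability, here's a sketch with assumed values (an 8 TB rebuild read and the UBER figures above), not anything measured:

      <?php
      // Probability of hitting at least one unrecoverable read error (URE)
      // while reading $bytes of data during a rebuild, given a UBER of $uber errors per bit.
      function ure_probability(float $bytes, float $uber): float
      {
          $bits = $bytes * 8;
          // 1 - (1 - p)^bits, computed via log1p/expm1 to stay accurate for tiny p
          return -expm1($bits * log1p(-$uber));
      }

      $rebuildBytes = 8e12; // assume an 8 TB drive's worth of data is read during the rebuild

      printf("Enterprise SSD (UBER 1e-16): %.2f%%\n", 100 * ure_probability($rebuildBytes, 1e-16)); // ~0.64%
      printf("Enterprise HDD (UBER 1e-15): %.2f%%\n", 100 * ure_probability($rebuildBytes, 1e-15)); // ~6.2%

      Same formula, different numbers - the risk didn't go away, it just shrank by about an order of magnitude.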

      posted in IT Discussion
    • RE: Simple comms. What to do?

      @siringo said in Simple comms. What to do?:

      I have a site where the two main servers (Windows) are located about 15 'cable' metres from the switch (switch A) they plug into.

      Each server has 3 NICs.

      I'm wondering what others would do?

      Would you run 6 cables from the servers to switch A

      or

      place a switch (switch B) near the servers and run 1 cable from switch B to switch A?

      Thanks for any help.

      15 meters (50ft) is not very long. No need for another switch.

      The proper way IMHO is to set up a couple of patch panels. Use rack-mounted if you have racks, otherwise wall-mounted.

      posted in IT Discussion
    • RE: Taking suggestions about x86 Access replacement

      @mr-jones

      First you need to figure out what they are actually doing/using.

      Are they just opening the MS Access database or are they actually running an MS Access application or some other application that is using the MS Access database?

      posted in IT Discussion
    • RE: Bring order into IT environment in chaos

      @notverypunny said in Bring order into IT environment in chaos:

      @pete-s said in Bring order into IT environment in chaos:

      @notverypunny @black3dynamite @gjacobse @EddieJennings

      Thanks guys!

      Is there any cloud software suitable for keeping the customer inventory/documentation in that would fit an SMB price range?

      It doesn't make sense for the customer to pay for a full-fledged asset management solution with ticketing and every other possible module. But it makes sense to have the documentation in some central location.

      Depends of course on your price range. Teclib does hosted GLPI (https://www.glpi-network.cloud/); pricing shows as 19 euro / tech / month. Unless I missed something, Snipe is all manual entry whereas GLPI does agent-based automatic inventory. It also has a KB, ticketing and financials baked in. GLPI stands for Gestionnaire Libre de Parc Informatique (roughly translated: Free IT Asset Manager). You could also run it on-prem; it just requires a basic LAMP setup.

      I had a look at that when you mentioned it before. It looks promising. I'm looking to step up our own documentation as well and I like the rack feature especially.

      [image: dcim_racks-1.jpg]

      posted in IT Discussion
    • RE: Taking suggestions about x86 Access replacement

      @scottalanmiller said in Taking suggestions about x86 Access replacement:

      Whereas PHP is absolutely 100% the language built for this task.

      That is so true. Initially PHP was an abbreviation of Personal Home Page. That's how much it was built for this task!

      All the other languages were constructed up front, but PHP has been dynamically expanded into what it is today, based on the needs of developers over the years. That's why it's relatively easy to do what you need with it.

      I'd always thought of it as an interpreted, dynamic version of C. Considering that C is the foundation of the entire Unix/Linux universe, it made perfect sense to build PHP on that syntax.

      posted in IT Discussion
    • RE: Excel Help

      @hobbit666 said in Excel Help:

      @dbeato Thanks for that,
      No idea how it works but it does 😄

      The IFS formula is just a bunch of IFs stacked together:

      =IFS(F2=$B$12,$C$12,F2=$B$13,$C$13,F2=$B$14,$C$14,F2=$B$15,$C$15,F2=$B$16,$C$16,F2=$B$17,$C$17,F2=$B$18,$C$18)

      Is the same as:

      if F2=B12 then result=C12
      if F2=B13 then result=C13
      if F2=B14 then result=C14
      if F2=B15 then result=C15
      etc

      So it's basically just looking at the table with the Windows build numbers and versions.

      The $ inside the cell names is just to tell Excel what to do with it when you copy the formula to another cell.
      $B$12 just means B12 will always be the absolute cell B12 regardless of where you copy the formula.

      posted in IT Discussion
    • RE: Deploying firmware updates on servers and testing...

      @jimmy9008 said in Deploying firmware updates on servers and testing...:

      Hi folks,

      We have quite a few servers running outdated firmware. Due to an issue with the current firmware version, we have been going server by server updating to a newer bios firmware. These are Dell servers, all the same model and under warranty.

      We have so far done about 20 servers and they went fine. However, the 21st server developed a flapping issue on one of the NIC interfaces causing unplanned downtime to the VMs.

      Management is asking us to identify which of the remaining servers will develop an issue following patching, such as a flapping problem, so they can be done at a different time to lower the impact of an outage.

      My thoughts are that we cannot know if a server will develop an issue from a patch before doing the patch. But, they want a plan to know which to avoid.

      Any advice here on how we could accomplish this? My plan would be to plan the patch, as the patch is valid for the server, and then leave the server out of the cluster for 24h and monitor for flapping/blue screen/whatever, then put it back in the cluster. I do not think we could ever know beforehand if any one server will happen to develop an issue from a patch which is for that server.

      You said they are the same model. I would have a look at the serial numbers then. Make a list and put them in order. Identify which series of numbers you have already updated without problems. Then begin by updating the servers that are in the successful series.

      If you encounter the same problem, I think you should be able to identify it pretty quickly. Maybe consider having a spare server that is ready to go. If you find a problem with one server, you could swap it immediately and just move the drives over.

      It's also possible that the flapping issue was pure coincidence, unless you verified it by going back to the old firmware and seeing the issue disappear.

      posted in IT Discussion
    • RE: Need to handle parsing these strings in PHP

      If you're good with regular expressions you could use a loop of preg_replace instead and not hardcode the parsing in PHP. It would be more flexible and you could even put the regexp definitions in their own file.

      For instance with the expression | substitution:
      ^(Yealink)[^T]+(T[0-9A-Z]*) ([0-9\.]+)$ | $1,$2,$3
      ^snom(.*)\/([0-9\.]+)$                  | Snom,$1,$2
      etc

      User agent is reformatted into:
      Yealink SIP-T54W 96.85.0.5      => Yealink,T54W,96.85.0.5
      Yealink SIP VP-T49G 51.80.0.100 => Yealink,T49G,51.80.0.100
      snomPA1/8.7.3.19                => Snom,PA1,8.7.3.19
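
      A minimal sketch of such a loop, assuming the rules are kept as a pattern => replacement array (the array and function names here are just for illustration):

      <?php
      // Hypothetical rule list - could just as well be loaded from its own file
      $rules = [
          '/^(Yealink)[^T]+(T[0-9A-Z]*) ([0-9\.]+)$/' => '$1,$2,$3',
          '/^snom(.*)\/([0-9\.]+)$/'                  => 'Snom,$1,$2',
      ];

      // Apply the first matching rule and return the reformatted user agent
      function normalize_user_agent(string $ua, array $rules): string
      {
          foreach ($rules as $pattern => $replacement) {
              $result = preg_replace($pattern, $replacement, $ua, -1, $count);
              if ($count > 0) {
                  return $result; // first matching rule wins
              }
          }
          return $ua; // no rule matched, leave it unchanged
      }

      echo normalize_user_agent('Yealink SIP-T54W 96.85.0.5', $rules), "\n"; // Yealink,T54W,96.85.0.5
      echo normalize_user_agent('snomPA1/8.7.3.19', $rules), "\n";           // Snom,PA1,8.7.3.19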
      

      It's just easier to keep the result in a string and then separate it when you need it. You could use / or tab or whatever as a separator. Then you use explode if you want an array. In many cases it's better to put the result into variables instead:

      // User agent has been formatted into $s
      $s='Snom,PA1,8.7.3.19';
      
      // we put the result in variables
      list($brand, $model, $firmware) = explode(',', $s);
      
      // print the results
      print "Brand: $brand | Model: $model | Firmware: $firmware\n";
      

      Also, be very careful when you're programming PHP and do not use " for strings unless you have variables or escaped characters inside the string that need to be interpreted. Use ' as your go-to for strings. For instance 'test\r' is not at all the same string as "test\r". You got lucky in your sample script because "\s" looks like an escape sequence in PHP but isn't a valid one, so PHP didn't interpret it for you. But it's easy to run into conflicts between PHP and regular expressions when you encapsulate strings with ".

      posted in IT Discussion
    • RE: Shift + PgUp/PgDn in terminal?

      @eddiejennings said in Shift + PgUp/PgDn in terminal?:

      @pete-s said in Shift + PgUp/PgDn in terminal?:

      When you use Shift + PgUp/PgDn on a linux console you can scroll the screen buffer.

      Where does this behavior come from? Is it the shell, a utility on the server, is it the console client, is it the ssh client?

      It's not working for me using ssh (on windows) and I realized I have no clue where to start looking...

      Probably specific to the config of your terminal program, unless you’re truly talking about the console itself.

      I had to alter some key bindings in Gnome Terminal to get the desired behavior from the weechat key bindings.

      You were right. I was trying out Windows Terminal and running ssh inside. And shift+pgup/dn didn't work as expected.

      I looked at the Windows Terminal keybindings and the default was not what I wanted.

      So I added this under "actions" in the settings.json file:

              // Scrollback
              { "command": "scrollDown", "keys": "shift+down" },
              { "command": "scrollDownPage", "keys": "shift+pgdn" },
              { "command": "scrollUp", "keys": "shift+up" },
              { "command": "scrollUpPage", "keys": "shift+pgup" },
      

      The added bonus is that shift+pgup/dn now also works with cmd.exe and PowerShell.

      posted in IT Discussion
    • Authentication to remote RADIUS service?

      We're looking to authenticate users against a remote RADIUS server/service.

      From the info it looks like the server supports RADIUS PAP, EAP-TTLS/PAP, and EAP-PEAP/MSCHAPv2 authentication methods. Is there a preferred auth method?

      Also what do we need to open through our firewall to allow an internal RADIUS client to communicate with an external RADIUS server? Do we need any incoming ports? Or is it just outgoing traffic to the RADIUS server?

      posted in IT Discussion radius authentication
    • RE: Checking multiple Directories to confirm all files are identical

      @dustinb3403 said in Checking multiple Directories to confirm all files are identical:

      @eddiejennings Yeah I was thinking of the same solution as well, my trouble is how would I get the system to not try and store everything to memory first and then write to file. . . .

      Some of these customer requests are insane...

      If you do the equivalent of md5sum with subdirectories, you will get md5 sums of all files. A diff will produce the different files.
      File size or directory size will not matter at all for this operation.

      Get-FileHash seems to output multiple lines per file which is not good for this.

      If you don't need a hash to compare and just want to check filenames, file sizes and dates, maybe you should just do a directory listing for each tree and compare them. That would be very fast.

      You could get dir to provide a one-file-per-line output, with the proper options.
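
      The md5sum-with-subdirectories approach is only a few lines in most languages. A rough PHP sketch (script name and usage are just illustrative) that prints one hash-plus-path line per file, so two trees can be compared with a plain diff:

      <?php
      // Usage (illustrative): php hashtree.php /path/to/tree1 > tree1.txt
      $root = rtrim($argv[1] ?? '.', '/\\');

      $iterator = new RecursiveIteratorIterator(
          new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS)
      );

      $lines = [];
      foreach ($iterator as $file) {
          if ($file->isFile()) {
              // md5_file() streams the file from disk, so nothing large is held in memory
              $relative = substr($file->getPathname(), strlen($root) + 1);
              $lines[]  = md5_file($file->getPathname()) . '  ' . $relative;
          }
      }

      sort($lines); // stable ordering so diff only shows real differences
      echo implode("\n", $lines), "\n";

      Run it once per tree and diff the two output files; any file whose content, name or location differs will show up.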

      posted in IT Discussion
    • RE: Raspberry Pi 4 as IT Workstation

      @obsolesce said in Raspberry Pi 4 as IT Workstation:

      @scottalanmiller said in Raspberry Pi 4 as IT Workstation:

      But that means that you can run the lighter 32bit OS version that is faster AND it is already overclocked and includes a massive passive heatsink.

      But the web browser plus whatever else is running in addition to the OS will still eat up what's left of the RAM pretty easily, I would think. My phone easily uses more than 4 GB of RAM.

      Not really. It's actually hard to use up the 4GB RAM on the RPI4 as a normal user. Remember that everything running on it is lean, like LXDE for example.

      https://www.tomshardware.com/uk/news/raspberry-pi-4-8gb-tested

      I'd say that if you're happy with the RPI4 performance as a desktop then there's a 95% chance that 4GB RAM will be enough.

      If you're a power user then the RPi 4 is too slow and you'll probably be looking at something with at least 16 GB RAM anyway.

      posted in IT Discussion
    • Yealink and bluetooth headset

      Yealink phones SIP-T27G/T29G/T46G/T48G/T41S/T42S/T46S/T48S/T53 need Yealink's USB adapter (BT40 or BT41) to have bluetooth support. Some of the newer T5 series have built-in bluetooth support.

      Normally you have to buy Yealink's adapters which are relatively expensive at $30 to $40 per phone.

      By chance I discovered that the TP-Link UB400 works the same as Yealink's own adapter. The phone thinks it's a Yealink adapter - the same chipset, I think. The difference is that it's only $10 each and TP-Link is easy to find.

      posted in IT Discussion yealink bluetooth
    • RE: Did I connect these switches according to best practices?

      The EdgeRouter would sit where it says Internet.

      A backbone switch is also called a core switch.
      An edge switch is also called an access switch. That's where you would have PoE.

      [image: core Ethernet switch star topology (S3800-24T4S)]

      What you did was kind of like daisy-chaining switches, which is in general a bad idea, but it wouldn't really matter in a very small network.

      posted in IT Discussion
    • RE: RAID 6 in my backup VM host on spinning rust?

      @beta said in RAID 6 in my backup VM host on spinning rust?:

      Hear me out...I have a Dell server that I use as a Veeam replication target. This host is used as a backup in case my primary server dies - I just turn on the replicas and run from it until primary host is repaired.

      This backup host currently has OBR10 comprised of 10 600GB 10K SAS drives. I'm running up against storage capacity limitations and have ordered 2 additional 600GB disks to add to the array, but I was thinking while I am in the process of rebuilding this array, maybe I should change it from OBR10 to RAID 6? My concern is that while I am pretty sure the OBR10 will give me enough space to last until I schedule a complete replacement of the server, the margin will be very slim whereas the RAID 6 I'm sure will give me plenty of extra breathing room until the server is replaced.

      Would this be crazy to do? Or should I just stick to OBR10? Thanks!

      You know, you only have 10x600/2 = 3TB of storage.

      You could replace the entire array with two 3.84 TB SATA/SAS SSDs. Run them in RAID 1 and you'd have better performance and higher reliability.

      Buying 2.5" hard drives today is a mistake.

      posted in IT Discussion