• Too much on my plate

    10
    2 Votes
    10 Posts
    2k Views
    scottalanmillerS

    The situation you are facing here is that you are being forced to act as a Project Manager (PM), and you need to step into that role mentally and work as one when addressing the scheduling.

  • Ubiquiti Access Point Missing

    14
    3 Votes
    14 Posts
    3k Views
    gjacobseG

    We've been able to default the remaining APs and they are adopted. We have one that is being stubborn; while it 'pings', it's not reachable. We suspect an issue with the AP itself. One just lost the heartbeat, but it was found and adopted.

    All in all, four of the six are running and good to go. A lot better than where we started.

    Thanks

  • end user computer backups

    31
    1 Vote
    31 Posts
    4k Views
    JaredBuschJ

    @s.hackleman said in end user computer backups:

    I have used Mozy over the years and have been quite happy.

    I use Crashplan at home. Have used Mozy and Carbonite over the years also.

  • WordPress 4.6 Pepper Released

    1
    0 Votes
    1 Post
    267 Views
    No one has replied
  • 4 Votes
    14 Posts
    6k Views
    T

    Very cool, will have to toy around with this.

  • Nemucod Ransomware Analysis

    2
    3 Votes
    2 Posts
    404 Views
    T

    Always enjoy reading articles like this. :thumbsup_tone1:

  • Starting Clean - Kibana

    68
    1 Vote
    68 Posts
    6k Views
    DashrenderD

    @scottalanmiller said in Starting Clean - Kibana:

    Looking through the old threads on this that I can find, the first mention of Filebeat was by @DustinB3403, and that's what sent us down this path, not someone suggesting it (as far as I can tell). Then he posted on the Filebeat article, which firmed up this path even more. Then in this thread, there was no talk of anything else.

    So that Filebeat wasn't the right tool was never really considered because Filebeat was injected from the beginning. That's what led to the crazy confusion.

    So a new thread all about using rsyslog to send to Logstash in ELK is what is needed. And the issue appears to be that ELK was never configured to accept syslog files because it's not open by default to listen for them.

    And mentioning Kibana doesn't help. Kibana is the K in ELK, but it's not a part that processes logs. You can use Kibana for other things too, like just showing system graphs.

    Yeah, I get that Kibana wasn't where the problem was, but that was because Dustin (and I) didn't understand where the error was.

    The simple setup Dustin did - install Kiwi Syslog Server, change the XS log config file to send all logs to the Kiwi syslog server (took less than 10 minutes) - was so brain-dead simple that neither of us knew what was failing in ELK. So Dustin started troubleshooting at the point he had direct contact with: Kibana.
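
    The rsyslog-to-Logstash route the quote above recommends can be sketched in two hedged config fragments; the hostname, port, and file path below are placeholder assumptions, not values from the thread:

```
# /etc/rsyslog.d/90-forward.conf (hypothetical fragment on the sending host):
# forward all facilities/severities to the ELK box over TCP (@@ = TCP, @ = UDP)
*.* @@elk.example.com:5514

# Logstash pipeline fragment on the ELK box (hypothetical): without an input
# listening for syslog, nothing ever arrives -- the exact gap described above
input {
  syslog {
    port => 5514
  }
}
```

    This illustrates why the thread stalled: both halves have to exist, and neither end is open by default.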

  • Configuring a VNC Connection in Guacamole in the user-mapping File

    5
    2 Votes
    5 Posts
    6k Views
    scottalanmillerS

    @stacksofplates said in Configuring a VNC Connection in Guacamole in the user-mapping File:

    @scottalanmiller said in Configuring a VNC Connection in Guacamole in the user-mapping File:

    @stacksofplates said in Configuring a VNC Connection in Guacamole in the user-mapping File:

    I really wish there were a way to do multiple hosts (like a range or a comma-separated list), because this will get unwieldy with a lot of users. And some LDAP integration would be nice.

    There is LDAP and an interface for this, but not available in the XML setup. I'm going to be doing another one with MariaDB soon.

    Ah I've only tested the XML version. Didn't realize there was another.

    Yeah, much more robust interface.
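
    For context, a single VNC entry in the user-mapping.xml setup being discussed looks roughly like this; the username, password, and host values are placeholders, not from the thread:

```xml
<user-mapping>
    <authorize username="user1" password="changeme">
        <connection name="host-a">
            <protocol>vnc</protocol>
            <param name="hostname">192.0.2.10</param>
            <param name="port">5901</param>
            <param name="password">vncpass</param>
        </connection>
        <!-- every additional host needs its own <connection> block, which is
             what makes the XML file unwieldy with many users compared to the
             LDAP or MariaDB setups mentioned -->
    </authorize>
</user-mapping>
```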

  • DC Demotion Question

    108
    1 Vote
    108 Posts
    10k Views
    DashrenderD

    @scottalanmiller said in DC Demotion Question:

    @Dashrender said in DC Demotion Question:

    @scottalanmiller is NTG using Azure AD for its Windows 10 machines yet? Is NTG using anything for GPOs?

    Yes, we've been on Azure AD for quite a while now. Like since last year.

    No GPOs.

    Local admin? Or at least access to the local admin account?

  • XenServer 6.5 - Moving Virtual Disks Among SRs

    9
    0 Votes
    9 Posts
    2k Views
  • 4 Votes
    11 Posts
    7k Views
    DustinB3403D

    For anyone looking to do this, here is my filebeat.yml file.

    # Name of the registry file. Per default it is put in the current working
    # directory. In case the working directory is changed after when running
    # filebeat again, indexing starts from the beginning again.
    registry_file: /var/lib/filebeat/registry

    # Full Path to directory with additional prospector configuration files. Each file must end with .yml
    # These config files must have the full filebeat config part inside, but only
    # the prospector part is processed. All global options like spool_size are ignored.
    # The config_dir MUST point to a different directory then where the main filebeat config file is in.
    #config_dir:

    ###############################################################################
    ############################# Libbeat Config ##################################
    # Base config file used by all other beats for using libbeat features

    ############################# Output ##########################################

    # Configure what outputs to use when sending the data collected by the beat.
    # Multiple outputs may be used.
    output:

      ### Elasticsearch as output
      elasticsearch:
        # Array of hosts to connect to.
        # Scheme and port can be left out and will be set to the default (http and 9200)
        # In case you specify and additional path, the scheme is required: http://localhost:9200/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
        hosts: ["localhost:9200"]

        # Optional protocol and basic auth credentials.
        #protocol: "https"
        #username: "admin"
        #password: "s3cr3t"

        # Number of workers per Elasticsearch host.
        #worker: 1

        # Optional index name. The default is "filebeat" and generates
        # [filebeat-]YYYY.MM.DD keys.
        #index: "filebeat"

        # A template is used to set the mapping in Elasticsearch
        # By default template loading is disabled and no template is loaded.
        # These settings can be adjusted to load your own template or overwrite existing ones
        #template:

          # Template name. By default the template name is filebeat.
          #name: "filebeat"

          # Path to template file
          #path: "filebeat.template.json"

          # Overwrite existing template
          #overwrite: false

        # Optional HTTP Path
        #path: "/elasticsearch"

        # Proxy server url
        #proxy_url: http://proxy:3128

        # The number of times a particular Elasticsearch index operation is attempted. If
        # the indexing operation doesn't succeed after this many retries, the events are
        # dropped. The default is 3.
        #max_retries: 3

        # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
        # The default is 50.
        #bulk_max_size: 50

        # Configure http request timeout before failing an request to Elasticsearch.
        #timeout: 90

        # The number of seconds to wait for new events between two bulk API index requests.
        # If `bulk_max_size` is reached before this interval expires, addition bulk index
        # requests are made.
        #flush_interval: 1

        # Boolean that sets if the topology is kept in Elasticsearch. The default is
        # false. This option makes sense only for Packetbeat.
        #save_topology: false

        # The time to live in seconds for the topology information that is stored in
        # Elasticsearch. The default is 15 seconds.
        #topology_expire: 15

        # tls configuration. By default is off.
        #tls:
          # List of root certificates for HTTPS server verifications
          certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

          # Certificate for TLS client authentication
          #certificate: "/etc/pki/client/cert.pem"

          # Client Certificate Key
          #certificate_key: "/etc/pki/client/cert.key"

          # Controls whether the client verifies server certificates and host name.
          # If insecure is set to true, all server host names and certificates will be
          # accepted. In this mode TLS based connections are susceptible to
          # man-in-the-middle attacks. Use only for testing.
          #insecure: true

          # Configure cipher suites to be used for TLS connections
          #cipher_suites: []

          # Configure curve types for ECDHE based cipher suites
          #curve_types: []

          # Configure minimum TLS version allowed for connection to logstash
          #min_version: 1.0

          # Configure maximum TLS version allowed for connection to logstash
          #max_version: 1.2

      ### Logstash as output
      #logstash:
        # The Logstash hosts
        hosts: ["localhost:5044"]

        # Number of workers per Logstash host.
        #worker: 1

        # Set gzip compression level.
        #compression_level: 3

        # Optional load balance the events between the Logstash hosts
        #loadbalance: true

        # Optional index name. The default index name depends on the each beat.
        # For Packetbeat, the default is set to packetbeat, for Topbeat
        # top topbeat and for Filebeat to filebeat.
        #index: filebeat

        # Optional TLS. By default is off.
        #tls:
          # List of root certificates for HTTPS server verifications
          #certificate_authorities: ["/etc/pki/root/ca.pem"]

          # Certificate for TLS client authentication
          #certificate: "/etc/pki/client/cert.pem"

          # Client Certificate Key
          #certificate_key: "/etc/pki/client/cert.key"

          # Controls whether the client verifies server certificates and host name.
          # If insecure is set to true, all server host names and certificates will be
          # accepted. In this mode TLS based connections are susceptible to
          # man-in-the-middle attacks. Use only for testing.
          #insecure: true

          # Configure cipher suites to be used for TLS connections
          #cipher_suites: []

          # Configure curve types for ECDHE based cipher suites
          #curve_types: []

      ### File as output
      #file:
        # Path to the directory where to save the generated files. The option is mandatory.
        #path: "/tmp/filebeat"

        # Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
        #filename: filebeat

        # Maximum size in kilobytes of each file. When this size is reached, the files are
        # rotated. The default value is 10 MB.
        #rotate_every_kb: 10000

        # Maximum number of files under path. When this number of files is reached, the
        # oldest file is deleted and the rest are shifted from last to first. The default
        # is 7 files.
        #number_of_files: 7

      ### Console output
      # console:
        # Pretty print json event
        #pretty: false

    ############################# Shipper #########################################

    shipper:
      # The name of the shipper that publishes the network data. It can be used to group
      # all the transactions sent by a single shipper in the web interface.
      # If this options is not defined, the hostname is used.
      #name:

      # The tags of the shipper are included in their own field with each
      # transaction published. Tags make it easy to group servers by different
      # logical properties.
      #tags: ["service-X", "web-tier"]

      # Uncomment the following if you want to ignore transactions created
      # by the server on which the shipper is installed. This option is useful
      # to remove duplicates if shippers are installed on multiple servers.
      #ignore_outgoing: true

      # How often (in seconds) shippers are publishing their IPs to the topology map.
      # The default is 10 seconds.
      #refresh_topology_freq: 10

      # Expiration time (in seconds) of the IPs published by a shipper to the topology map.
      # All the IPs will be deleted afterwards. Note, that the value must be higher than
      # refresh_topology_freq. The default is 15 seconds.
      #topology_expire: 15

      # Internal queue size for single events in processing pipeline
      #queue_size: 1000

      # Configure local GeoIP database support.
      # If no paths are not configured geoip is disabled.
      #geoip:
        #paths:
        #  - "/usr/share/GeoIP/GeoLiteCity.dat"
        #  - "/usr/local/var/GeoIP/GeoLiteCity.dat"

    ############################# Logging #########################################

    # There are three options for the log ouput: syslog, file, stderr.
    # Under Windos systems, the log files are per default sent to the file output,
    # under all other system per default to syslog.
    logging:

      # Send all logging output to syslog. On Windows default is false, otherwise
      # default is true.
      #to_syslog: true

      # Write all logging output to files. Beats automatically rotate files if rotateeverybytes
      # limit is reached.
      #to_files: false

      # To enable logging to files, to_files option has to be set to true
      files:
        # The directory where the log files will written to.
        #path: /var/log/mybeat

        # The name of the files where the logs are written to.
        #name: mybeat

        # Configure log file size limit. If limit is reached, log file will be
        # automatically rotated
        rotateeverybytes: 10485760 # = 10MB

        # Number of rotated log files to keep. Oldest files will be deleted first.
        #keepfiles: 7

      # Enable debug output for selected components. To enable all selectors use ["*"]
      # Other available selectors are beat, publish, service
      # Multiple selectors can be chained.
      #selectors: [ ]

      # Sets log level. The default log level is error.
      # Available log levels are: critical, error, warning, info, debug
      #level: error
  • HP DL360 gen8 fans dying

    3
    0 Votes
    3 Posts
    704 Views
    scottalanmillerS

    Maybe you got a bad batch.

  • Seagate Reveals 60TB SSD

    23
    1 Vote
    23 Posts
    3k Views
    wirestyle22W

    We really need an alternative power source. Powering all of these data centers is a lot. Pretty soon the earth is just going to be rows and rows of racks. Like that episode of Silicon Valley. We're just going to be mole people.

  • ownCloud Backups

    15
    1 Vote
    15 Posts
    2k Views
    JaredBuschJ

    @Cdarw said in ownCloud Backups:

    @alex.olynyk I think there are better ways to do backups than just the hypervisor.

    No, there is no better way than a hypervisor level backup for a full system backup.

    Proper backups can be done like this:
    https://docs.nextcloud.com/server/9/admin_manual/maintenance/backup.html

    If you need some component level backup, then a backup designed for that component is most definitely a good idea.

    By the way, you should consider switching to Nextcloud because it has some additional important security features. A backup like that can be done with a cron job. Super easy.

    Nextcloud is certainly not something I would deploy full force anywhere yet. It is getting close.
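
    The cron-driven backup mentioned in the quote can be sketched as a crontab fragment; all paths, the database name, the credentials, and the schedule below are placeholder assumptions, with the authoritative steps (including maintenance mode) in the linked Nextcloud manual:

```
# /etc/cron.d/nextcloud-backup (hypothetical): nightly copy of the install
# directory plus a database dump, per the Nextcloud admin manual
30 2 * * * root rsync -Aax /var/www/nextcloud/ /backups/nextcloud-dirbkp/
45 2 * * * root mysqldump --single-transaction -u nextcloud -pPASSWORD nextcloud > /backups/nextcloud-sqlbkp.bak
```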

  • Guide to getting Graphite up and running

    15
    0 Votes
    15 Posts
    3k Views
    S

    @Technomancer

    Hey!

    So I'm currently trying to set up Worldping, but I'm having trouble setting up the API key.

    Was wondering if you could lend a hand.

    I continuously see

    curl -H "Authorization: Bearer <api key>" http://worldping-api.raintank.io/api/dashboards/db/mydash

    like this and similar.

    I'm also reading that I can just authenticate by username.

    None of this is working; could you point me in the right direction?

    Thanks
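
    For what it's worth, the usual failure with this command is the header being split across lines, which breaks the shell quoting. A minimal sketch, assuming a Grafana-style API key (the token value below is a placeholder, not a working credential):

```shell
# Placeholder token; a real worldPing/Grafana API key goes here.
API_KEY="eyJrIjoiEXAMPLE"

# The whole -H argument must stay one shell word. Continued over lines:
#   curl -H "Authorization: Bearer ${API_KEY}" \
#     http://worldping-api.raintank.io/api/dashboards/db/mydash
# (sending the request needs a valid key, so it is not actually made here)

# Show the exact header such a command would send:
printf 'Authorization: Bearer %s\n' "${API_KEY}"
```

    A bare newline between "Bearer" and the key, as in the pasted snippet, makes curl treat the key as a second URL, which matches the symptom of nothing authenticating.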

  • 1 Vote
    1 Post
    2k Views
    No one has replied
  • cPanel mail to google apps migration

    4
    1 Vote
    4 Posts
    2k Views
    V

    Hi,

    There should be a Data Migration option in the Google Apps Admin dashboard ...

  • Guacamole Compilation Error on Fedora 24

    5
    0 Votes
    5 Posts
    2k Views
    T

    So strange; even the GCC 6 documentation implies that -Wno-error=pedantic should cause -Werror=pedantic warnings to not be treated as errors.

    "This switch takes a negative form, to be used to negate -Werror for specific warnings; for example -Wno-error=switch makes -Wswitch warnings not be errors, even when -Werror is in effect."

    https://gcc.gnu.org/onlinedocs/gcc-6.1.0/gcc/Warning-Options.html#Warning-Options

    I would guess that if you really want to install, you could remove -Werror from AM_INIT_AUTOMAKE (or add -Wno-error=pedantic).

    https://github.com/apache/incubator-guacamole-server/blob/master/configure.ac#L22
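
    That suggested edit can be sketched end to end. The AM_INIT_AUTOMAKE line below is a stand-in written to a demo file, not the real configure.ac from the linked repository; in an actual checkout you would run the sed against the checked-out file and then regenerate the build system:

```shell
# Stand-in for the AM_INIT_AUTOMAKE line in guacamole-server's configure.ac;
# written to a demo copy so the edit can be shown without the source tree.
printf 'AM_INIT_AUTOMAKE([-Wall -Werror foreign subdir-objects])\n' > configure.ac.demo

# Drop -Werror so pedantic warnings no longer abort the build.
sed -i 's/ -Werror//' configure.ac.demo
cat configure.ac.demo
# -> AM_INIT_AUTOMAKE([-Wall foreign subdir-objects])

# In the real checkout, regenerate and rebuild afterwards:
#   autoreconf -fi && ./configure && make
```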

  • Replaced HP Printer With Same DNS Name and IP - Won't Ping

    10
    1 Vote
    10 Posts
    2k Views
    thanksajdotcomT

    @garak0410 said in Replaced HP Printer With Same DNS Name and IP - Won't Ping:

    Sorry, I didn't see this was posted on my behalf.

    After I connected my laptop to the port and got no lights or connection, then brought over a 20-foot cable to plug into the printer and the printer connected, I determined it is the LAN jack.

    But get this... I connected an old 100-meg desktop switch I had lying around to that jack, connected the printer to that switch, and then it worked. So they are up and running.

    I've seen that before. The auto-negotiation doesn't work for anything beyond the IP. I saw that happen with an HP LJ 4000 series once; replacing the card on it fixed the issue. But sometimes that's the problem.

  • XenServer Host to Host Migration Questions

    24
    1 Vote
    24 Posts
    5k Views
    K

    @scottalanmiller said in XenServer Host to Host Migration Questions:

    ...nciple, but I find it getting harder to do conside...

    Well, since this will be my first venture into using a video server, I'm not sure how much system to give something like that.