    Posts

    • RE: What Are You Doing Right Now

      @DustinB3403 said in What Are You Doing Right Now:

      I'm livid at the moment and in complete disbelief that I have to explain the difference between scheduled down time, and downtime.

      How one is "you plan for it to take X long, if it's shorter great. If the full length no one gives a crap."

      The other is "our systems are down at unplanned time, fix it quickly"

      Rename them "maintenance window" and "outage."

      posted in Water Closet
      scottalanmiller
    • Linux: Common Filesystems

      Linux is the land of filesystems; it is flooded with options. Today, in 2016, the "standard" filesystems on Linux are XFS and EXT4, nearly across the board, with BtrFS starting to make inroads, specifically in the Suse space (always a leader in Linux filesystems) and with Facebook making big BtrFS investments as it is their filesystem of choice. Working on Linux means at least being aware that many filesystem options exist, even if we ourselves only choose to work with the defaults. In nearly all cases, the default options are "good enough". They are defaults for a reason and are among the world's most powerful, stable and versatile filesystems. But having lots of options means that we have choices when we need something extremely fast, big or reliable.

      A complete survey of filesystems is simply not possible. The number available on Linux approaches the ridiculous, but a small handful represent the real world: practical filesystems that are found, officially supported and actually used on Linux. It should be noted that filesystems on Linux are additionally complex in part because the OS offers services that many other operating systems do not. Clustered filesystems, for example, are very important for certain use cases and are not offered by Solaris or Windows without additional add-on software. In Linux, multiple clustered filesystem options are standard.

      Here I will list the common Linux filesystem options, why they are important and when they would likely be selected. We will learn more about filesystems as we progress, but we need a firm foundation from which to start.

      In this list I am looking only at local, traditional filesystems and not things like distributed or network filesystems. Linux is also rich in those options, and they will be addressed later in the course.

      The original 1992 Linux filesystem was called the "Extended File System" or EXT, so named because it was an enhancement to the common UNIX filesystem UFS and the MINIX filesystem, which were its inspirations. Filesystems have always been one of Linux's strong suits, even before it was considered a business-viable operating system. EXT2 replaced EXT a year later in 1993 and was inspired by FFS, the Berkeley Fast File System from BSD UNIX. EXT2 is the Linux world's best known filesystem and, while legacy today, it was fast, robust and stable and was the filesystem on which the Linux ecosystem was primarily built.


      Traditional Non-Clustered Filesystems:

      EXT3: In 2001, journaling was added to the EXT family in EXT3. EXT3 is essentially just EXT2 with journaling added. Journaling made the filesystem more robust and slightly faster while maintaining bidirectional compatibility with its predecessor, making it a very popular and important upgrade in the Linux ecosystem. EXT3 was by far the most popular filesystem on Linux across the board from 2001 and for more than a decade following. It is still found often today.

      EXT4: Additional stability and scalability upgrades added on top of EXT3 in 2008. EXT4 is the latest member of the Linux EXT family and was the most popular filesystem on Linux from roughly 2010 to 2014. It is still very popular today and a good general purpose filesystem option, especially for smaller systems and desktops.

      XFS: The filesystem developed by SGI (Silicon Graphics) for the IRIX UNIX system in 1993 and ported to Linux in 2001. As of 2014 it has effectively unseated EXT4 as the primary filesystem on Linux, with Red Hat choosing it to replace EXT4 as the default filesystem for servers. XFS is well known for its extreme level of performance and reliability and is very mature. XFS continues to grow in popularity and is more and more often recommended for systems requiring very large or very stable filesystems, even at twenty-three years old. Like EXT3 and EXT4, XFS is journaled. (A short example of creating an XFS filesystem appears at the end of this post.)

      JFS: IBM's filesystem built for their AIX UNIX system in 1990 (and later, in 1999, used for OS/2 Warp), JFS was ported to Linux in 2001 and has retained substantial commercial support while never reaching a point of popularity within the community. JFS is a stable, mature filesystem with decades of work on it, and the Linux release is backported as the mature JFS release for those still running OS/2. JFS is a viable option for Linux, but very rare, and would not be expected in a new install. The J in JFS stands for Journaled; JFS was a pioneer in filesystem journaling.

      ReiserFS: A modern filesystem original to Linux, made in 2001, with performance focused on large filesystems holding large numbers of very small files. ReiserFS was very important to Linux and was the primary filesystem of Suse Linux from 2001 to 2006, when it was replaced with EXT3. Reiser's inventor was at work on Reiser4, a very promising replacement for ReiserFS, when he was convicted of murder, and the Reiser4 project failed to get its code into mainline Linux. ReiserFS' age has left it rarely used today, but it remains fast and stable for its intended use cases.

      UFS: The UNIX File System, also known as the BSD Fast File System (FFS), can be found on Linux and exists for cross compatibility with most UNIX versions. UFS is a very old filesystem but is still extremely widely used, is actually very stable and performant, and remains the default choice on several enterprise operating systems, such as FreeBSD.

      ZFS: Originally developed by Sun Microsystems in 2005, ZFS is a super modern, massive scale filesystem with an integrated logical volume manager and software RAID implementation. It was designed for use on AMD64-based Solaris systems using Sun's large scale Thumper storage hardware. ZFS was open sourced along with Solaris 10 but was closed sourced again with Solaris 11. The OpenSolaris project took the older open sourced version of ZFS and ported it to Mac OS X (later dropped) and to FreeBSD, where it is now an extremely popular filesystem alongside UFS. ZFS has recently been ported to Linux but is very unpopular there and struggles to find support. It is considered a second class filesystem there; while there is a lot of interest around it, it offers little to the Linux ecosystem compared to Solaris and FreeBSD, which lacked broad filesystem support prior to ZFS' introduction.

      BtrFS: A very new, very modern filesystem native to Linux, determined to bring ZFS functionality, plus greater performance, natively to Linux and fully open source. BtrFS has made very good headway, has recently been deemed production ready and is already in use by large Linux shops like Facebook. BtrFS, like ZFS, includes a logical volume manager and software RAID inside of the filesystem instead of getting these from external components. BtrFS is already seeing broad adoption into production distros and is occasionally being deployed as the default. BtrFS is the most likely filesystem to dominate Linux over the next decade.

      NTFS: The Windows NT filesystem. Not popular on Linux but available primarily for use in cross platform compatibility situations.


      Clustered Filesystems:

      GFS2: Red Hat's Global File System 2 is the most popular clustered filesystem on Linux. The original GFS, from 1995, was developed for IRIX and GFS2 for Linux in 2005. GFS2 is widely used wherever concurrent access to a single filesystem is needed. Very common on general purpose Linux high availability clusters.

      OCFS: The Oracle Clustered File System was originally developed specifically to allow concurrent access to the central storage of Oracle databases for high availability database needs. It is, however, a general purpose clustered filesystem and can be used for other purposes.


      This is anything but an exhaustive list of the filesystems found on, or possible on, Linux systems. Linux, of course, supports legacy and special media formats, like other operating systems do, such as FAT16, FAT32 and ISO9660. It has also become increasingly common for special purpose, high performance SSD-only filesystems to be introduced.
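
      To make the list concrete, here is a minimal sketch of creating and mounting an XFS filesystem, assuming a spare, empty partition at /dev/sdb1 (a hypothetical device name used only for illustration):

      mkfs.xfs /dev/sdb1        # create the filesystem on the spare partition (hypothetical device)
      mkdir -p /data            # create a mount point
      mount /dev/sdb1 /data     # mount it
      df -hT /data              # confirm the filesystem type (xfs) and size

      Swap mkfs.xfs for mkfs.ext4 and the same workflow produces an EXT4 filesystem instead; the pattern is the same for the traditional filesystems listed above.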

      Suse Filesystem Guide: https://www.suse.com/documentation/sles11/stor_admin/data/sec_filesystems_major.html

      Part of a series on Linux Systems Administration by Scott Alan Miller

      posted in IT Discussion linux filesystem xfs zfs ext2 ext3 ext4 jfs btrfs sam linux administration reiserfs
      scottalanmiller
    • Stuff I Am Finding When Cleaning

      I'm finding lots of awesome stuff while cleaning out my dad's house and I figured that a place to put them would be great. Some pretty hilarious stuff in here, including a book that I wrote around 1987.

      posted in Water Closet
      scottalanmiller
    • RE: Is there anyway to clean up the cabling behind the rack?

      @jyates said:

      I would use Velcro wraps. You can organize them and get them out way, and still replace cables as needed. If I were you, I'd just fake a fire to get everyone out of the building and unplug everything and untangle them.

      I love velcro wraps. Look great, easy to use, no chance of slicing cables and you can modify them whenever you need.

      posted in IT Discussion
      scottalanmiller
    • RE: What Are You Doing Right Now

      0_1470065900172_image.jpeg

      posted in Water Closet
      scottalanmiller
    • Installing Zimbra Email 8.6 on CentOS 7

      Zimbra is possibly the most popular free and open source enterprise email and collaboration suite. It is mature and advanced and has, at times, been part of both the Yahoo! and VMware families. The suite has remained open source since the beginning and was one of the earliest drivers of enterprise AJAX interface designs. Development of the suite is active and Zimbra 8.6 is the most recent release. While a major product suite, installation of Zimbra is relatively straightforward.

      At this time Zimbra 8.6 officially supports CentOS (and RHEL) 7, CentOS (and RHEL) 6, Ubuntu 14.04 and Ubuntu 12.04 (partially). Support for SLES is being discontinued. Unlike in the past (around the Zimbra 5 era), CentOS support is official and not a technicality riding on RHEL's coattails. CentOS 7 represents the recommended installation target for Zimbra for many reasons, chief among them maturity and being the most up to date supported platform.

      The Zimbra installer handles many of our tasks for us; there is little that needs to be done to prepare our environment, although a few steps are needed. We will start by creating a minimal CentOS 7 server install (in my example, by cloning my minimal template on a Scale HC3 cluster.)

      clone centos on scale hc3

      I use 1GB as a default, but for a large email platform you will likely want a minimum of 2GB of RAM and quite likely a good bit more. I'll use 4GB in this example. If you have run Zimbra on CentOS 7, you will likely have noticed that 2GB is the absolute minimum for even an idle Zimbra system.

      add memory Scale HC3

      Adding an additional block device will be common for a production installation. Putting your email store onto its own XFS block device would be advised. For a lab or test installation this is not necessary at all, but for production it is likely that you will want an XFS filesystem on an LVM logical volume. As this is email, it is common to need 20GB - 100GB per mailbox, so this can scale up quickly. (A sketch of the volume setup follows the screenshot below.)

      add a block device on Scale HC3
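
      As a minimal sketch of that setup, assuming the new block device appears as /dev/sdb (a hypothetical device name) and the mail store lives under /opt/zimbra, Zimbra's standard install path:

      pvcreate /dev/sdb                         # prepare the new device for LVM (hypothetical device)
      vgcreate vg_mail /dev/sdb                 # dedicated volume group for the mail store
      lvcreate -n lv_mail -l 100%FREE vg_mail   # one logical volume using all available space
      mkfs.xfs /dev/vg_mail/lv_mail             # XFS, as recommended above
      mkdir -p /opt/zimbra
      mount /dev/vg_mail/lv_mail /opt/zimbra    # add an /etc/fstab entry so this survives reboots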

      Now we can log in and run our installation. Take note that in one step we need to manually edit our /etc/hosts file. The entry there must match our hostname or the Zimbra installer will halt, make you fix it and run the installation program again:

      yum -y install perl wget nmap-ncat perl-core unzip firewalld net-tools
      cd /tmp
      wget https://files.zimbra.com/downloads/8.6.0_GA/zcs-8.6.0_GA_1153.RHEL7_64.20141215151110.tgz
      tar -xzvf zcs-8.6.0_GA_1153.RHEL7_64.20141215151110.tgz
      cd zcs-8.6.0_GA_1153.RHEL7_64.20141215151110
      vi /etc/hosts #Add IP and FQDN of the local host, must match hostname
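      #Example /etc/hosts entry (hypothetical IP and FQDN shown; yours must match `hostname`):
      #  10.0.0.25   zimbra.example.com zimbra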
      firewall-cmd --zone=public --add-port=8443/tcp --permanent
      firewall-cmd --reload
      ./install.sh 
      

      And that is it. If all went well, you can point a web browser to https://your.fqdn.com:8443/ and log in to your email desktop!

      zimbra login

      zimbra email

      posted in IT Discussion ntg lab scale scale hc3 email centos centos 7 how to linux zimbra zcs open source zimbra 8.6 rhel 7
      scottalanmiller
    • RE: Happy Birthday Thread

      Happy Birthday to the love of my life, @dominica who turned [redacted] today!

      posted in Water Closet
      scottalanmiller
    • RE: Saving a dying server

      Might be worth using a direct P2V tool to do the backup in this case. If this was my situation, this is likely what I would do:

      1. Tell the users that between their decisions and the decisions of whoever installed this, the system cannot be used now and they don't get any vote in this whatsoever. Tell them the way things have to be; don't let them throw away the company's data, because they will just blame you for it later.
      2. Determine where the data is stored. Likely only in MySQL. Use MySQL's own tools and do a database dump ASAP (see the sketch after this list).
      3. Shut down the databases and all applications.
      4. Do a direct P2V to a production platform.
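
      For step two, a minimal sketch of an emergency dump, assuming MySQL is running locally and root credentials are at hand (the backup path is hypothetical):

      mkdir -p /backup
      mysqldump --all-databases --single-transaction -u root -p > /backup/all-databases.sql

      The --single-transaction flag keeps InnoDB tables consistent without locking everything; copy the resulting dump off the failing hardware immediately.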
      posted in IT Discussion
      scottalanmiller
    • RE: What Are You Doing Right Now

      We just had an awesome side by side posting between two communities. Those are exciting to see because they really put the community to the test in a way that normal posting does not. It's the best test that I know of both activity and technical expertise. And the result... eighteen posts on ML to zero there. And the answer was determined here before any posts happened there. Not a big thing, but it's very encouraging that the activity and quality level here are staying high and keeping pace like they should.

      posted in Water Closet
      scottalanmiller
    • RE: A slow descent into burnout

      @RamblingBiped said:

      I'm doing this stuff all day, studying in the evenings, going in on the weekends to do updates and maintenance while everyone else is out, and not getting any free time to just relax and get work off of the brain. When I am at work I feel like I don't have the ability to focus on the tasks I am trying to accomplish without fairly regular interruptions.

      Bottom line is... don't do this. Put the burden on your manager or employer. Don't work weekends except for very special emergencies or projects; if you need to do weekends or evenings with any regularity, then work those hours instead of the daytime hours, not in addition to them. If people make it impossible for you to focus, don't ask, tell them: they are blocking you from working, so you now work from home. If they don't like it, the extra hours go to outside contractors, because the business doesn't need you this tired; they are doing it because they can get away with it. They are wasting your time and burning you out. Don't let them. No other role would accept that behaviour; don't empower them to do it to you or to the next IT guy.

      posted in IT Discussion
      scottalanmiller
    • Keeping It Classy Houston

      0_1473366983771_image.jpeg

      Yup. STD Go might be the hottest new game in Texas. Gotta catch them all indeed.

      posted in Water Closet
      scottalanmiller
    • Installing Mattermost on CentOS 7

      Mattermost is a Slack competitor written in Google's new(ish) Go language (Golang). Mattermost is fully free and open source. The biggest deal about Mattermost is that it is Slack compatible, so the Slack tools that you want to use will work with it.

      We get started with a normal, plain CentOS 7 minimal install. I'll make mine from a template.

      0_1460471965138_Screenshot from 2016-04-06 12:55:54.png

      yum install http://yum.postgresql.org/9.4/redhat/rhel-6-x86_64/pgdg-redhat94-9.4-1.noarch.rpm   # note: this is the RHEL/CentOS 6 repo RPM; on CentOS 7 the matching rhel-7 RPM from the same site is likely the better choice
      yum install postgresql94-server postgresql94-contrib wget
      /usr/pgsql-9.4/bin/postgresql94-setup initdb
      systemctl enable postgresql-9.4.service
      systemctl start postgresql-9.4.service
      

      Now we need to work as the database user...

      sudo -i -u postgres
      psql
      CREATE DATABASE mattermost;
      CREATE USER mmuser WITH PASSWORD 'noonewilleverguess';
      GRANT ALL PRIVILEGES ON DATABASE mattermost to mmuser;
      \q
      exit
      

      Back as root, again. Uncomment the listen_addresses line.

      vi /var/lib/pgsql/9.4/data/postgresql.conf
      

      And in this file, we need to allow md5 instead of peer for the local connection at the bottom.
      vi /var/lib/pgsql/9.4/data/pg_hba.conf
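
      The change at the bottom of pg_hba.conf should end up looking roughly like this (a sketch assuming the stock file layout; column spacing may differ):

      local   all             all                                     md5
      host    all             all             127.0.0.1/32            md5

      The second line covers TCP connections to 127.0.0.1, which the test below uses; if it is left at its default, the test can fail even with the local line fixed.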

      And now we need to reload the database and test:

      systemctl reload postgresql-9.4.service
      

      Here is the test...

      psql --host=127.0.0.1 --dbname=mattermost --username=mmuser --password
      

      If we get an error here, stop; nothing will work if this fails. Comment and we will troubleshoot. A successful connection drops you to a mattermost=> prompt; type \q to quit.

      cd /tmp
      wget   # the Mattermost release tarball URL goes here; the download should land as mattermost.tar.gz for the next step
      tar -xvzf mattermost.tar.gz
      mv mattermost /opt/
      mkdir -p /opt/mattermost/data
      useradd -r mattermost -U
      chown -R mattermost:mattermost /opt/mattermost
      chmod -R g+w /opt/mattermost
      cd /opt/mattermost/config
      vi config.json
      

      In this file you need to replace "DriverName": "mysql" with "DriverName": "postgres"

      And then replace "DataSource": "mmuser:mostest@tcp(dockerhost:3306)/mattermost_test?charset=utf8mb4,utf8" with "DataSource": "postgres://mmuser:[email protected]:5432/mattermost?sslmode=disable&connect_timeout=10"
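
      After both edits, the database fragment of config.json should look roughly like this (a sketch; "SqlSettings" is the section these keys live under in Mattermost's stock config):

      "SqlSettings": {
          "DriverName": "postgres",
          "DataSource": "postgres://mmuser:[email protected]:5432/mattermost?sslmode=disable&connect_timeout=10"
      }

      Substitute the password you set earlier (noonewilleverguess in this example) for the literal "password" placeholder.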

      And then to test it out...

      cd /opt/mattermost/bin
      ./platform
      

      That's it. You should be able to log into the web interface from a web browser.

      posted in IT Discussion linux ntg lab scale scale hc3 mattermost instant messaging centos centos 7 golang projects slack
      scottalanmiller
    • RE: Happy Birthday Thread

      @Minion-Queen said in Happy Birthday Thread:

      how the crap did he get to be 19 already?

      Because 19 years have passed.

      posted in Water Closet
      scottalanmiller
    • If You Have to Ask the Question...

      Sometimes, when asking technical questions, the questions that we ask provide more information than we realize. In a previous article, Asking Better Questions, one of the key things that I mentioned is the need to "pull back one level." By the time that we have a technical question there is a very good chance that we have already gone too far, past the point where we have sufficient knowledge to support ourselves, and may have already made a bad decision.

      Sometimes this is even more dramatic: the question we are asking means that we should not be doing the thing we are attempting at all. A great example of this is when someone asks a really basic question about a very dangerous and complicated topic (dangerous being the key warning indicator).

      Imagine you are about to jump out of an airplane and you say to the person next to you "so do I count to ten and pull a cord or something like that?" or, worse, "should I have some special backpack or something?" DANGER, WILL ROBINSON. You've gone too far. You should never, ever have gotten into an airplane with the intention of jumping out of it without understanding parachutes, jump procedures, harness techniques, landing techniques, the terrain you are over and all kinds of specifics about the parachute that you are using that particular day. That you are asking the question answers a bigger question: should you be jumping out of an airplane? No.

      In IT we see this same scenario: a question that, in its asking, exposes that the asker has gone too deep, is already on the airplane and didn't learn about skydiving first. The situation is dangerous, and that they are not aware of it is the real problem.

      Some places where we see this most often are with storage, an area where it seems most common to make decisions before learning about the technology, and where the questions asked are ones that would have needed to be answered long before getting to the current point. But it can happen in any technical arena; storage just gets a high profile here because the dangers are so much more dramatic, as they often lead to both data loss and loss of availability at the same time, and often across many systems.

      Don't take offense when people have this reaction. Stop and ask yourself if it is true. Do you really understand the technology that you are involved with? Are you confident that you are familiar with its use and caveats, and that you really are just missing some basic understanding that was not crucial to have before getting as far as you have? Or are you perhaps operating somewhat blindly, unsure how you got to where you are, working in the dark, taking on risks that you cannot understand or explain clearly?

      There is no shame in admitting what you don't know, but there is in putting a business at risk because you were hoping that no one would notice what you didn't know. None of us understands every aspect of what we do; we all have tons of questions and need help from lots of people. Get that help as early as possible; don't wait until you are about to jump to decide that maybe you should know where the ripcord is.

      Geronimo..........

      [Edit: Originally published December 16, 2013]

      posted in IT Discussion best practices
      scottalanmiller
    • Tesla Announces Self Driving Cars

      Tesla announces fully self-driving cars - USA TODAY
      https://apple.news/AY4HxRMb3SIKJKCLn3TCLPQ

      posted in Water Closet
      scottalanmiller
    • How Reliable Is Your Server

      Servers are unreliable things, right? We hear about this all of the time. Everyone is concerned that their server(s) will fail. They fail left and right. It happens all of the time. Servers are fragile and need risk mitigation for nearly all situations. They don't even have internally redundant components, right?

      Wrong. None of this is true. Once upon a time it was true that commodity servers and their corresponding operating systems were highly fragile, but this was the early 1990s and even then the failure risks were mostly limited to the storage layer and mostly limited to only those systems where cost cutting measures reduced reliability far below what was available at the time.

      Enterprise servers have long been highly reliable, even going back to the 1970s, and commodity servers entered this world of high reliability by the late 1990s, moving even closer in the 2000s, especially with the advent of 64bit computing and full virtualization. Today commodity enterprise servers like those from HPE's Proliant line and Dell's PowerEdge line are incredibly reliable. When properly designed, built and maintained, reliability can move toward the six nines range! This puts a normal server well into consideration for "high availability" right from the onset today.

      Standard servers do this through a couple of techniques. One is simply using very solid, well engineered components. Parts like processors and motherboards have come a very long way and almost never fail, even after a decade or more of continuous abuse. But some parts will always carry some risk, power supplies and hard drives being among the riskiest components. In modern enterprise commodity servers nearly all at-risk components are redundant, field serviceable and nearly always hot swappable. Hot swap power supplies, hard drives, fans and more are standard. Pretty much every component with significant risk is already redundant and field replaceable, and replacement can be done live, without any downtime, even after a component has failed. Others, like NICs, are often redundant as well.

      Even two decades ago it was standard to have hot swappable PCI slots so that support components could be replaced without downtime!

      Of course these are only the commodity servers that we are talking about. Today even AMD64 architecture servers are available in non-commodity approaches (mini computers and better.) RAS features (reliability, availability and serviceability) on mini (HPE Integrity, SuperDome, Oracle M, IBM Power, Fujitsu Sparc) and mainframe systems are extreme and go far beyond what can be done with commodity servers. Hot swappable memory, backplanes, CPUs, controllers and even motherboards are standardly available. Downtime isn't a word that systems like these know, at all.

      Simply put, servers today are not the fragile things that they were twenty, thirty or even forty years ago. Servers are generally rock solid, incredibly reliable devices. The idea that servers will simply die regularly, that they are unreliable and need to be protected from hardware failure in all cases, is emotional, irrational and based on the fears not just of a different era, but of a totally different generation entirely.

      Before giving in to fear that your server will stop functioning every few months, take a minute to think... perhaps your servers are more reliable than you give them credit for.

      posted in IT Discussion best practice server risk risk analysis
      scottalanmiller
    • RE: What Are You Doing Right Now

      @wirestyle22 said in What Are You Doing Right Now:

      I just want to say thank god for mangolassi. I've learned so much since I came here. I really don't think I'd be where I'm at now if it weren't for all of you guys helping me so much. It's really appreciated

      Everyone is here for each other. That's our whole purpose for putting in the time here.

      posted in Water Closet
      scottalanmiller
    • Comparing NTFS and ReFS

      Windows has recently joined the Linux and BSD worlds in having a selection of enterprise filesystems to choose from when working on Windows servers. In addition to the traditional NTFS filesystem that we have had for decades, we now have ReFS.

      NTFS stands for the [Windows] NT File System. ReFS stands for the Resilient File System. ReFS originated with Windows Server 2012 and contains a subset of NTFS functionality. ReFS tackles much of the same ground as ZFS and BtrFS and has garnered the same odd excitement from IT professionals, with a desire to implement it outside of its practical use cases.

      ReFS was designed to address needs with Windows Servers around Storage Spaces, software RAID and Hyper-V and is not designed to replace NTFS as the primary general purpose file system. It is targeted almost exclusively at large, local Hyper-V storage for virtual disks and for large, low cost file servers using software RAID. Like ZFS and BtrFS, ReFS builds features and functionality from LVM and software RAID layers into the filesystem and results in similar confusion around benefits and caveats because of it.

      With only rare exceptions, NTFS should be the normal choice for Windows systems. According to extensive tests by Josh Odgers (see references below), NTFS is more performant than ReFS, contrary to most assumptions (this is mirrored by ZFS, which under normal conditions is not as fast as the decades old UFS filesystem), and it has more features. Other sources point to reliability concerns with ReFS, even when used with Storage Spaces (the full LVM layer) exactly as it is intended to be used. ReFS having reliability issues on its own is bad enough, but because ReFS is intended to be so resilient, it lacks the tools needed to deal with failures when they happen, making it riskier still.

      ReFS exists essentially solely for use with Storage Spaces in software RAID scenarios, where ReFS works with SS to add checksumming and data integrity to the stack.

      Compared to NTFS, ReFS is not just slower and slightly more resource hungry; it also lacks quotas, filesystem encryption via EFS, compression and deduplication. NTFS is the better choice for nearly all use cases.

      References:

      https://en.wikipedia.org/wiki/ReFS
      http://www.joshodgers.com/2016/07/10/storage-performance-refs-vs-ntfs/
      http://windowsitpro.com/hyper-v/dont-use-refs-hyper-v
      https://social.technet.microsoft.com/Forums/windowsserver/en-US/8363b69d-1eb1-4dfe-ace0-1fb6e4bf9adc/refs-and-hyperv-vhd?forum=winserverhyperv

      posted in IT Discussion ntfs refs windows filesystems filesystem sam windows administration
      scottalanmiller
    • RE: Mangoes and Marijuana

      Leave it to MQ to post about the MJ on ML.

      posted in Water Closet
      scottalanmiller
    • Simple File Sharing over NFS

      In the UNIX world, NFS is the simplest means of handling file sharing between systems, and working with NFS on Linux is incredibly easy too. Exporting NFS shares is handled by the aptly named /etc/exports file. In this file we list the folder to export, the host(s) allowed to mount the exported share and any options on that share (like read-only and root squash restrictions.)

      In our example here we will share the local folder /var/data:

      /var/data garfield.mydomain.com(rw,root_squash)
      

      In this example, /var/data on our local server is being shared out to the host named “garfield” with the options of “rw”, for read-write (use ro for read-only) and “root_squash.” Root squashing requires a little explanation. This is a standard security procedure for NFS as it stops foreign hosts from acting as the root and gaining unlimited control of your file share. Basically anyone claiming to be root on the foreign system gets least privilege access, rather than most privilege access, to the share. It is rare that you would want to not have this option in place.

      To cause the NFS service to re-read the /etc/exports file and enact the changes there, we run the following command:

      # exportfs -a
      

      At this point you have a working NFS export and you should be able to mount it from the host named "garfield" without any problem, we hope.
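
      On the client ("garfield") side, mounting the export is a one-liner. A minimal sketch, assuming the NFS server is reachable as fileserver.mydomain.com (a hypothetical name):

      mkdir -p /mnt/data
      mount -t nfs fileserver.mydomain.com:/var/data /mnt/data

      Add a matching /etc/fstab entry on the client if the mount needs to survive a reboot.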

      Originally posted in 2012 on my Linux blog: http://web.archive.org/web/20140823021548/http://www.scottalanmiller.com/linux/2012/04/24/simple-file-sharing-over-nfs/

      posted in IT Discussion unix nfs storage
      scottalanmiller