XS file systems
-
In my quest to learn more Linux, I was digging around on my XenServer.
I think it was @johnhooks who said that XS has very little extra space for installing patches, so you'll want to make sure you clean them up after you apply them.
So I had to figure out where XS stored the update files. Psst, it's in /var/patch. (FYI, don't ever delete /var/patch/applied or its contents.)
This led me to wanting to see what drives/partitions I had and how full they were, so I found the df command.
So here's my output:
Filesystem                                 1K-blocks       Used  Available Use% Mounted on
/dev/sda1                                    4127440    2199952    1717824  57% /
none                                         1979840         56    1979784   1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso      57296      57296          0 100% /var/xen/xc-install
/dev/mapper/XSLocalEXT--3c68d5fc--2e9f--5f5b--b832--dc1aacf920b4-3c68d5fc--2e9f--5f5b--b832--dc1aacf920b4
                                          1153344396  771670308  323087568  71% /var/run/sr-mount/3c68d5fc-2e9f-5f5b-b832-dc1aacf920b4
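(Side note: the raw 1K-block counts are easier on the eyes with the -h flag, and passing df a path restricts the report to the one filesystem holding that path. A small sketch:)

```shell
# "df -h" alone lists every mount; giving it a path ("/") restricts
# the report to the filesystem that path lives on, in readable units.
root_usage=$(df -h / | tail -n 1)
echo "$root_usage"
```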
So I've learned enough to know that /dev/sda1, which is mounted as /, is the main, bootable drive, but I don't know what the rest really are.
So I'm inferring that /dev is where typical disk drives live: HDDs, SSDs, hardware-RAID-provided drives, etc.
So /dev/sda1 is the first disk in the system, which in my case is an SD card plugged into the server, but then there's /dev/mapper/XSLocalEXT--3c68d5fc--2e9f--5f5b--b832--dc1aacf920b4-3c68d5fc--2e9f--5f5b--b832--dc1aacf920b4. WTH? Why is this line so long compared to the first one?
And what is none? I'm somehow mounting a none?
/opt - now this I'm assuming is an optical drive. Weird thing is, I don't have a physical optical drive in this machine. I do have iLO, but currently there is no ISO connected via iLO, so I'm lost as to where this is coming from.
Anyone help me out with my questions?
-
So there are a couple of items here that make it so long. The first item is a raw device: /dev/sda1. That is simply the raw name of the partition, nothing more; it's the "dirty" name as given by fdisk or parted.
The other name is under /dev/mapper. Any device under /dev/mapper is not a real device itself but an alias to a physical device. It is a "handy name for humans" rather than a direct raw block-device handle.
-
@Dashrender said:
/opt - now this I'm assuming is an optical drive.
/opt is part of the universal UNIX filesystem standards (from POSIX) that stands for "optional" and is where third-party applications are meant to be installed. It's where non-system binaries and scripts go. For example, if you install zsh, that's a system tool and it goes into a system directory. But if you download and install a product from outside of the OS, it belongs in /opt.
-
You can learn more about the Linux Filesystem Hierarchy here: http://mangolassi.it/topic/7863/linux-the-lay-of-the-land-and-the-filesystem-hierarchy-standard
-
Optical drives by standard mount under /media
-
Now the reason that the one device is SO long, far longer than a simple alias would indicate, is that in addition to the handy human-readable name (XSLocalEXT, standing for XenServer Local EXT Filesystem), a UUID or "Universally Unique ID" has been appended to the name. Presumably this is done, and is standard in this case, because the system is designed to protect against accidental config file copying or system imaging moving a config file from one device to another and causing damage. This is a "protecting the user from themselves" thing meant to make it hard for someone to do something unintentional. It's not a security tool, since it takes seconds to look in /dev/mapper, find the current name, and replace the one in /etc/fstab with it, so anyone can work around it. It's about preventing accidents.
Without this appended UUID, the system would mount anything called /dev/mapper/XSLocalEXT and try to use it, even if the system had been imaged and the device was not the same one and was not intended to be used that way. This is not a component that can be trivially replicated from system to system. So this is just a handy way to protect new users from themselves, making you stop and think before the system blindly mounts up your local data store.
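One more detail that explains all the doubled hyphens: device-mapper joins the volume group name and the logical volume name with a single "-", and escapes any literal hyphen inside either name as "--". Knowing that, you can split the long name back into its VG and LV halves. A bash sketch (the splitting trick is my own illustration, not a XenServer tool):

```shell
# The long /dev/mapper name from the df output above.
name='XSLocalEXT--3c68d5fc--2e9f--5f5b--b832--dc1aacf920b4-3c68d5fc--2e9f--5f5b--b832--dc1aacf920b4'

# Hide the escaped "--" pairs behind a placeholder, split on the single
# "-" separator, then restore the real hyphens in each half.
tmp=${name//--/#}
vg_name=${tmp%%-*}; vg_name=${vg_name//#/-}
lv_name=${tmp#*-};  lv_name=${lv_name//#/-}
echo "VG: $vg_name"
echo "LV: $lv_name"
```

So the "name" is really the VG (which itself contains the UUID) plus the LV (the same UUID again), which is why it looks twice as long as it needs to be.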
-
Below is some documentation I've compiled and found useful for XS as I've learned.
The command below lists the free space, in a readable format, on the primary partition (dom0):
1. df -h | grep "/$" | head -n 1
The commands below list all directories one folder deep, and how much file space each directory is using:
1. cd /
2. du -hc --max-depth=1
The commands below remove log files (.log) and compressed log archives (.gz) from the server in the respective file paths. I often remove the *.gz files first, as these are the compressed, older logs.
• rm -rf /var/log/installer/*.log
• rm -rf /var/log/pm/*.log
• rm -rf /var/log/xen/*.log
• rm -rf /var/log/*.gz
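Those rm -rf lines are irreversible, so it's worth rehearsing the glob in a scratch directory first. A minimal demo (the file names are made up for illustration) showing that the *.gz glob leaves live .log files alone:

```shell
# Build a throwaway log directory with one compressed (old) log and one
# live log, then delete only the *.gz files, as in the tip above.
scratch=$(mktemp -d)
touch "$scratch/messages.1.gz" "$scratch/messages.log"
rm -f "$scratch"/*.gz
remaining=$(ls "$scratch")
echo "$remaining"
rm -rf "$scratch"
```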
-
Another thing to note is that devices under /dev/mapper are symlinked logical volumes. If you run ls -l /dev/mapper you'll see that all of the LVs have an l attribute, and it will show you which device each link points to.
These same devices are also symlinked under /dev/somevolumegroup, but there they lack the volume group prefix that you have under /dev/mapper.
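You can see that same l attribute on any symlink, not just the LV aliases. A quick illustration in a scratch directory (the link name is made up; the target doesn't even have to exist for the link itself to show up as l):

```shell
# Create a symlink and inspect it the same way you'd inspect /dev/mapper.
scratch=$(mktemp -d)
ln -s /dev/sda1 "$scratch/mydisk"
ls -l "$scratch/mydisk"   # first column starts with 'l'; '->' shows the target
link_flag=$(ls -l "$scratch/mydisk" | cut -c1)
target=$(readlink "$scratch/mydisk")
echo "$link_flag $target"
rm -rf "$scratch"
```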
-
Man... so many more options than Windows... makes the head swim more than a bit.
OK, in a private chat with Scott, he mentioned that this was an LVM mapper (where my VM storage is). I'm assuming you know it's LVM because of the "mapper" term in the path?
Is LVM a file system? Or something higher than that? Like the RAID layer before the file system?
-
@Dashrender said:
Man... so many more options than Windows... makes the head swim more than a bit.
OK, in a private chat with Scott, he mentioned that this was an LVM mapper (where my VM storage is). I'm assuming you know it's LVM because of the "mapper" term in the path?
Is LVM a file system? Or something higher than that? Like the RAID layer before the file system?
Lower. LVM sits between the file system and the disk drivers.
-
@Dashrender said:
OK, in a private chat with Scott, he mentioned that this was an LVM mapper (where my VM storage is). I'm assuming you know it's LVM because of the "mapper" term in the path?
He and I were going through Linux stuff yesterday offline.
I wonder how many Linux conversations he has per day, LOL.
-
@Dashrender said:
Man... so many more options than Windows... makes the head swim more than a bit.
OK, in a private chat with Scott, he mentioned that this was an LVM mapper (where my VM storage is). I'm assuming you know it's LVM because of the "mapper" term in the path?
Is LVM a file system? Or something higher than that? Like the RAID layer before the file system?
Not an LVM mapper; LVM and a mapper. You can double-check that it is LVM by using...
lvs
-
@Dashrender said:
Is LVM a file system? Or something higher than that? Like the RAID layer before the file system?
LVM = Logical Volume Manager
Physical Device -> RAID Layer -> LVM -> Volume -> Filesystem
Same as on Windows. You know the Windows LVM layer as "Dynamic Disks".
-
@BRRABill said:
I wonder how many Linux conversations he has per day, LOL.
Rather a lot. However, a lot of them are actually Linux-triggered general conversations. Like LVM: it isn't a Linux thing, it's a generic OS thing that Windows admins are often encouraged to ignore or never really grok.
-
@scottalanmiller said:
@Dashrender said:
Is LVM a file system? Or something higher than that? Like the RAID layer before the file system?
LVM = Logical Volume Manager
Physical Device -> RAID Layer -> LVM -> Volume -> Filesystem
Same as on Windows. You know the Windows LVM layer as "Dynamic Disks".
Yeah, I rarely use Dynamic Disks. In my situations it's not a common need.
-
@Dashrender said:
Yeah, I rarely use Dynamic Disks. In my situations it's not a common need.
The big reason for any LVM is risk mitigation, not a need at the outset. You use it so that you are prepared for the unknown. Plus it is what normally provides for snapshots.
-
@scottalanmiller said:
@Dashrender said:
Yeah, I rarely use Dynamic Disks. In my situations it's not a common need.
The big reason for any LVM is risk mitigation, not a need at the outset. You use it so that you are prepared for the unknown. Plus it is what normally provides for snapshots.
So a non-VM'ed Linux machine can take a snapshot?
Let me ask that another way.
A bare-metal Linux box can do a snapshot if using LVM?
-
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
Yeah, I rarely use Dynamic Disks. In my situations it's not a common need.
The big reason for any LVM is risk mitigation, not a need at the outset. You use it so that you are prepared for the unknown. Plus it is what normally provides for snapshots.
So a non-VM'ed Linux machine can take a snapshot?
Let me ask that another way.
A bare-metal Linux box can do a snapshot if using LVM?
Yes. This isn't a snapshot at the hypervisor level; this is a snapshot below the filesystem. The Windows analogy, a bad one but still, is the "previous versions" feature.
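For the curious, the snapshot itself is just a couple of lvcreate/lvremove commands. This sketch assumes a volume group named vg0 containing a logical volume named data (both names are my invention, not from this thread), and it guards itself so the real commands only run where that VG actually exists:

```shell
# Take, then discard, a 1 GiB copy-on-write snapshot of vg0/data.
# Requires root and the lvm2 tools; skips gracefully otherwise.
if command -v lvcreate >/dev/null 2>&1 \
        && [ "$(id -u)" -eq 0 ] \
        && vgs vg0 >/dev/null 2>&1; then
    lvcreate --snapshot --name data-snap --size 1G vg0/data
    # ...mount /dev/vg0/data-snap read-only and back it up here...
    lvremove --yes vg0/data-snap
    status="snapshot created and removed"
else
    status="skipped: no vg0 volume group here; commands shown for reference"
fi
echo "$status"
```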
-
@Dashrender said:
A bare-metal Linux box can do a snapshot if using LVM?
Of course. Every enterprise OS has had it since the 1990s, except Windows, and Windows has had it since only a little bit later than that. It was standard long before virtualization started using it in the AMD64 space.
-
@coliver said:
Yes. This isn't a snapshot at the hypervisor level; this is a snapshot below the filesystem. The Windows analogy, a bad one but still, is the "previous versions" feature.
Windows VSS is a direct "copy" (by feature, not by implementation) of the Linux LVM system which, in turn, was a copy of the one from AIX.