Linux is one of the world's most powerful and popular operating systems. It was originally developed by Linus Benedict Torvalds at the age of 21. At present there are more than 300 flavors of Linux available, and you can choose among them depending on the kind of applications you want to run. Linux is free and open-source software and, generally speaking, it is largely free from viruses and other malware infections.
Tuesday, December 13, 2016
How to Compress and Extract Files Using the tar Command on Linux
The tar command on Linux is often used to create .tar.gz or .tgz archive files, also called “tarballs.” This command has a large number of options, but you just need to remember a few letters to quickly create archives with tar. The tar command can extract the resulting archives, too.
The GNU tar command included with Linux distributions has integrated compression. It can create a .tar archive and then compress it with gzip or bzip2 compression in a single command. That’s why the resulting file is a .tar.gz file or .tar.bz2 file.
Compress an Entire Directory or a Single File
Use the following command to compress an entire directory or a single file on Linux. It'll also compress every other directory inside a directory you specify; in other words, it works recursively.
tar -czvf name-of-archive.tar.gz /path/to/directory-or-file
Here’s what those switches actually mean:
- -c: Create an archive.
- -z: Compress the archive with gzip.
- -v: Display progress in the terminal while creating the archive, also known as “verbose” mode. The v is always optional in these commands, but it’s helpful.
- -f: Allows you to specify the filename of the archive.
Let’s say you have a directory named “stuff” in the current directory and you want to save it to a file named archive.tar.gz. You’d run the following command:
tar -czvf archive.tar.gz stuff
Or, let’s say there’s a directory at /usr/local/something on the current system and you want to compress it to a file named archive.tar.gz. You’d run the following command:
tar -czvf archive.tar.gz /usr/local/something
Compress Multiple Directories or Files at Once
tar -czvf archive.tar.gz /home/ubuntu/Downloads /usr/local/stuff /home/ubuntu/Documents/notes.txt
Just list as many directories or files as you want to back up.
Exclude Directories and Files
In some cases, you may wish to compress an entire directory, but not include certain files and directories. You can do so by appending an --exclude switch for each directory or file you want to exclude.
For example, let’s say you want to compress /home/ubuntu, but you don’t want to compress the /home/ubuntu/Downloads and /home/ubuntu/.cache directories. Here’s how you’d do it:
tar -czvf archive.tar.gz /home/ubuntu --exclude=/home/ubuntu/Downloads --exclude=/home/ubuntu/.cache
The --exclude switch is very powerful. It doesn't just take names of directories and files; it actually accepts patterns, so there's a lot more you can do with it. For example, you could archive an entire directory and exclude all .mp4 files with the following command (the pattern is quoted so the shell doesn't expand it before tar sees it):
tar -czvf archive.tar.gz /home/ubuntu --exclude='*.mp4'
Use bzip2 Compression Instead
While gzip compression is most frequently used to create .tar.gz or .tgz files, tar also supports bzip2 compression. This allows you to create bzip2-compressed files, often named .tar.bz2, .tar.bz, or .tbz files. To do so, just replace the -z for gzip in the commands here with a -j for bzip2.
Gzip is faster, but it generally compresses a bit less, so you get a somewhat larger file. Bzip2 is slower, but it compresses a bit more, so you get a somewhat smaller file. Gzip is also more common, with some stripped-down Linux systems including gzip support by default, but not bzip2 support. In general, though, gzip and bzip2 are practically the same thing and both will work similarly.
For example, instead of the first example we provided for compressing the stuff directory, you’d run the following command:
tar -cjvf archive.tar.bz2 stuff
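If you create both archives from the same directory, you can see the trade-off yourself. The following is a rough sketch; the actual sizes and timings depend entirely on your files:
ls -lh archive.tar.gz archive.tar.bz2
Prefixing the creation commands with time (for example, time tar -cjvf archive.tar.bz2 stuff) shows the speed difference as well.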
Extract an Archive
Once you have an archive, you can extract it with the tar command. The following command will extract the contents of archive.tar.gz to the current directory.
tar -xzvf archive.tar.gz
It's the same as the archive creation command we used above, except that the -x switch replaces the -c switch. This specifies that you want to extract an archive instead of creating one.
You may want to extract the contents of the archive to a specific directory. You can do so by appending the -C switch to the end of the command. For example, the following command will extract the contents of the archive.tar.gz file to the /tmp directory.
tar -xzvf archive.tar.gz -C /tmp
If the file is a bzip2-compressed file, replace the “z” in the above commands with a “j”.
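For example, to extract a hypothetical archive.tar.bz2 to the /tmp directory:
tar -xjvf archive.tar.bz2 -C /tmp
Recent versions of GNU tar can also detect the compression type automatically when extracting, so a plain tar -xvf archive.tar.bz2 will usually work as well.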
This is the simplest possible usage of the tar command. The command includes a large number of additional options, so we can't possibly list them all here. For more information, run the info tar command at the shell to view the tar command's detailed information page. Press the q key to quit the information page when you're done. You can also read tar's manual online.
If you’re using a graphical Linux desktop, you could also use the file-compression utility or file manager included with your desktop to create or extract .tar files. On Windows, you can extract and create .tar archives with the free 7-Zip utility.
Friday, December 9, 2016
Duplicate PV Warnings for Devices in LVM Commands
Duplicate PV Warnings for Multipathed Devices
When using LVM with multipathed storage, some LVM commands (such as vgs or lvchange) may display messages such as the following when listing a volume group or logical volume.
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/dm-5 not /dev/sdd
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowerb not /dev/sde
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sddlmab not /dev/sdf
After providing information on the root cause for these warnings, this section describes how to address this issue in the following two cases.
- The two devices displayed in the output are both single paths to the same device
- The two devices displayed in the output are both multipath maps
7.8.1. Root Cause of Duplicate PV Warning
With a default configuration, LVM commands will scan for devices in /dev and check every resulting device for LVM metadata. This is caused by the default filter in the /etc/lvm/lvm.conf file, which is as follows:
filter = [ "a/.*/" ]
When using Device Mapper Multipath or other multipath software such as EMC PowerPath or Hitachi Dynamic Link Manager (HDLM), each path to a particular logical unit number (LUN) is registered as a different SCSI device, such as /dev/sdb or /dev/sdc. The multipath software will then create a new device that maps to those individual paths, such as /dev/mapper/mpath1 or /dev/mapper/mpatha for Device Mapper Multipath, /dev/emcpowera for EMC PowerPath, or /dev/sddlmab for Hitachi HDLM. Since each LUN has multiple device nodes in /dev that point to the same underlying data, they all contain the same LVM metadata, and thus LVM commands will find the same metadata multiple times and report it as duplicate.
These duplicate messages are only warnings and do not mean the LVM operation has failed. Rather, they are alerting the user that only one of the devices has been used as a physical volume and the others are being ignored. If the messages indicate the incorrect device is being chosen or if the warnings are disruptive to users, then a filter can be applied to search only the necessary devices for physical volumes, and to leave out any underlying paths to multipath devices.
7.8.2. Duplicate Warnings for Single Paths
The following example shows a duplicate PV warning in which the duplicate devices displayed are both single paths to the same device. In this case, both /dev/sdd and /dev/sdf can be found under the same multipath map in the output of the multipath -ll command.
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sdd not /dev/sdf
To prevent this warning from appearing, you can configure a filter in the /etc/lvm/lvm.conf file to restrict the devices that LVM will search for metadata. The filter is a list of patterns that will be applied to each device found by a scan of /dev (or the directory specified by the dir keyword in the /etc/lvm/lvm.conf file).
Patterns are regular expressions delimited by any character and preceded by a (for accept) or r (for reject). The list is traversed in order, and the first regex that matches a device determines if the device will be accepted or rejected (ignored). Devices that don't match any patterns are accepted. For general information on LVM filters, see Section 5.5, "Controlling LVM Device Scans with Filters".
The filter you configure should include all devices that need to be checked for LVM metadata, such as the local hard drive with the root volume group on it and any multipathed devices. By rejecting the underlying paths to a multipath device (such as /dev/sdb, /dev/sdd, and so on), you can avoid these duplicate PV warnings, since each unique metadata area will only be found once, on the multipath device itself.
The following examples show filters that will avoid duplicate PV warnings due to multiple storage paths being available.
- This filter accepts the second partition on the first hard drive (/dev/sda2) and any device-mapper-multipath devices, while rejecting everything else.
filter = [ "a|/dev/sda2$|", "a|/dev/mapper/mpath.*|", "r|.*|" ]
- This filter accepts all HP SmartArray controllers and any EMC PowerPath devices.
filter = [ "a|/dev/cciss/.*|", "a|/dev/emcpower.*|", "r|.*|" ]
- This filter accepts any partitions on the first IDE drive and any multipath devices.
filter = [ "a|/dev/hda.*|", "a|/dev/mapper/mpath.*|", "r|.*|" ]
Note
When adding a new filter to the /etc/lvm/lvm.conf file, ensure that the original filter is either commented out with a # or is removed.
Once a filter has been configured and the /etc/lvm/lvm.conf file has been saved, check the output of these commands to ensure that no physical volumes or volume groups are missing.
# pvscan
# vgscan
You can also test a filter on the fly, without modifying the /etc/lvm/lvm.conf file, by adding the --config argument to the LVM command, as in the following example.
# lvs --config 'devices{ filter = [ "a|/dev/emcpower.*|", "r|.*|" ] }'
Note
Testing filters using the --config argument will not make permanent changes to the server's configuration. Make sure to include the working filter in the /etc/lvm/lvm.conf file after testing.
After configuring an LVM filter, it is recommended that you rebuild the initrd image with the dracut command so that only the necessary devices are scanned upon reboot.
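For example, on a system that uses dracut, rebuilding the initramfs for the currently running kernel might look like the following (the image path shown is typical for Red Hat based systems and may differ on yours):
# dracut --force /boot/initramfs-$(uname -r).img $(uname -r)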
7.8.3. Duplicate Warnings for Multipath Maps
The following examples show a duplicate PV warning for two devices that are both multipath maps. In these examples we are not looking at two different paths, but two different devices.
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/mapper/mpatha not /dev/mapper/mpathc
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowera not /dev/emcpowerh
This situation is more serious than duplicate warnings for devices that are both single paths to the same device, since these warnings often mean that the machine has been presented devices which it should not be seeing (for example, LUN clones or mirrors). In this case, unless you have a clear idea of what devices should be removed from the machine, the situation could be unrecoverable. It is recommended that you contact Red Hat Technical Support to address this issue.
Monday, December 5, 2016
Raid configuration and troubleshooting steps
1. Create a new RAID array
Create (mdadm --create) is used to create a new array:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
or using the compact notation:
mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[ab]1
2. /etc/mdadm.conf
/etc/mdadm.conf or /etc/mdadm/mdadm.conf (on Debian) is the main configuration file for mdadm. After we create our RAID arrays, we add them to this file using:
mdadm --detail --scan >> /etc/mdadm.conf
or on Debian:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
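The lines appended by mdadm --detail --scan are ARRAY definitions; a hypothetical entry might look like the following (the array name and UUID shown here are placeholders, not real values):
ARRAY /dev/md0 metadata=1.2 name=myhost:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx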
3. Remove a disk from an array
We can't remove a disk directly from the array unless it is marked as failed, so we first have to fail it (if the drive has actually died, it is normally already in a failed state and this step is not needed):
mdadm --fail /dev/md0 /dev/sda1
and now we can remove it:
mdadm --remove /dev/md0 /dev/sda1
This can be done in a single step using:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
4. Add a disk to an existing array
We can add a new disk to an array (usually to replace a failed one):
mdadm --add /dev/md0 /dev/sdb1
5. Verifying the status of the RAID arrays
We can check the status of the arrays on the system with:
cat /proc/mdstat
or
mdadm --detail /dev/md0
The output of this command will look like:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
19542976 blocks [2/2] [UU]
md2 : active raid1 sdb4[1] sda4[0]
223504192 blocks [2/2] [UU]
Here we can see that both drives are in use and working fine ([UU]). A failed drive will show as (F), while a degraded array will show the missing disk as an underscore, for example [_U] instead of [UU].
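For illustration (a hypothetical snippet, not taken from the output above), a degraded RAID1 that has lost its first disk might look like this:
md0 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]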
Note: while monitoring the status of a RAID rebuild operation, using watch can be useful:
watch cat /proc/mdstat
6. Stop and delete a RAID array
If we want to completely remove a RAID array, we have to stop it first and then remove it:
mdadm --stop /dev/md0
mdadm --remove /dev/md0
and finally we can even delete the superblock from the individual drives:
mdadm --zero-superblock /dev/sda
Finally, when using RAID1 arrays, where we create identical partitions on both drives, this command can be useful for copying the partition table from sda to sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb
(This will dump the partition table of sda onto sdb, completely replacing the existing partitions on sdb, so be sure you want this before running the command, as it will not warn you at all.)
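If you prefer a safer, two-step approach (a sketch; the dump file name is arbitrary), save the partition table to a file, review it, and only then apply it to the second disk:
sfdisk -d /dev/sda > sda-partitions.dump
# inspect sda-partitions.dump before continuing
sfdisk /dev/sdb < sda-partitions.dump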
There are many other uses of mdadm, particular to each type of RAID level, and I would recommend using the manual page (man mdadm) or the help output (mdadm --help) if you need more details. Hopefully these quick examples will put you on the fast track to how mdadm works.
Check the status of a RAID device
[root@bcane ~]# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Sat Jul 2 13:56:38 2011
Raid Level : raid1
Array Size : 26212280 (25.00 GiB 26.84 GB)
Used Dev Size : 26212280 (25.00 GiB 26.84 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Jul 2 13:56:47 2011
State : clean, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Rebuild Status : 10% complete
Name : bcane.virtuals.local:10 (local to host bcane.virtuals.local)
UUID : 10a96ed5:92dc48e6:04b2bf43:3539e089
Events : 1
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 49 1 active sync /dev/sdd1
In order to remove a drive, it must first be marked as faulty. A drive can be marked as faulty either through an actual failure or manually with the -f/--fail flag.
[root@bcane ~]# mdadm /dev/md10 -f /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md10
[root@bcane ~]# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Sat Jul 2 13:56:38 2011
Raid Level : raid1
Array Size : 26212280 (25.00 GiB 26.84 GB)
Used Dev Size : 26212280 (25.00 GiB 26.84 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Jul 2 14:00:18 2011
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Name : bcane.virtuals.local:10 (local to host bcane.virtuals.local)
UUID : 10a96ed5:92dc48e6:04b2bf43:3539e089
Events : 19
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 49 1 active sync /dev/sdd1
0 8 33 - faulty spare /dev/sdc1
Now that the drive is marked as failed/faulty you can remove it using the -r/--remove flag.
[root@bcane ~]# mdadm /dev/md10 -r /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md10
[root@bcane ~]# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Sat Jul 2 13:56:38 2011
Raid Level : raid1
Array Size : 26212280 (25.00 GiB 26.84 GB)
Used Dev Size : 26212280 (25.00 GiB 26.84 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Sat Jul 2 14:02:04 2011
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : bcane.virtuals.local:10 (local to host bcane.virtuals.local)
UUID : 10a96ed5:92dc48e6:04b2bf43:3539e089
Events : 20
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 49 1 active sync /dev/sdd1
If you want to re-add the device you can do so with the -a flag.
[root@bcane ~]# mdadm /dev/md10 -a /dev/sdc1
mdadm: re-added /dev/sdc1
[root@bcane ~]# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Sat Jul 2 13:56:38 2011
Raid Level : raid1
Array Size : 26212280 (25.00 GiB 26.84 GB)
Used Dev Size : 26212280 (25.00 GiB 26.84 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Jul 2 18:02:21 2011
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 4% complete
Name : bcane.virtuals.local:10 (local to host bcane.virtuals.local)
UUID : 10a96ed5:92dc48e6:04b2bf43:3539e089
Events : 23
Number Major Minor RaidDevice State
0 8 33 0 spare rebuilding /dev/sdc1
1 8 49 1 active sync /dev/sdd1
One thing to keep an eye out for is that you need to specify the RAID device when running these commands. If they are performed without specifying the RAID device, the flags take on a different meaning, as the sketch below illustrates.
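As a hedged illustration of that point (the flag meanings are taken from the mdadm manual page and depend on the mode mdadm is running in):
# Manage mode: the array is named first, so -f means --fail and -a means --add
mdadm /dev/md10 -f /dev/sdc1
mdadm /dev/md10 -a /dev/sdc1
# Assemble mode: here -f is short for --force instead
mdadm -A /dev/md10 -f /dev/sdc1 /dev/sdd1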
Hotplug
mdadm versions < 3.1.2
In older versions of mdadm, hotplug and hot-unplug support is present, but for fully automatic functionality we need to employ some bits of scripting. First of all, look at what mdadm provides by manually trying its features from the command line:
Hot-unplug from command line
- If the physical disk is still alive:
mdadm --fail /dev/mdX /dev/sdYZ
mdadm --remove /dev/mdX /dev/sdYZ
Do this for all RAID arrays containing partitions of the failed disk (a sketch of this case follows after the list). Then the disk can be hot-unplugged without any problems.
- If the physical disk is dead or unplugged, just do
mdadm /dev/mdX --fail detached --remove detached
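Here is a hypothetical sketch of the "disk still alive" case, assuming the failing disk is /dev/sdb and its partitions sdb1 and sdb2 are members of /dev/md0 and /dev/md1 respectively:
mdadm --fail /dev/md0 /dev/sdb1 && mdadm --remove /dev/md0 /dev/sdb1
mdadm --fail /dev/md1 /dev/sdb2 && mdadm --remove /dev/md1 /dev/sdb2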
Fully automated hotplug and hot-unplug using UDEV rules
In case you need fully automatic hotplug and hot-unplug event handling, the UDEV "add" and "remove" events can be used for this.
Note: the following code was validated on Debian 5 (Lenny), with kernel 2.6.26 and udevd version 125.
Important notes:
- The rule for the "add" event MUST be placed in a file positioned after the "persistent_storage.rules" file, because it uses the ENV{ID_FS_TYPE} condition, which is produced by the persistent_storage.rules file during "add" event processing.
- The rule for the "remove" event can reside in any file in the UDEV rules chain, but let's keep it together with the "add" rule :-)
For this reason, in Debian Lenny I placed the mdadm hotplug rules in the file /etc/udev/rules.d/66-mdadm-hotplug.rules. This is the content of the file:
SUBSYSTEM!="block", GOTO="END_66_MDADM"
ENV{ID_FS_TYPE}!="linux_raid_member", GOTO="END_66_MDADM"
ACTION=="add", RUN+="/usr/local/sbin/handle-add-old $env{DEVNAME}"
ACTION=="remove", RUN+="/usr/local/sbin/handle-remove-old $name"
LABEL="END_66_MDADM"
(these rules are based on the UDEV rules contained in the hot-unplug patches by Doug Ledford)
And here are the scripts which are called from these rules:
#!/bin/bash
# This is /usr/local/sbin/handle-add-old
MDADM=/sbin/mdadm
LOGGER=/usr/bin/logger
mdline=`mdadm --examine --scan $1`
# mdline contains something like "ARRAY /dev/mdX level=raid1 num-devices=2 UUID=..."
mddev=${mdline#* }    # delete "ARRAY " and return the result as mddev
mddev=${mddev%% *}    # delete everything behind /dev/mdX
$LOGGER $0 $1
if [ -n "$mddev" ]; then
    $LOGGER "Adding $1 into RAID device $mddev"
    log=`$MDADM -a $mddev $1 2>&1`
    $LOGGER "$log"
fi
#!/bin/bash
# This is /usr/local/sbin/handle-remove-old
MDADM=/sbin/mdadm
LOGGER=/usr/bin/logger
$LOGGER "$0 $1"
mdline=`grep $1 /proc/mdstat`
# mdline contains something like "md0 : active raid1 sda1[0] sdb1[1]"
mddev=${mdline% :*}    # delete everything from " :" till the end of the line and return the result as mddev
$LOGGER "$0: Trying to remove $1 from $mddev"
log=`$MDADM /dev/$mddev --fail detached --remove detached 2>&1`
$LOGGER $log
mdadm versions >= 3.1.2
The hot-unplug support introduced in mdadm version 3.1.2 removed the need for the scripting you see above. If your Linux distribution contains this or a later version of mdadm, you hopefully have fully automatic hotplug and hot-unplug without any hassles.
Examples of behavior WITHOUT the automatic hotplug/hot-unplug
Let's assume the following RAID configuration:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      3903680 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
      224612672 blocks [2/2] [UU]
md0 contains the system; md1 is for data (but is not used yet).
Hot-unplug
If we hot-unplug the disk /dev/sda, the /proc/mdstat will show:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
      3903680 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
      224612672 blocks [2/2] [UU]
We see that sda1 now has role [2]. Since RAID1 needs only two components, [0] and [1], the [2] means "spare disk", and it is marked (F)ailed.
But why does the system think that /dev/sda2 in /dev/md1 is still OK? Because my system hasn't tried to access /dev/md1 yet (I have no data on /dev/md1). /dev/sda2 will be marked as faulty automatically as soon as I try to access /dev/md1:
# dd if=/dev/md1 of=/dev/null bs=1 count=1
1+0 records in
1+0 records out
1 byte (1 B) copied, 0.0184819 s, 0.1 kB/s
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2](F) sdb1[1]
      3903680 blocks [2/1] [_U]
md1 : active raid1 sda2[2](F) sdb2[1]
      224612672 blocks [2/1] [_U]
At any point after the disk has been unplugged, we can remove its partitions from an array only by this command:
# mdadm /dev/md0 --fail detached --remove detached
mdadm: hot removed 8:1
Removing a RAID Device
To remove an existing RAID device, first deactivate it by running the following command as root:
mdadm --stop raid_device
Once deactivated, remove the RAID device itself:
mdadm --remove raid_device
Finally, zero superblocks on all devices that were associated with the particular array:
mdadm --zero-superblock component_device…
Example 6.5. Removing a RAID device
Assume the system has an active RAID device, /dev/md3, with the following layout (that is, the RAID device created in Example 6.4, "Extending a RAID device"):
~]# mdadm --detail /dev/md3 | tail -n 4
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
In order to remove this device, first stop it by typing the following at a shell prompt:
~]# mdadm --stop /dev/md3
mdadm: stopped /dev/md3
Once stopped, you can remove the /dev/md3 device by running the following command:
~]# mdadm --remove /dev/md3
Finally, to remove the superblocks from all associated devices, type:
~]# mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1