
File system in Linux – Part II

A file system is a method of storing data in an organized fashion on the disk. Every partition on the disk, except the MBR and the extended partition, must be assigned a file system in order to store data.

A file system is applied to a partition by formatting it with a particular file system type.

The number of file system types may exceed the number of operating systems. While RHEL can work with many of these formats, the default is ext4. Many users enable other file systems such as ReiserFS, but Red Hat may not support them.

Before the partitions can be used, however, you need to create a file system on each one.

The default file system for RHEL 5 is ext3, and it has been changed to ext4 for RHEL 6. Both of these file systems offer a journaling option, which has two main advantages:

It can help speed up recovery after a disk failure, because journaling file systems keep a "journal" of the file system's metadata.

It can check drives faster during the system boot process.

The journaling feature isn't available on older file systems such as ext2.

The first Linux operating systems used the extended file system (ext). Until the past few years, Red Hat Linux operating systems formatted their partitions by default to the second extended file system (ext2); for RHEL 5, the default was the third extended file system (ext3). The new default for RHEL 6 is the fourth extended file system (ext4).

The ext family is the most widely used file system in Linux, whereas VFAT is the file system used to maintain common storage between Linux and Windows (in the case of multiple operating systems).



                              Ext2                             Ext3                             Ext4
Stands for                    Second extended file system      Third extended file system       Fourth extended file system
Introduced in                 1993                             2001                             2008
Journaling                    Does not support journaling      Supports journaling              Supports journaling
Maximum file size             16 GB to 2 TB                    16 GB to 2 TB                    16 GB to 16 TB
Maximum file system size      2 TB to 32 TB                    2 TB to 32 TB                    1 EB (exabyte)

Note: 1 EB = 1024 PB (petabytes); 1 PB = 1024 TB (terabytes).
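The size units in the table step up by factors of 1024, so the jump from ext3's 32 TB limit to ext4's 1 EB limit is larger than it looks. A quick sanity check of the conversion in shell arithmetic:

```shell
# Each unit is 1024x the previous: TB -> PB -> EB.
tb_per_pb=1024
pb_per_eb=1024
tb_per_eb=$((tb_per_pb * pb_per_eb))
echo "1 EB = ${pb_per_eb} PB = ${tb_per_eb} TB"
# ext4's 1 EB limit versus ext3's 32 TB file system limit:
echo "ext4/ext3 max fs size ratio: $((tb_per_eb / 32))"
```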


-> There are many types of file systems

Swap: The Linux swap file system is associated with dedicated swap partitions. You probably created at least one swap partition when you installed RHEL.

MS-DOS & VFAT:  These file systems allow you to read MS-DOS-formatted file systems. MS-DOS lets you read pre-Windows 95 partitions, or regular Windows partitions within the limits of short file names. VFAT lets you read Windows 9x/NT/2000/Vista/7 partitions formatted to the FAT16 or FAT32 file systems.

ISO 9660: The standard file system for CD-ROMs. It is also known as the High Sierra File System, or HSFS, on Unix systems.

/proc:  A Linux virtual file system. Virtual means that it doesn't occupy real disk space; instead, files are created as needed. It is used to provide information on kernel configuration and device status.

/dev/pts: The Linux implementation of the Open Group's Unix 98 PTY support.

JFS:  IBM’s journaled file system, commonly used on IBM enterprise servers.

ReiserFS:  The ReiserFS file system is resizable and supports fast journaling. It's most efficient when most of the files are very small or very large. It's based on the concept of "balanced trees". It is no longer supported by RHEL, or even by its former main proponent, SUSE.

XFS: Developed by Silicon Graphics as a journaling file system, it supports very large files: as of this writing, XFS files are limited to 9×10^18 bytes. Do not confuse this file system with the X font server; both use the same acronym (xfs).

NTFS:  The current Microsoft Windows file system.

  • Creating a file system:
  • When you're creating a file system, there are many different ways to complete the same task.
  • They're all based on the mkfs command, which acts as a front end to file-system-specific commands such as mkfs.ext2, mkfs.ext3, and mkfs.ext4.

Syn:       mkfs      [options]             [device]

-j  —->        creates a journal
-m  —->       specifies a reserved percentage of blocks on a file system

  • There are several ways to apply formatting to a volume. For example, if you've just created a partition on /dev/sda5:
    #mkfs -t ext4 /dev/sda5
    #mke2fs -t ext4 /dev/sda5
    #mkfs.ext4 /dev/sda5
  • If you want to reformat an existing partition, logical volume, or RAID array, take the following precautions:

- Back up any existing data on the partition
- Unmount the partition
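The two precautions above can be wrapped in a small script. This is only a sketch under assumptions (the device, mount point, and backup path are hypothetical); it runs in dry-run mode and only echoes the commands, so nothing is destroyed by accident.

```shell
#!/bin/sh
# Hypothetical sketch: back up, unmount, then reformat a partition.
# DRY_RUN=1 only prints the commands instead of executing them.
DEV=/dev/sda5           # hypothetical device
MNT=/opt/company_data   # hypothetical mount point
DRY_RUN=1

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run tar -czf /root/sda5-backup.tar.gz -C "$MNT" .   # 1) back up existing data
run umount "$MNT"                                   # 2) unmount the partition
run mkfs.ext4 "$DEV"                                # 3) only now reformat
```

Setting DRY_RUN=0 would execute the real commands, which require root and a real device.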

  • You can format partitions, logical volumes, and RAID arrays to other file systems. The options available in RHEL 6 include:

– mkfs.cramfs       Creates a compressed ROM file system

– mkfs.ext2          Formats a volume to the ext2 file system

– mkfs.ext3          Formats a volume to the ext3 file system

– mkfs.ext4          Formats a volume to the ext4 file system

– mkfs.msdos      (or)   – mkfs.vfat    (or)    – mkdosfs   —-> Formats a partition to the Microsoft-compatible VFAT file system; it does not create bootable file systems

– mkfs.xfs            Formats a volume to the XFS file system developed by the former Silicon Graphics

– mkswap             Formats a volume to the Linux swap file system




  • One advantage of some rebuild distributions is the availability of useful packages not supported by or available from Red Hat. For example, CentOS 6 includes the ntfsprogs package, which supports the mounting of NTFS partitions.
  • Creating a swap:
  • In Linux, swap space is used as "scratch space" for the system. When the system runs low on memory, it uses the swap as a virtual memory area to swap items in and out of physical memory. Although it should not be used in place of physical memory because it is much slower, it's a critical piece of any system.
  • There are two different types of swap that you can have:
    • File swap
    • Partition swap

Partition swap in Linux:

Step 1):   —-> Create a partition

#fdisk /dev/sda

Step 2): —-> Update the kernel

#partprobe /dev/sda

Step 3): —> Use the mkswap command to create a swap space

Syn: mkswap       [options]             [Device]

-c            Checks the device for bad blocks before creating the swap area.

#mkswap /dev/sda8

Step4:  —>  Enable the swap partition

#swapon /dev/sda8

Step  5):  —> Verify the swap is running correctly

#swapon              -s

Step 6):   —> If you want to turn off the swap, you can use the swapoff command.

Syn:  swapoff     [options]             [device]

-a  —->         Disables all swap devices

-e   —->        silently skips devices that don’t exist

-s  —->         verifies that the swap is running

File swap in Linux:

You can use the dd command to reserve space for another swap; here the swap file is created under /mnt (on the /dev/sda9 partition).

—-> The dd command can be used for many different purposes and has a huge syntax.

Step 1): —-> Reserve about 1GB of space for the swap

#dd if=/dev/zero of=/mnt/file_swap bs=1024 count=1000000
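The size dd reserves is bs × count bytes. Note that 1024 × 1,000,000 bytes is slightly under a true GiB, which a little shell arithmetic makes clear:

```shell
# dd reserves bs * count bytes for the swap file.
bs=1024
count=1000000
bytes=$((bs * count))
echo "swap file size: $bytes bytes"
echo "in MiB: $((bytes / 1024 / 1024))"     # just under 1 GiB
# For exactly 1 GiB you could use count=1048576 instead.
```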


Step 2): —-> Just as with partition swaps, you can now create a swap space, specifying the file just created
#mkswap /mnt/file_swap


Step 3): —-> Enable the swap
#swapon /mnt/file_swap


Step 4): —-> Again, you can verify that the swap is enabled

#swapon -s



The big difference between the two swap types is that a file swap is easier to manage: you can simply move the swap file to another disk if you want, whereas a swap partition would need to be removed, re-created, and so on. Although Red Hat recommends using a partition swap, file swaps are fast enough these days, with less administrative overhead, that there is little reason not to use them instead. One word of caution, though: use only one swap (of either type) per physical disk.

Mounting a file system:

  • After formatting a partition, you cannot yet add data to it. In order to add data to the partition, it must be mounted.
  • Mounting is the procedure by which we attach a directory to the file system.
  • File systems can be mounted on any directory, which is referred to as a mount point. Every mount point is a directory.
  • If you mount a file system on a directory that is not empty, everything within that directory becomes inaccessible. Therefore, you should create a new directory as a mount point for each of your file systems.
  • There are only two commands for mounting file systems:
    mount                     mounts a file system
    umount                    unmounts a file system
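Because mounting over a non-empty directory hides its contents, a quick emptiness check before mounting is cheap insurance. A small sketch (the directory name here is hypothetical):

```shell
# Create a hypothetical mount point and verify it is empty before mounting.
mkdir -p /tmp/demo_mount_point

if [ -z "$(ls -A /tmp/demo_mount_point)" ]; then
    echo "empty: safe to use as a mount point"
else
    echo "not empty: existing files would become inaccessible after mounting"
fi
```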


Step 1): —-> Start by going to the /opt directory, where you can make some directories to serve as mount points.

#cd /opt

#mkdir   company_data

#mkdir   backup

Syn: mount [options]      [device]                              [mount_point]

-r  —->          mounts as read-only

-w —->          mounts as read/write (the default)

-L   —->          LABEL  mounts the file system with the name LABEL

-v   —->          provides verbose output

Step 2): Mount the two file systems

#mount /dev/sda6 /opt/company_data

#mount /dev/sda7 /opt/backup

  • Notice that you don’t specify a file system type or any mount options. The reason is that the mount command automatically detects the file system type and mounts it for you. By default the file system is also mounted with the defaults option (rw).

Step 3): —-> To unmount a file system:

Syn: umount [options]    [mount_point]

-f  —->          forces the unmount

-v  —->          provides verbose output

Step 4): —-> You can use the fuser and lsof commands to check for open files and users that are currently using files on a file system

Syn: fuser [options]        [mount_point/file system]

-c   —->        checks the mounted file system

-k  —->          Kills processes using the file system

-m —->         shows all processes using the file system.

-u  —->         Displays user IDs

-v  —->          Verbose output

  • Check to see what users are currently using the file system

#fuser -cu /dev/sda6     (or)       #lsof /dev/sda6

  • To kill the open connections, you can use the fuser cmd again:

#fuser -ck /opt/backup

  • Now you should be able to unmount the file system:

# umount /opt/backup

Now you know how to mount and unmount file systems, but there is something else you need to look at. If you reboot your system right now, all the file systems that you just mounted will no longer be available when the system comes back up. Why? The mount command is not persistent, so anything that is mounted with it will no longer be available across system reboots. To fix that, the system looks at two config files:

  • /etc/mtab contains a list of all currently mounted file systems
  • /etc/fstab mounts all listed file systems with given options at boot time.


  • View the /etc/mtab file:

#cat       /etc/mtab

Every time you mount or unmount a file system, this file is updated to always reflect what is currently mounted on the system.

  • You can also query it to check whether a particular file system is mounted:

#cat /etc/mtab | grep backup

  • You can use the mount command with no options to also view the currently mounted file systems:


  • Go through the /etc/fstab file. The file follows this syntax:
  • <device> <mount point> <file system type> <mount options>              <write data during shutdown>         <check sequence>
  • View the /etc/fstab file:


  • The first three fields should be fairly obvious because you have been working with them throughout the chapter. The fourth field defines the options that you can use to mount the file system. The fifth field defines whether data should be backed up (also called dumping) before a system shutdown or reboot occurs. This field commonly uses a value of 1. A value of 0 might be used if the file system is a temporary storage space for files, such as /tmp. The last field defines the order in which file system checking should take place. For the root file system, the value should be 1; everything else should be 2. If you have a removable file system (CD-ROM or external), you can define a value of 0 and skip the checking altogether. Because you want the two file systems created earlier to be mounted when the system boots, you can add two definitions for them here.
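Each fstab entry is simply a whitespace-separated line of six fields, so it is easy to pull apart with standard shell tools. The entry below is a sample shaped like the /dev/sda6 line used in this chapter:

```shell
# Sample fstab entry: device, mount point, type, options, dump, fsck order.
line="/dev/sda6 /opt/company_data ext3 defaults 0 0"

# Split the line into positional fields, just as the boot process reads them.
set -- $line
echo "device=$1 mountpoint=$2 type=$3 options=$4 dump=$5 fsck=$6"
```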
  • Open the /etc/fstab file for editing:


  • /dev/sda6 /opt/company_data      ext3       defaults               0            0
  • /dev/sda7 /opt/backup            ext3       defaults               0            0


  • You can use the mount command with the -a option to remount all file systems defined in the /etc/fstab file

#mount -a

  • Extra file system commands:

Label:  Labels enable you to identify a specific file system more easily with a common name, instead of /dev/sda6. An added benefit is that the file system keeps its label even if the underlying disk is switched with a new one.

Step 1):  —-> Take your file system offline

#umount /dev/sda6

Step 2): —-> Let's label the file system to denote that it's the company_data file system.

#e2label /dev/sda6   CData

Step 3) —-> You can use the same command to also verify:

#e2label /dev/sda6

Step 4)  : —->Find the file system you just labeled

Syn: findfs LABEL=<label> | UUID=<uuid>

#findfs LABEL=CData

—-> You can also query more information about the device using the blkid command.

Syn:  blkid           [options]

-s   —->         shows the specified tag(s)

<dev>  —->      specifies the device to probe

Step 5): —-> Combine the blkid cmd with grep for specific results

#blkid | grep CData

Step 6: —-> When you finish your maintenance, you can remount the file system with the new Label instead of the device path:

#mount LABEL=CData    /opt/company_data

—->You could even update the /etc/fstab file to use the label information instead of the device path.


LABEL=CData   /opt/company_data    ext3               defaults               0            0


—-> You can use the mount command to verify the label names

#mount -l

—-> You can also use the df command to view the usage information for your file systems.

Syn:  df [options]

-h   —->        specifies human-readable format

-l   —->          local file systems only

-T  —->         print the file system type

#df -h

#df -Th

  • Managing file system quotas:
  • Quotas are used to restrict the amount of disk space occupied by users or groups.
  • Quotas regulate the disk consumption of users and improve system performance.
  • Quotas are of two types:
    • User level
    • Group level
  • If we apply quotas at the group level, they affect only the primary users of that group.
  • Quotas can be applied only on quota-enabled partitions.
  • You need to install the required packages before you can use quotas on your system.

Step 1): To install the quota package
#yum install -y quota

Step 2): verify that the package was installed successfully

  • #rpm -qa | grep quota

Step 3): you can query quota support from the kernel with the following command

#grep -i config_quota /boot/config-`uname -r`

  • Now that you have a listing of the commands you can use, you first need to edit the /etc/fstab file to specify which file systems you want to utilize quotas.

Step 1): Open the /etc/fstab file and edit the following line:

/dev/sda6            /opt/company_data        ext3       defaults,usrquota,grpquota


  • Now you need to remount the /opt/company_data file system before the changes take effect.

Step 2:  you can accomplish this by using the mount command:

#mount -o remount /opt/company_data

Step 3): You can verify that the mount and quota options took effect correctly

#mount | grep company_data

There are two files that maintain quotas for users and groups.

aquota.user        user quota file

aquota.group      group quota file

  • These two files are automatically created in the top-level directory of the file system where you are turning on quotas; in this case, the /opt/company_data file system.

Step 4: to start the quota system, you use the quotacheck cmd.

Syn: quotacheck               [options][Partition]

-c  —->          Don’t read existing quota files

-u   —->        checks only user quotas

-g    —->       checks only group quotas

-m  —->        doesn’t remount the file system as read-only.

-v    —->        provides verbose output.

#quotacheck       -ugm      /opt/company_data

  • To verify that the quota files were created successfully

#ls          /opt/company_data

  • Enabling quotas: Normally, you would have to call the quotaon and quotaoff cmds to have the quota system enforced, but they are automatically called when the system boots up and shuts down.

Step 5): Run the cmd manually the first time just to make sure that quotas are turned on:

#quotaon -v /opt/company_data
  • Let’s briefly discuss the two different limits you can have when dealing with quotas:

Soft limit: Has a grace period that acts as an alarm, signaling when you are reaching your limit. If your grace period expires, you are required to delete files until you are once again under your limit.

  • If you don’t specify a grace period, the soft limit is the maximum number of files you can have.

Hard limit: Required only when a grace period exists for soft limits. If the hard limit exists, it is the maximum limit that you can hit before your grace period expires on the soft limit.

  • To work with quotas for users and groups, you need to do some conversions in your head here. Each block is equal to 1 KB.

Step 6): Set the limits for user1 by using the edquota cmd.

Syn:       edquota [-u/-g]  [username/groupname]

#edquota -u user1

File system          blocks             soft            hard         inodes           soft        hard
/dev/sda6            0                  20000           25000        0                0           0
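Since each quota block is 1 KB, the limits above convert directly to megabytes: 20,000 blocks is roughly 19.5 MB and 25,000 blocks roughly 24.4 MB. The conversion, done with awk to keep the decimals:

```shell
# Quota blocks are 1 KB each; convert the example limits to MB.
soft=20000
hard=25000
awk -v s="$soft" -v h="$hard" 'BEGIN {
    printf "soft limit: %.1f MB (%d KB)\n", s/1024, s
    printf "hard limit: %.1f MB (%d KB)\n", h/1024, h
}'
```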


Step 7): Again, you use the edquota cmd, but with a different option:

#edquota -t

  • Here, the current value is seven days for the block grace period. You should not give your users that much time to get their act together, so drop that limit to two days.

Tip: The edquota cmd offers a pretty cool feature. After you configure a quota and your limits for a single user, you can actually copy this over to other users as if it were a template. To do this, specify the user you want to use as a template and call the edquota cmd with the -p option.

#edquota -up user1 user2 user3

Step 8): Quota usage reports:

Syn:       repquota             [options]             [partition]

-a  —->         Reports on all non-NFS file systems with quotas turned on

-u   —->        Reports on user quotas

-g    —->       Reports on group quotas.

-v   —->         verbose output.

#repquota           -uv         /opt/company_data


  • File system security: Linux, like most operating systems, has a standard set of file permissions. Aside from these, it also has a more refined set of permissions implemented through access control lists.
  • This section covers both of these topics and how they are used to implement file system security for files, directories, and more.

Step 1): Installing the required package

#yum install -y acl

Step 2): Verify the package installation:

#rpm -qa | grep acl

Step 3): Before you can even use ACLs, however, you need to make sure that the file system has been mounted with the ACL parameter:

#mount | grep acl

Step 4): You can accomplish this using the following

#mount -t ext3 -o acl,remount /dev/sda7 /opt/backup

Step 5): If your file system isn't already mounted, you could also use the following.

#mount -t ext3 -o acl /dev/sda7     /opt/backup

Step 6): To verify, you can use the previous cmd:

#mount | grep acl

Step 7): Adjust the following line in your /etc/fstab file

/dev/sda7            /opt/backup       ext3       defaults,acl        1            2


Step 8): To make the changes take effect, you need to remount the file system

#mount -o remount /opt/backup

Now verify that your file system has the ACL options:

#mount | grep -i acl

  • The file system is now mounted properly with the ACL option, so we can start to look at the management cmds that pertain to ACLs:

getfacl   obtains the ACL from a file or directory

setfacl   sets or modifies an ACL

  • Step 1): Create a sample file on which you can test an ACL in /opt/backup

#cd         /opt/backup

#touch file1

  • Now you can use the getfacl cmd to view the ACL currently associated with the file.

Syn: getfacl [options] file

-d  —->         Displays the default ACL

-R   —->        Recurses into subdirectories.

#getfacl  file1

Syn:        setfacl [options]               file

-m   —->       Modifies an ACL

-x    —->        Removes an ACL

-n   —->        Doesn’t recalculate the mask

-R   —->        Recurses into subdirectories

Step 2): Set the test file so that user1 also has access to this file

#setfacl -m u:user1:rwx /opt/backup/file1

To check the ACL permissions again:

# getfacl file1

Step 3:  To remove the ACL for user1:

#setfacl -x u:user1 /opt/backup/file1

Verify the ACL has been removed:

#getfacl file1


File permissions and ACLs can get really complex if they aren't thought out ahead of time.

Step 4): If you have multiple ACLs set up on a single file, you can remove them all with the -b option instead of removing them one by one:

#setfacl -b file1
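getfacl output is line-oriented, so it is easy to inspect with standard tools. The heredoc string below is a hand-written sample shaped like typical getfacl output after the setfacl step above (the user1 entry is the hypothetical ACL); it shows how a named-user entry stands out from the base owner/group/other entries:

```shell
# Hand-written sample shaped like `getfacl file1` output after setfacl.
sample='# file: file1
# owner: root
# group: root
user::rw-
user:user1:rwx
group::r--
mask::rwx
other::r--'

# Named-user ACL entries have a non-empty middle field (user:NAME:perms);
# the base owner entry (user::rw-) leaves that field empty.
echo "$sample" | awk -F: '$1 == "user" && $2 != "" { print $2, $3 }'
```

On a real system you would pipe the actual `getfacl file1` output instead of the sample string.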

  • Logical Volume Manager (LVM):
  • LVM is a form of advanced partition management. The benefit of using LVM is ease of management, due to the way the disks are set up.
  • LVM is a method of allocating hard drive space into logical volumes that can be resized. To partition with LVM, the hard drive (or set of hard drives) is allocated to one or more physical volumes.
  • The physical volumes are combined into volume groups. Each volume group is divided into logical volumes, which are assigned mount points such as /home and /; these logical volumes are formatted with the ext3 file system.
  • LVM must follow the below sequence:
    • Physical volume
    • Volume group
    • Logical volume
  • Physical volume: The individual physical drives (or partitions) initialized for LVM are called physical volumes.
  • Volume group: A collection of physical volumes, assigned a name, from which we can create logical volumes.
  • Logical volume: Logical volumes are carved from the volume group; these are logical partitions that can be resized, formatted, mounted, etc.



Implementation of LVM:

Step 1): Install the required packages:

#yum install -y lvm*

Step 2): Verify that it is installed

#rpm -qa | grep lvm

Step 3): Create LVM partitions (four partitions)

#fdisk /dev/sda

—-> To update the kernel for rereading the partition table:

#partprobe /dev/sda

Creating an LVM partition:

Step 4): To create physical volumes:

#pvcreate /dev/sda{10,11,12,13}

—-> Verify that the physical volume was created successfully:

#pvdisplay /dev/sda10

Step 5): To create a volume group:

#vgcreate india /dev/sda{10,11,12}

—-> Verify that the volume group was created successfully:

#vgdisplay -v india

—-> When volume groups are created and initialized, the physical volumes are broken down into physical extents (the unit of measurement for LVM). This is significant because you can adjust how data is stored based on the size of each physical extent, defined when the volume group is created (the default is 4MB).
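With the default 4MB extent size, the sizes used in this section map directly to extent counts; for example, a 2,000MB logical volume consumes 500 extents. A quick check:

```shell
# Default physical extent size is 4 MB; LV sizes map to extent counts.
pe_size_mb=4
lv_size_mb=2000
extents=$((lv_size_mb / pe_size_mb))
echo "a ${lv_size_mb}MB LV uses ${extents} extents of ${pe_size_mb}MB each"
```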

Step 6): Create logical volumes:

—-> To create a logical volume, use the lvcreate cmd and specify the size of the partition that you'd like to create. The size can be specified in kilobytes, megabytes, gigabytes, or logical extents (LE). Like physical extents, logical extents are a unit of measure when dealing with logical volumes.

—-> Create a partition 2GB in size

#lvcreate -L 2000 india -n ap

—-> To verify logical volume info

#lvdisplay     (or)     #lvs

—-> To create one more logical volume

#lvcreate -L 3000 -n mp india

—-> Using the lvrename cmd, you can change the name of a logical volume

#lvrename /dev/india/mp              /dev/india/up

—-> Verify with the following cmd

#lvs


Adjusting the size of LVM partitions:

The single best feature of LVM is that you can reduce or expand your logical volumes and volume groups. If you are running out of room on a particular logical volume or volume group, you can add another physical volume to the volume group and then expand the logical volume to give yourself more room.

Step 1): Add 2GB more to the ap logical volume

#lvextend -L +2000 /dev/india/ap      (the + grows the volume by 2000MB; without the +, -L sets the absolute size)

—-> Verify the change with the following cmd:

#lvdisplay india

Step 2): To decrease a logical volume

#lvreduce -L -2000 /dev/india/ap      (shrink by 2000MB)

(or)

#lvresize -L 2000 /dev/india/ap       (resize to an absolute 2000MB)

Step 3): —-> Suppose, though, that you want to add a new physical volume so that you can extend your volume group.

—-> Create a new physical volume:

#pvcreate /dev/sda15

—-> Now extend your volume group to incorporate that new physical volume

#vgextend india /dev/sda15

—-> Now verify the details of the newly increased VG:

#vgdisplay -v india

Step 4): To reduce the volume group so it no longer includes the physical volume /dev/sda15, you can use the vgreduce cmd:

#vgreduce india /dev/sda15

—-> Now verify the expansion or reduction of the volume group

#vgdisplay india


Migrating Data:

Suppose you have a drive that is old or dying and you'd like to remove it from the system. On a system with normal partitions, you would have to copy all the data from one disk to another while the disk is offline (because of file locks). Having LVM makes this easier because you can migrate your data from one disk to another, even while the disk is online! This capability is very useful when you need to replace a disk.

—-> If you want to replace /dev/sda14 because it's failing, you can use the pvmove cmd to migrate the physical extents (which are really your data) to another physical volume (/dev/sda15).


Step 1): To create a physical volume

#pvcreate /dev/sda15

Step 2): You need to add /dev/sda15 to the VG:

#vgextend india /dev/sda15

Step 3): Also create a logical volume to hold the migrated data:

#lvcreate -l 3000 -n ban india

Step 4:  Verify all logical volumes are in place

#lvdisplay india

Step 5: Migrate the data from the “dying” drive

#pvmove /dev/sda14 /dev/sda15

Note:     Make sure that you have more than one physical volume; otherwise there will be nowhere for the data to move.

Step 6): Verify that the physical volume is empty

#pvdisplay /dev/sda14


Deleting an LVM partition:

It is just as important to understand how to delete LVM partitions as it is to create them. This is a common task when you are upgrading or redesigning a file system layout.

Step 1: To remove a logical volume

#lvremove           /dev/india/ap

*  —-> Although this advice should be common sense, make sure you back up any data before deleting anything within the LVM structure.

Step 2): —-> To remove the volume group

#vgremove india

—-> You can also do both steps in one cmd by using the -f option

#vgremove -f india

Step 3:  Wipe all the current physical volumes:

#pvremove          /dev/sda10

#pvremove          /dev/sda11

Note: Use the resize2fs cmd to extend the file system. Before extending the file system, however, you should always ensure the integrity of the file system with the e2fsck cmd.

Step 1: Syn:         e2fsck   [options]             [device]

-p    —->         automatically repairs (no questions)

-n     —->        Makes no changes to the file system

-y      —->        Assumes “yes” to all questions

-f     —->         Force checking of the file system

-v      —->        provides verbose output

—-> Check the file system

#e2fsck -f /dev/india/ap

Step 2:  

Syn:        resize2fs              [options]             [device]

-p     —->        prints percentage as task completes

-f       —->       Force the cmd to proceed

—-> Extend the underlying logical volume:

#lvextend -l 3000 /dev/india/ap

—-> Now you can extend the file system

#resize2fs -p /dev/india/ap

Step 3:  

—-> Now that your maintenance is complete, remount the file system:

#mount /dev/india/ap /mnt

—-> You can use the mount cmd to verify it mounted successfully:


—-> Now you can use the df cmd to view the usage information for your file systems. This should also reflect the additional space that you just added to the /mnt file system.

Syn:        df [options]

-h      —->       specifies human-readable format

-T      —->       prints the file system type

#df -h

#df -Th

RAID:  Now let's move on to the final type of advanced partitioning: RAID.

—-> RAID means Redundant Array of Independent Disks

—->RAID partitions allow for more advanced features such as redundancy and better performance.

—->Mainly we implement the Raid in order to increase the storage capacity along with data security.

—-> There are two types of RAID:

  • Hardware RAID
  • Software RAID

—->While RAID can be implemented at the hardware level, the Red Hat exams are not hardware based and therefore focus on the software implementation of RAID through the MD driver.

—->Before we describe how to implement RAID, let’s look at the different types of RAID.

RAID 0: (Striping)              Disks are grouped together to form one large drive. This offers better performance at the cost of availability. Should any single disk in the RAID fail, the entire set of disks becomes unusable.

  • Minimum 2, maximum 32 hard disks
  • Data is written alternately across the disks
  • No fault tolerance
  • Read & write speed is fast

RAID 1: (Mirroring)           Disks are copied from one to another, allowing for redundancy. Should one disk fail, the other disk takes over, having an exact copy of the data from the original disk.

  • Minimum 2, maximum 32 hard disks
  • Data is written simultaneously to both disks
  • Fault tolerance available
  • Read is fast, write is slower

RAID 5: (Striping with parity) Similar to RAID 0, disks are joined together to form one large drive. The difference here is that a portion of each disk (one disk's worth of capacity in total; 25% in a four-disk array) is used for parity, which allows the data to be recovered should a single disk fail.

  • Minimum 3, maximum 32 hard disks
  • Data is written alternately across the disks
  • Parity is distributed across all disks
  • Read & write speed is fast
  • Fault tolerance is available
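Since RAID 5's parity consumes one disk's worth of capacity, the usable fraction grows with the number of disks: 25% overhead is the four-disk case, while a three-disk array loses a third. A quick illustration (the 100GB disk size is hypothetical):

```shell
# RAID 5: usable capacity = (n - 1) disks; parity consumes 1/n of the set.
disk_gb=100
for n in 3 4 5; do
    usable=$(( (n - 1) * disk_gb ))
    parity_pct=$(( 100 / n ))
    echo "n=$n disks: usable=${usable}GB, parity overhead=${parity_pct}%"
done
```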

Implementation of RAID 5:

Step 1): Install the following package

#yum install -y mdadm

Step 2): Verify the install

#rpm -qa | grep mdadm

—-> To start, you first need to create partitions on the disks you want to use. You start with a RAID 5 setup, so you need to make partitions on at least three different disks.

Creating a RAID Array:

—-> Create three partitions:              #fdisk /dev/sda

—-> To verify when you're done:      #fdisk -l

—->Now you can begin to set up the RAID 5 array with the three partitions

Step 1: Syn: mdadm       [options]

-a  —->         Add a disk into a current array

-C  —->          creates a new RAID array

-D  —->         prints the details of array

-f   —->         fails a disk in the array

-l   —->          specifies the level of RAID array to create

-n  —->         specifies the devices in the RAID array.

-S   —->         stops an array

-A  —->         Activate an array

-V  —->         provides verbose output.

#mdadm -Cv /dev/md0 -l5 -n3 /dev/sda1 /dev/sdb1 /dev/sdc1

Step 2): Again, to verify that the RAID array has been created successfully

#mdadm -D /dev/md0

Step 3: View the status of the newly created RAID array:

#cat /proc/mdstat

This output shows that you have an active RAID 5 array with three disks in it. The last few lines show the state of each disk and partition in the RAID array. You can also see that the RAID is in "recovery" mode, that is, creating itself.

Step 4: If you wait the estimated 2.9 minutes and then query again, you see the following

#cat       /proc/mdstat

You now see that the RAID is good to go as it has finished building itself
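Because /proc/mdstat is plain text, the array state can be checked from a script with grep. The heredoc string below is a hand-written sample shaped like mdstat output for a healthy three-disk RAID 5 (device names match this chapter); on a real system you would read /proc/mdstat itself:

```shell
# Hand-written sample shaped like /proc/mdstat for a healthy RAID 5.
mdstat='Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[2] sdb1[1] sda1[0]
      1953024 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]'

# [UUU] means all three members are up; a failed member shows as "_".
echo "$mdstat" | grep -q '\[UUU\]' && echo "array healthy"
```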

What to Do when a disk fails:

Suppose that a disk in the array failed. In that case, you need to remove that disk from the array and replace it with a working one.

Step 1): Manually fail a disk in the array

#mdadm /dev/md0 -f /dev/sdc1

Step 2): Verify that the disk in the array has failed

#mdadm -D /dev/md0

Step 3): To remove the disk from the array

#mdadm /dev/md0 -r /dev/sdc1

Step 4): Look at the last few lines of the RAID details again

#mdadm -D /dev/md0

—-> If you want, you could combine the previous two commands

#mdadm -v /dev/md0 -f /dev/sdc1 -r /dev/sdc1

Step 5): When the new disk is partitioned, you can add it back to the array

#mdadm /dev/md0 -a /dev/sdd1

—-> Verify that it has been added properly

#mdadm -D /dev/md0

Step 6): Query the kernel

#cat /proc/mdstat

Step 7): Should something go seriously wrong and you need to take the RAID array offline completely

#mdadm -vS /dev/md0

Deleting a RAID Array:

Step 1): To delete an array, first stop it

#mdadm -vS /dev/md0

Step 2): Then remove the RAID array device

#mdadm -r /dev/md0
