Linux Important Topics :
Why use RAID?
With the increasing demand for storage worldwide, the prime concern for organizations is moving towards the security of their data. When I use the term security here, it does not mean security from vulnerable attacks, but rather from hard disk failures and similar accidents which can lead to the destruction of data. In those scenarios RAID plays its magic by giving you redundancy and an opportunity to get all your data back in a very short time.
Levels
As new technologies were introduced, new RAID levels started coming into the picture with various improvements, giving organizations the opportunity to select the RAID model that fits their work requirements.
Here I will give you a brief introduction to some of the main RAID levels used in various organizations.
RAID 0
This level stripes the data equally across all available drives, giving very high read and write performance but offering no fault tolerance or redundancy. It does not provide the redundancy RAID is usually chosen for, so it cannot be considered by an organization looking for redundancy; instead it is preferred where high performance is required.
Calculation:
Formula: n * size of disk (where n is the no. of disk)
No. of Disk: 5
Size of each disk: 100GB
Usable Disk size: 500GB
Pros:
- Data is striped across multiple drives
- Disk space is fully utilized
- Minimum 2 drives required
- High performance
Cons:
- No support for data redundancy
- No support for fault tolerance
- No error detection mechanism
- Failure of any disk results in complete data loss in the respective array
RAID 1
This level mirrors the data on drive 1 to drive 2. It offers 100% redundancy, as the array will continue to work even if either disk fails. So an organization looking for better redundancy can opt for this solution, but again, cost can become a factor.
Calculation:
No. of Disk: 2
Size of each disk: 100GB
Usable Disk size: 100GB
Pros:
- Performs mirroring of data, i.e. identical data from one drive is written to another drive for redundancy
- High read speed, as either disk can be used if the other is busy
- Array will function even if any one of the drives fails
- Minimum 2 drives required
Cons:
- Expense is higher (1 extra drive required per drive for mirroring)
- Slow write performance, as all drives have to be updated
RAID 2
This level uses bit-level data striping rather than block level. To be able to use RAID 2, make sure the selected disks have no self error-checking mechanism, as this level uses an external Hamming code for error detection. This is one of the reasons RAID 2 barely exists in the real IT world, as most disks used these days come with built-in error detection. It uses an extra disk for storing all the parity information.
Calculation:
Formula: (n-1) * size of disk (where n is the no. of disk)
No. of Disk: 3
Size of each disk: 100GB
Usable Disk size: 200GB
Pros:
- Bit-level striping with parity
- One designated drive is used to store parity
- Uses Hamming code for error detection
Cons:
- Only useful with drives that have no built-in error detection mechanism
- These days all SCSI drives have built-in error detection
- Additional drives required for error detection
RAID 3
This level uses byte-level striping along with parity. One dedicated drive is used to store the parity information, and in case of any drive failure the data is regenerated using this parity drive. But if the parity drive itself crashes, the redundancy is lost, so this level is not much considered in organizations.
Calculation:
Formula: (n-1) * size of disk (where n is the no. of disk)
No. of Disk: 3
Size of each disk: 100GB
Usable Disk size: 200GB
Pros:
- Byte-level striping with parity
- One designated drive is used to store parity
- Data is regenerated using the parity drive
- Data is accessed in parallel
- High data transfer rates (for large files)
- Minimum 3 drives required
Cons:
- Additional drive required for parity
- No redundancy if the parity drive crashes
- Slow performance when operating on small files
RAID 4
This level is very similar to RAID 3, except that RAID 4 uses block-level striping rather than byte level.
Calculation:
Formula: (n-1) * size of disk (where n is the no. of disk)
No. of Disk: 3
Size of each disk: 100GB
Usable Disk size: 200GB
Pros:
- Block-level striping along with dedicated parity
- One designated drive is used to store parity
- Data is accessed independently
- Minimum 3 drives required
- High read performance, since data is accessed independently
Cons:
- Performance degrades, since only one block is accessed at a time
- Additional drive required for parity
- Write operations become slow, as the parity has to be updated on every write
RAID 5
This level uses block-level striping, and with it the distributed parity concept came into the picture, leaving behind the traditional dedicated parity used in RAID 3 and RAID 4. Parity information is written to a different disk in the array for each stripe. In case of a single disk failure, data can be recovered with the help of the distributed parity without interrupting the other read and write operations.
Calculation:
Formula: (n-1) * size of disk (where n is the no. of disk)
No. of Disk: 4
Size of each disk: 100GB
Usable Disk size: 300GB
Pros:
- Block-level striping with DISTRIBUTED parity
- Parity is distributed across the disks in the array
- High performance
- Cost effective
- Minimum 3 drives required
Cons:
- In case of disk failure, recovery may take a long time, as parity has to be calculated from all remaining drives
- Cannot survive two concurrent drive failures
RAID 6
This level is an enhanced version of RAID 5, adding the extra benefit of dual parity. It uses block-level striping with DUAL distributed parity, giving you extra redundancy. Imagine you are using RAID 5 and one of your disks fails: you need to hurry to replace the failed disk, because if another disk fails at the same time you won't be able to recover any of the data. For those situations RAID 6 plays its part, letting you survive 2 concurrent disk failures before you run out of options.
Calculation:
Formula: (n-2) * size of disk (where n is the no. of disk)
No. of Disk: 4
Size of each disk: 100GB
Usable Disk size: 200GB
Pros:
- Block-level striping with DUAL distributed parity
- 2 parity blocks are created per stripe
- Can survive 2 concurrent drive failures in an array
- Extra fault tolerance and redundancy
- Minimum 4 drives required
Cons:
- Cost can become a factor
- Writing data takes longer due to dual parity
RAID 0+1
This level uses RAID 0 and RAID 1 together to provide redundancy. Striping of data is performed before mirroring. In this level the overall usable capacity is reduced compared to other RAID levels. The array can sustain more than one drive failure only as long as all the failed drives belong to the same striped set.
NOTE: The number of drives used should always be a multiple of 2
Calculation:
Formula: n/2 * size of disk (where n is the no. of disk)
No. of Disk: 8
Size of each disk: 100GB
Usable Disk size: 400GB
Pros:
- No parity generation
- Performs RAID 0 to stripe data and RAID 1 to mirror it
- Striping is performed before mirroring
- Usable capacity is n/2 * size of disk (n = no. of disks)
- Drives required should be a multiple of 2
- High performance, as data is striped
Cons:
- Costly, as an extra drive is required for each drive
- 100% of disk capacity is not utilized, as half is used for mirroring
- Very limited scalability
RAID 1+0 (RAID 10)
This level mirrors the data prior to striping, which makes it much more efficient and redundant compared to RAID 0+1. It can survive multiple simultaneous drive failures, as long as no mirrored pair loses both of its drives. It can be used in organizations where high performance and security are required. In terms of fault tolerance and rebuild performance it is better than RAID 0+1.
NOTE: The number of drives used should always be a multiple of 2
Calculation:
Formula: n/2 * size of disk (where n is the no. of disk)
No. of Disk: 8
Size of each disk: 100GB
Usable Disk size: 400GB
Pros:
- No parity generation
- Performs RAID 1 to mirror data and RAID 0 to stripe it
- Mirroring is performed before striping
- Drives required should be a multiple of 2
- Usable capacity is n/2 * size of disk (n = no. of disks)
- Better fault tolerance than RAID 0+1
- Better redundancy and faster rebuild than RAID 0+1
- Can sustain multiple drive failures
Cons:
- Very expensive
- Limited scalability
RAID :
RAID stands for Redundant Array of Inexpensive (Independent) Disks.
In most situations you will be using one of the following four RAID levels.
- RAID 0
- RAID 1
- RAID 5
- RAID 10 (also known as RAID 1+0)
RAID Level 0 :
Use RAID0 when you need performance but the data is not important.
In a RAID0, the data is divided into blocks, and blocks are written to disks in turn.
RAID0 provides the most speed improvement, especially for write speed, because read and write requests are evenly distributed across all the disks in the array. Note that RAID1, Mirror, can provide the same improvement with reads but not writes. So if the request comes for, say, blocks 1, 2, and 3, each block is read from its own disk. Thus, the data is read three times faster than from a single disk.
However, RAID0 provides no fault tolerance at all. Should any of the disks in the array fail, the entire array fails and all the data is lost.
RAID0 solutions are cheap, and RAID0 uses all the disk capacity.
Following are the key points to remember for RAID level 0.
- Minimum 2 disks.
- Excellent performance ( as blocks are striped ).
- No redundancy ( no mirror, no parity ).
- Don’t use this for any critical system.
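As a quick illustration, a software RAID 0 array can be created with mdadm much like the RAID 5 walkthrough later in this document. This is only a sketch, assuming /dev/sdb and /dev/sdc are spare disks; the device names will differ on your system.
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
# mkfs.ext4 /dev/md0
# mkdir /mnt/raid0
# mount /dev/md0 /mnt/raid0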
Advantages :
RAID 0 offers great performance, both in read and write operations. There is no overhead caused by parity controls.
All storage capacity is used, there is no overhead.
The technology is easy to implement.
Disadvantages :
RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0 array are lost. It should not be used for mission-critical systems.
RAID1 :
Use mirroring when you need reliable storage of relatively small capacity.
Mirroring (RAID1) stores two identical copies of data on two hard drives. Should one of the drives fail, all the data can be read from the other drive. Mirroring does not use blocks and stripes.
Read speed can be improved in certain implementations, because read requests are sent to two drives in turn. Similar to RAID0, this should increase speed by the factor of two. However, not all implementations take advantage of this technique.
Write speed on RAID1 is the same as the write speed of a single disk, because all the copies of the data must be updated.
RAID1 uses the capacity of one of its drives to maintain fault tolerance. This amounts to a 50% capacity loss for the array. E.g. if you combine two 500GB drives in RAID1, you'd only get 500GB of usable disk space.
If a RAID1 controller fails, you do not need to recover either the array configuration or the data from it. To get the data, just connect either of the drives to a known-good computer.
Following are the key points to remember for RAID level 1.
- Minimum 2 disks.
- Good performance ( no striping, no parity ).
- Excellent redundancy ( as blocks are mirrored ).
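For illustration, a two-disk mirror can be created and a failed member replaced with mdadm. This is only a sketch; /dev/sdb, /dev/sdc and /dev/sdd are assumed spare disks and will differ on your system.
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# mdadm /dev/md1 --fail /dev/sdb
# mdadm /dev/md1 --remove /dev/sdb
# mdadm /dev/md1 --add /dev/sdd
# cat /proc/mdstat
The --fail step only simulates a failure; in real life you would remove the dead disk, add its replacement, and watch /proc/mdstat while the mirror resynchronizes.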
Advantages :
RAID 1 offers excellent read speed and a write-speed that is comparable to that of a single drive.
In case a drive fails, data do not have to be rebuilt, they just have to be copied to the replacement drive.
RAID 1 is a very simple technology.
Disadvantages :
The main disadvantage is that the effective storage capacity is only half of the total drive capacity because all data get written twice.
Software RAID 1 solutions do not always allow a hot swap of a failed drive. That means the failed drive can only be replaced after powering down the computer it is attached to. For servers that are used simultaneously by many people, this may not be acceptable. Such systems typically use hardware controllers that do support hot swapping.
RAID 5 :
RAID5 fits as large, reliable, relatively cheap storage.
RAID5 writes data blocks evenly to all the disks, in a pattern similar to RAID0. However, one additional "parity" block is written in each row. This additional parity, derived from all the data blocks in the row, provides redundancy. If one of the drives fails and thus one block in the row is unreadable, the contents of this block can be reconstructed using parity data together with all the remaining data blocks.
If all drives are OK, read requests are distributed evenly across drives, providing read speed similar to that of RAID0. For N disks in the array, RAID0 provides N times faster reads and RAID5 provides (N-1) times faster reads. If one of the drives has failed, the read speed degrades to that of a single drive, because all blocks in a row are required to serve the request.
Write speed of a RAID5 is limited by the parity updates. For each written block, its corresponding parity block has to be read, updated, and then written back. Thus, there is no significant write speed improvement on RAID5, if any at all.
The capacity of one member drive is used to maintain fault tolerance. E.g. if you have 10 drives 1TB each, the resulting RAID5 capacity would be 9TB.
If a RAID5 controller fails, you can still recover data from the array with RAID 5 recovery software. Unlike RAID0, RAID5 is redundant and it can survive one member disk failure.
While the basic layout might seem simple enough, there is a variety of different layouts in practical use. Left/right and synchronous/asynchronous parity rotation produce four possible combinations. Further complicating the issue, certain controllers implement delayed parity.
Following are the key points to remember for RAID level 5.
- Minimum 3 disks.
- Good performance ( as blocks are striped ).
- Good redundancy ( distributed parity ).
- Best cost-effective option providing both performance and redundancy. Use this for databases that are heavily read-oriented. Write operations will be slow.
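When a member of a RAID 5 array does fail, the replacement disk is added back with mdadm and the array rebuilds it from parity. A minimal sketch, assuming /dev/md0 is the array from the mdadm walkthrough later in this document and /dev/sde is a hypothetical replacement disk:
# mdadm /dev/md0 --add /dev/sde
# cat /proc/mdstat
/proc/mdstat shows the rebuild progress; the array stays usable (in degraded mode) while it rebuilds.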
Advantages :
Read data transactions are very fast while write data transactions are somewhat slower (due to the parity that has to be calculated).
If a drive fails, you still have access to all data, even while the failed drive is being replaced and the storage controller rebuilds the data on the new drive.
Disadvantages :
Drive failures have an effect on throughput, although this is still acceptable.
This is complex technology. If one of the disks in an array using 4TB disks fails and is replaced, restoring the data (the rebuild time) may take a day or longer, depending on the load on the array and the speed of the controller. If another disk goes bad during that time, data are lost forever.
RAID6 :
RAID6 is a large, highly reliable, relatively expensive storage.
RAID6 uses a block pattern similar to RAID5, but utilizes two different parity functions to derive two different parity blocks per row. If one of the drives fails, its contents are reconstructed using one set of parity data. If another drive fails before the array is recovered, the contents of the two missing drives are reconstructed by combining the remaining data and two sets of parity.
Read speed of the N-disk RAID6 is (N-2) times faster than the speed of a single drive, similar to RAID levels 0 and 5. If one or two drives fail in RAID6, the read speed degrades significantly because a reconstruction of missing blocks requires an entire row to be read.
There is no significant write speed improvement in RAID6 layout. RAID6 parity updates require even more processing than that in RAID5.
The capacity of two member drives is used to maintain fault tolerance. For an array of 10 drives 1TB each, the resulting RAID6 capacity would be 8TB.
The recovery of a RAID6 array after a controller failure is fairly complicated.
RAID 6 is like RAID 5, but the parity data are written to two drives. That means it requires at least 4 drives and can withstand 2 drives dying simultaneously. The chances that two drives break down at exactly the same moment are of course very small. However, if a drive in a RAID 5 system dies and is replaced by a new drive, it takes hours or even more than a day to rebuild the swapped drive. If another drive dies during that time, you still lose all of your data. With RAID 6, the RAID array will even survive that second failure.
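For illustration, a four-disk RAID 6 can be created with mdadm. This is only a sketch; the device names /dev/sdb through /dev/sde are assumptions.
# mdadm --create /dev/md6 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mdadm --detail /dev/md6
With four 1TB members this yields roughly 2TB of usable space, matching the (n-2) rule described above.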
RAID 10 :
RAID10 is a large, fast, reliable, but expensive storage.
RAID10 uses two identical RAID0 arrays to hold two identical copies of the content.
Read speed of the N-drive RAID10 array is N times faster than that of a single drive. Each drive can read its block of data independently, same as in RAID0 of N disks.
Writes are two times slower than reads, because both copies have to be updated. As far as writes are concerned, RAID10 of N disks is the same as RAID0 of N/2 disks.
Half the array capacity is used to maintain fault tolerance. In RAID10, the overhead increases with the number of disks, contrary to RAID levels 5 and 6, where the overhead is the same for any number of disks. This makes RAID10 the most expensive RAID type when scaled to large capacity.
If there is a controller failure in a RAID10, any subset of the drives forming a complete RAID0 can be recovered in the same way the RAID0 is recovered.
Similarly to RAID 5, several variations of the layout are possible in implementations.
- Minimum 4 disks.
- This is also called a “stripe of mirrors”
- Excellent redundancy ( as blocks are mirrored )
- Excellent performance ( as blocks are striped )
- If you can afford the dollar, this is the BEST option for any mission critical applications (especially databases).
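A minimal sketch of creating a four-disk RAID 10 with mdadm (again, the device names are assumptions and must match spare disks on your system):
# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mkfs.ext4 /dev/md10
Four 1TB members give roughly 2TB of usable space, i.e. n/2 * size of disk as described earlier.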
It is possible to combine the advantages (and disadvantages) of RAID 0 and RAID 1 in one single system. This is a nested or hybrid RAID configuration. It provides security by mirroring all data on secondary drives while using striping across each set of drives to speed up data transfers.
Advantages :
If something goes wrong with one of the disks in a RAID 10 configuration, the rebuild time is very fast since all that is needed is copying all the data from the surviving mirror to a new drive. This can take as little as 30 minutes for drives of 1 TB.
Disadvantages :
Half of the storage capacity goes to mirroring, so compared to large RAID 5 or RAID 6 arrays, this is an expensive way to have redundancy.
Create RAID partitions in Linux using the 'mdadm' package
Steps :-
1. Install the mdadm package
#yum install mdadm*
2. Check the existing disks and whether any RAID is already enabled on them
#mdadm -E /dev/sd[b-d]
#mdadm -E /dev/sdb /dev/sdd
3. Partition the new disks for RAID. Create the RAID device based on the RAID level; for RAID 5, 3 disks are required.
#fdisk /dev/sdb
- Press 'n' to create a new partition.
- Then choose 'P' for a primary partition. We are choosing primary because there are no partitions defined yet.
- Then choose '1' as the first partition number. By default it will be 1.
- For the cylinder size we do not have to specify anything, because we need the whole disk for RAID, so just press Enter twice to accept the default full size.
- Next press 'p' to print the created partition.
- Change the type. To list all the available types, press 'L'.
- Here we select 'fd', as the type needed is Linux RAID autodetect.
- Press 'p' again to print the partition and verify the changes we have made.
- Use 'w' to write the changes.
Create the RAID device md0
#mdadm -C /dev/md0 -l 5 -n 3 /dev/sd[b-d]
or
#mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
After creating the RAID device, check the RAID status
#cat /proc/mdstat
#mdadm --detail /dev/md0
Create the file system on md0
#mkfs.ext4 /dev/md0
Mount the raid partition
# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5/
# ls -l /mnt/raid5/
Add an entry for it in /etc/fstab so it is mounted automatically at boot (see the example below)
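A minimal /etc/fstab line for this array might look like the following; the mount point and file system type follow the steps above, and using the UUID reported by blkid instead of /dev/md0 is a more robust choice if the array name can change between boots.
/dev/md0    /mnt/raid5    ext4    defaults    0 0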
Save the RAID configuration
#mdadm --detail --scan --verbose >> /etc/mdadm.conf
Linux Directory Structure :
Have you wondered why certain programs are located under /bin, or /sbin, or /usr/bin, or /usr/sbin?
For example, the less command is located under the /usr/bin directory. Why not /bin, or /sbin, or /usr/sbin? What is the difference between all these directories?
In this article, let us review the Linux filesystem structures and understand the meaning of individual high-level directories.
1. / – Root
- Every single file and directory starts from the root directory.
- Only root user has write privilege under this directory.
- Please note that /root is root user’s home directory, which is not same as /.
2. /bin – User Binaries
- Contains binary executables.
- Common linux commands you need to use in single-user modes are located under this directory.
- Commands used by all the users of the system are located here.
- For example: ps, ls, ping, grep, cp.
3. /sbin – System Binaries
- Just like /bin, /sbin also contains binary executables.
- But, the linux commands located under this directory are used typically by the system administrator, for system maintenance purposes.
- For example: iptables, reboot, fdisk, ifconfig, swapon
4. /etc – Configuration Files
- Contains configuration files required by all programs.
- This also contains startup and shutdown shell scripts used to start/stop individual programs.
- For example: /etc/resolv.conf, /etc/logrotate.conf
5. /dev – Device Files
- Contains device files.
- These include terminal devices, usb, or any device attached to the system.
- For example: /dev/tty1, /dev/usbmon0
6. /proc – Process Information
- Contains information about system processes.
- This is a pseudo filesystem that contains information about running processes. For example: the /proc/{pid} directory contains information about the process with that particular pid.
- This is a virtual filesystem with text information about system resources. For example: /proc/uptime
7. /var – Variable Files
- var stands for variable files.
- Content of the files that are expected to grow can be found under this directory.
- This includes — system log files (/var/log); packages and database files (/var/lib); emails (/var/mail); print queues (/var/spool); lock files (/var/lock); temp files needed across reboots (/var/tmp);
8. /tmp – Temporary Files
- Directory that contains temporary files created by system and users.
- Files under this directory are deleted when system is rebooted.
9. /usr – User Programs
- Contains binaries, libraries, documentation, and source-code for second level programs.
- /usr/bin contains binary files for user programs. If you can’t find a user binary under /bin, look under /usr/bin. For example: at, awk, cc, less, scp
- /usr/sbin contains binary files for system administrators. If you can’t find a system binary under /sbin, look under /usr/sbin. For example: atd, cron, sshd, useradd, userdel
- /usr/lib contains libraries for /usr/bin and /usr/sbin
- /usr/local contains user programs that you install from source. For example, when you install apache from source, it goes under /usr/local/apache2
10. /home – Home Directories
- Home directories for all users to store their personal files.
- For example: /home/john, /home/nikita
11. /boot – Boot Loader Files
- Contains boot loader related files.
- Kernel initrd, vmlinuz, and GRUB files are located under /boot
- For example: initrd.img-2.6.32-24-generic, vmlinuz-2.6.32-24-generic
12. /lib – System Libraries
- Contains library files that supports the binaries located under /bin and /sbin
- Library filenames are either ld* or lib*.so.*
- For example: ld-2.11.1.so, libncurses.so.5.7
13. /opt – Optional add-on Applications
- opt stands for optional.
- Contains add-on applications from individual vendors.
- Add-on applications should be installed under either /opt/ or an /opt/ sub-directory.
14. /mnt – Mount Directory
- Temporary mount directory where sysadmins can mount filesystems.
15. /media – Removable Media Devices
- Temporary mount directory for removable devices.
- For example: /media/cdrom for CD-ROM; /media/floppy for floppy drives; /media/cdrecorder for CD writer
16. /srv – Service Data
- srv stands for service.
- Contains data related to server-specific services.
- For example, /srv/cvs contains CVS related data.
File System Types :
Journaling :
One thing you’ll notice when choosing between file systems is that some of them are marked as a “journaling” file system and some aren’t. This is important.
Journaling is designed to prevent data corruption from crashes and sudden power loss. Let’s say your system is partway through writing a file to the disk and it suddenly loses power. Without a journal, your computer would have no idea if the file was completely written to disk. The file would remain there on disk, corrupt.
With a journal, your computer would note that it was going to write a certain file to disk in the journal, write that file to disk, and then remove that job from the journal. If the power went out partway through writing the file, Linux would check the file system’s journal when it boots up and resume any partially completed jobs. This prevents data loss and file corruption.
Journaling does slow disk write performance down a tiny bit, but it’s well-worth it on a desktop or laptop. It’s not as much overhead as you might think. The full file isn’t written to the journal. Instead, only the file metadata, inode, or disk location is recorded in the journal before it’s written to disk.
Every modern file system supports journaling, and you’ll want to use a file system that supports journaling when setting up a desktop or laptop.
File systems that don’t offer journaling are available for use on high-performance servers and other such systems where the administrator wants to squeeze out extra performance. They’re also ideal for removable flash drives, where you don’t want the higher overhead and additional writes of journaling.
- Ext stands for “Extended file system”, and was the first created specifically for Linux. It’s had four major revisions. “Ext” is the first version of the file system, introduced in 1992. It was a major upgrade from the Minix file system used at the time, but lacks important features. Many Linux distributions no longer support Ext.
- Ext2 is not a journaling file system. When introduced, it was the first file system to support extended file attributes and 2 terabyte drives. Ext2’s lack of a journal means it writes to disk less, which makes it useful for flash memory like USB drives. However, file systems like exFAT and FAT32 also don’t use journaling and are more compatible with different operating systems, so we recommend you avoid Ext2 unless you know you need it for some reason.
- Ext3 is basically just Ext2 with journaling. Ext3 was designed to be backwards compatible with Ext2, allowing partitions to be converted between Ext2 and Ext3 without any formatting required. It’s been around longer than Ext4, but Ext4 has been around since 2008 and is widely tested. At this point, you’re better off using Ext4.
- Ext4 was also designed to be backwards compatible. You can mount an Ext4 file system as Ext3, or mount an Ext2 or Ext3 file system as Ext4. It includes newer features that reduce file fragmentation, allows for larger volumes and files, and uses delayed allocation to improve flash memory life. This is the most modern version of the Ext file system and is the default on most Linux distributions.
- XFS was developed by Silicon Graphics in 1994 for the SGI IRIX operating system, and was ported to Linux in 2001. It’s similar to Ext4 in some ways, as it also uses delayed allocation to help with file fragmentation and does not allow for mounted snapshots. It can be enlarged, but not shrunk, on the fly. XFS has good performance when dealing with large files, but has worse performance than other file systems when dealing with many small files. It may be useful for certain types of servers that primarily need to deal with large files.
- JFS, or “Journaled File System”, was developed by IBM for the IBM AIX operating system in 1990 and later ported to Linux. It boasts low CPU usage and good performance for both large and small files. JFS partitions can be dynamically resized, but not shrunk. It was extremely well planned and has support in most every major distribution, however its production testing on Linux servers isn’t as extensive as Ext, as it was designed for AIX. Ext4 is more commonly used and is more widely tested.
- Swap is an option when formatting a drive, but isn’t an actual file system. It’s used as virtual memory and doesn’t have a file system structure. You can’t mount it to view its contents. Swap is used as “scratch space” by the Linux kernel to temporarily store data that can’t fit in RAM. It’s also used for hibernating. While Windows stores its paging file as a file on its main system partition, Linux just reserves a separate empty partition for swap space.
- FAT16, FAT32, and exFAT: Microsoft’s FAT file systems are often an option when formatting a drive in Linux. These file systems don’t include a journal, so they’re ideal for external USB drives. They’re a de facto standard that every operating system—Windows, macOS, Linux, and other devices—can read. This makes them the ideal file system to use when formatting an external drive you’ll want to use with other operating systems. FAT32 is older. exFAT is the ideal option, as it supports files over 4 GB in size and partitions over 8 TB in size, unlike FAT32.
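For example, a removable drive can be formatted from the shell roughly as follows. This is a sketch: /dev/sdc1 is an assumed partition name for the USB drive (check with lsblk or fdisk -l first, since formatting destroys its data), and mkfs.vfat / mkfs.exfat require the dosfstools and exfat utility packages respectively.
# mkfs.vfat -F 32 /dev/sdc1
# mkfs.exfat /dev/sdc1
The first command creates a FAT32 file system, the second an exFAT one.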
Ext2
- Ext2 stands for second extended file system.
- It was introduced in 1993. Developed by Rémy Card.
- This was developed to overcome the limitation of the original ext file system.
- Ext2 does not have journaling feature.
- On flash drives and USB drives, ext2 is recommended, as it doesn’t have the overhead of journaling.
- Maximum individual file size can be from 16 GB to 2 TB
- Overall ext2 file system size can be from 2 TB to 32 TB
Ext3
- Ext3 stands for third extended file system.
- It was introduced in 2001. Developed by Stephen Tweedie.
- Starting from Linux Kernel 2.4.15 ext3 was available.
- The main benefit of ext3 is that it allows journaling.
- Journaling has a dedicated area in the file system, where all the changes are tracked. When the system crashes, the possibility of file system corruption is less because of journaling.
- Maximum individual file size can be from 16 GB to 2 TB
- Overall ext3 file system size can be from 2 TB to 32 TB
- There are three types of journaling available in ext3 file system.
- Journal – Metadata and content are saved in the journal.
- Ordered – Only metadata is saved in the journal. Metadata are journaled only after writing the content to disk. This is the default.
- Writeback – Only metadata is saved in the journal. Metadata might be journaled either before or after the content is written to the disk.
- You can convert an ext2 file system to ext3 directly (without backup/restore), as shown in the example below.
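A minimal sketch of that conversion using tune2fs, assuming the file system lives on /dev/sdb1 and is unmounted (taking a backup first is still a good idea):
# umount /dev/sdb1
# tune2fs -j /dev/sdb1
# e2fsck -f /dev/sdb1
The -j option adds a journal to the existing ext2 file system; afterwards, change the corresponding /etc/fstab entry from ext2 to ext3.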
Ext4
- Ext4 stands for fourth extended file system.
- It was introduced in 2008.
- Starting from Linux Kernel 2.6.19 ext4 was available.
- Supports huge individual file size and overall file system size.
- Maximum individual file size can be from 16 GB to 16 TB
- Overall maximum ext4 file system size is 1 EB (exabyte). 1 EB = 1024 PB (petabyte). 1 PB = 1024 TB (terabyte).
- Directory can contain a maximum of 64,000 subdirectories (as opposed to 32,000 in ext3)
- You can also mount an existing ext3 fs as ext4 fs (without having to upgrade it).
- Several other new features are introduced in ext4: multiblock allocation, delayed allocation, journal checksums, fast fsck, etc. All you need to know is that these new features have improved the performance and reliability of the filesystem when compared to ext3.
- In ext4, you also have the option of turning the journaling feature off (see the example below).
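As an illustration only, journaling can be disabled on an existing ext4 file system with tune2fs, assuming it is on /dev/sdb1 and currently unmounted:
# tune2fs -O ^has_journal /dev/sdb1
# e2fsck -f /dev/sdb1
Running e2fsck afterwards verifies the file system; the journal can be re-added later with tune2fs -O has_journal.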
Boot Process in Linux :
The stages involved in Linux Booting Process are:
BIOS
Boot Loader
- MBR
- GRUB
Kernel
Init
Runlevel scripts
BIOS
- This is the first thing which loads once you power on your machine.
- When you press the power button of the machine, CPU looks out into ROM for further instruction.
- The ROM contains a JUMP instruction which tells the CPU to bring up the BIOS
- BIOS determines all the list of bootable devices available in the system.
- Prompts to select bootable device which can be Hard Disk, CD/DVD-ROM, Floppy Drive, USB Flash Memory Stick etc (optional)
- The operating system then tries to boot from the hard disk, where the MBR contains the primary boot loader.
Boot Loader
To be very brief, this phase includes loading of the boot loader (MBR and GRUB/LILO) into memory to bring up the kernel.
MBR (Master Boot Record)
- It is the first sector of the Hard Disk with a size of 512 bytes.
- The first 434 - 446 bytes are the primary boot loader, 64 bytes for the partition table and 6 bytes for the MBR validation timestamp.
Since the MBR is too small to hold everything needed to boot, GRUB is used, with the details of the filesystem in /boot/grub/grub.conf and the file system drivers
GRUB (GRand Unified Boot loader)
This loads the kernel in 3 stages
GRUB stage 1:
- The primary boot loader takes up less than 512 bytes of disk space in the MBR - too small a space to contain the instructions necessary to load a complex operating system.
- Instead the primary boot loader performs the function of loading either the stage 1.5 or stage 2 boot loader.
- Stage 1 can load the stage 2 directly, but it is normally set up to load the stage 1.5.
- This can happen when the /boot partition is situated beyond the 1024 cylinder head of the hard drive.
- GRUB Stage 1.5 is located in the first 30 KB of Hard Disk immediately after MBR and before the first partition.
- This space is utilized to store file system drivers and modules.
- This enables stage 1.5 to load stage 2 from any known location on the file system, i.e. /boot/grub
GRUB Stage 2:
- This is responsible for loading kernel from /boot/grub/grub.conf and any other modules needed
- Loads a GUI interface, i.e. the splash image located at /grub/splash.xpm.gz, with a list of available kernels where you can manually select the kernel, or else after the default timeout value the selected kernel will boot
Sample /boot/grub/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.18-194.26.1.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.26.1.el5 ro root=/dev/VolGroup00/root clocksource=acpi_pm divisor=10
initrd /initrd-2.6.18-194.26.1.el5.img
title Red Hat Enterprise Linux Server (2.6.18-194.11.4.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.11.4.el5 ro root=/dev/VolGroup00/root clocksource=acpi_pm divisor=10
initrd /initrd-2.6.18-194.11.4.el5.img
title Red Hat Enterprise Linux Server (2.6.18-194.11.3.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.11.3.el5 ro root=/dev/VolGroup00/root clocksource=acpi_pm divisor=10
initrd /initrd-2.6.18-194.11.3.el5.img
For more information on GRUB and LILO follow the below link
What is GRUB Boot Loader ?
Kernel
This can be considered the heart of the operating system, responsible for handling all system processes.
Kernel is loaded in the following stages:
- As soon as it is loaded, the kernel configures the hardware and the memory allocated to the system.
- Next it uncompresses the initrd image (compressed using zlib into zImage or bzImage formats) and mounts it and loads all the necessary drivers.
- Loading and unloading of kernel modules is done with the help of programs like insmod, and rmmod present in the initrd image.
- Looks out for the hard disk type, be it LVM or RAID.
- Unmounts initrd image and frees up all the memory occupied by the disk image.
- Then kernel mounts the root partition as specified in grub.conf as read-only.
- Next it runs the init process
For more information on kernel follow the below link
What is a Kernel in Linux?
Init Process
- Boots the system into the runlevel specified in /etc/inittab
# Default runlevel. The runlevels used by RHS are:
# 0 - halt (Do NOT set initdefault to this)
# 1 - Single user mode
# 2 - Multiuser, without NFS (The same as 3, if you do not have networking)
# 3 - Full multiuser mode
# 4 - unused
# 5 - X11
# 6 - reboot (Do NOT set initdefault to this)
#
id:5:initdefault:
As per the above output, the system will boot into runlevel 5
You can check current runlevel details of your system using below command on the terminal
# who -r
run-level 3 Jan 28 23:29 last=S
- Next, as per the fstab entries, file system integrity is checked and the root partition is re-mounted as read-write (earlier it was mounted as read-only).
Runlevel scripts
A number of runlevel scripts are defined inside /etc/rc.d/rcx.d
Runlevel Directory
0 /etc/rc.d/rc0.d
1 /etc/rc.d/rc1.d
2 /etc/rc.d/rc2.d
3 /etc/rc.d/rc3.d
4 /etc/rc.d/rc4.d
5 /etc/rc.d/rc5.d
6 /etc/rc.d/rc6.d
- Based on the selected runlevel, the init process then executes startup scripts located in subdirectories of the /etc/rc.d directory.
- Scripts used for runlevels 0 to 6 are located in subdirectories /etc/rc.d/rc0.d through /etc/rc.d/rc6.d, respectively.
- For more details on scripts inside /etc/rc.d follow the below link
What are the S and K scripts in the /etc/rcx.d directories
- Lastly, init runs whatever it finds in /etc/rc.d/rc.local (regardless of run level). rc.local is rather special in that it is executed every time that you change run levels.
Troubleshooting in linux :
Recover deleted /etc/passwd, /etc/shadow & /etc/gshadow :
Everything seems fine when your Linux machine works just the way you want it to. But that feeling changes dramatically when your machine starts creating problems that you find really difficult to sort out. Not everyone can troubleshoot a Linux machine efficiently
but you can, if you are ready to stay with us for the next few minutes. Let’s look at what to do when some of your important system files are deleted or corrupted under Red Hat Enterprise Linux 5. The action begins now!
Scene 1: /etc/passwd is deleted
This is an important file in Linux as it contains information about user accounts and passwords. If it’s missing from your system and you try to log in to a user account, you get an error message stating "Login incorrect", even after restarting the system.
Now that you have seen the problem and its consequences, it’s time to solve it. Boot into single user mode. At the start of booting, press any key to enter the GRUB menu. Here you will see a list of the operating systems installed. Just select the one you are working with and press e.
It’s time to have some fun with kernel parameters. So highlight the kernel and again press e to edit its parameters.
Next, instruct the kernel to boot into single user mode, which is also known as maintenance mode. Just type 1 after a space and press the Enter key. Now press b to continue the booting process.
Now that you have booted into single user mode, you are probably asking yourself, "What is next?" The tricky portion of this exercise is now over and it takes just one command to have your passwd file back in its place. Actually, there is a file /etc/passwd-, which is nothing but the backup file for /etc/passwd. So all you need to do is to issue the following command:
cp /etc/passwd- /etc/passwd
and you are done. Now you can issue the init 5 command to switch to the graphical mode. Everything is fine now. You can also find the backup of /etc/shadow and /etc/gshadow as /etc/shadow- and /etc/gshadow- respectively.
/etc/pam.d/login is deleted :
If your /etc/pam.d/login file is deleted and you try to log in, it won’t ask you to enter your password after entering your username. Instead, it will continuously show the localhost login prompt. Here again, there is a single command that will solve the problem for you:
cp /etc/pam.d/system-auth /etc/pam.d/login
Just boot into the single user mode as done earlier, type this command and you’ll be able to log in normally. There is also a second solution to this problem, which we’ll look at after a while.
/etc/inittab is deleted :
We know that in Linux, init is the first process to be started and it starts all the other processes. The /etc/inittab file contains instructions for the init process and if it’s missing, then no further process can be launched. On starting a system with no inittab file, it will show the following message:
INIT:No inittab file found
and will ask you to enter a runlevel. When you do that, it again shows the message that no more processes are left in this runlevel.
Fixing this problem is not easy, because being in single user mode doesn’t help in this case. Here, you need the Linux rescue environment to fix this problem. So set your first boot device to CD and boot with the RHEL5 CD. At the boot prompt, type linux rescue to enter the rescue environment.
Once you have entered into the rescue environment, your system will be mounted under /mnt/sysimage. Here, reinstall the package that provides the /etc/inittab file. The overall process is given below:
chroot /mnt/sysimage
rpm -q --whatprovides /etc/inittab
mkdir /a
mount /dev/hdc /a
(Here /dev/hdc is the path of the CD drive. It may vary on your system, though.)
rpm -Uvh --force /a/Server/initscripts-8.45.25-1.el5.i386.rpm
You can also hit the Tab key after typing init to auto-complete the package name.
Now you’ll get your /etc/inittab file back. The same procedure can be applied to recover the /etc/pam.d/login file. In this case, you’ll have to install the util-linux package. Once you are done with it, type exit to leave the rescue environment, set your first boot device to hard disk and boot normally.
/boot/grub/grub.conf is deleted :
This file is the configuration file of the GRUB boot loader. If it is deleted and you start your machine, you will see a GRUB prompt that indicates that grub.conf is missing and there is no further instruction for GRUB to carry on its operation.
But don’t worry, as we’ll solve this problem, too, in the next few minutes. You don’t even need to enter single user mode or the Linux rescue environment for this. At the GRUB prompt, you can enter a few commands that will make your system boot. So here we go: type root ( and hit Tab to find out the disks attached to the system. In my case, I got hd0 and fd0, the hard disk and floppy disk, respectively. Now, the boot files are on the first partition of the first hard disk, which is hd0,0. So the complete command would be root (hd0,0). Enter this command and press the Enter key to carry on.
You now need to find out the kernel image file. So enter kernel /v and hit Tab to auto complete it. In my system, it’s vmlinuz-2.6.18-128.el5. Please note it down as we’ll require this information further, and then press Enter.
Next, let’s figure out the initrd image file. So enter initrd /i and press Tab to auto-complete it. For me, it’s initrd-2.6.18-128.el5.img. Again note it down and press Enter.
Type boot and press Enter, and the system will boot normally.
Now it’s time to create a grub.conf file manually. So create the /boot/grub/grub.conf file and enter the following data in it:
splashimage=(hd0,0)/grub/splash.xpm.gz
default=0
timeout=5
title Red Hat
root (hd0,0)
kernel /vmlinuz-2.6.18-128.el5
initrd /initrd-2.6.18-128.el5.img
Save the file and quit. You have created a grub.conf file manually to resolve the problem. Don’t forget that the kernel and initrd image file names may vary on your system. That’s why I asked you to note them down earlier. You can also find them in the /boot folder once you are logged in; it’s not a big issue.
So we have looked at solutions to four different problems. I hope this information assists you in learning Linux troubleshooting. Carry on this work and acquire more troubleshooting skills, because that’s what makes you a true Linux geek.
Uname :-
[root@dedicated2388 ~]# man uname
UNAME(1) User Commands UNAME(1)
NAME
uname - print system information
SYNOPSIS
uname [OPTION]...
DESCRIPTION
Print certain system information. With no OPTION, same as -s.
-a, --all
print all information, in the following order, except omit -p and -i if unknown:
-s, --kernel-name
print the kernel name
-n, --nodename
print the network node hostname
-r, --kernel-release
print the kernel release
-v, --kernel-version
print the kernel version
-m, --machine
print the machine hardware name
-p, --processor
print the processor type or "unknown"
-i, --hardware-platform
print the hardware platform or "unknown"
-o, --operating-system
print the operating system
--help display this help and exit
--version
output version information and exit
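A couple of typical invocations of the command described above: uname -r prints just the kernel release (useful when matching /boot file or kernel module names), while uname -a prints all of the information in one line.
# uname -r
# uname -a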
Kickstart :-
install
cdrom
lang en_US.UTF-8
keyboard us
langsupport --default=en_US.UTF-8 en_US.UTF-8
network --device eth0 --bootproto=query --hostname=query
rootpw --iscrypted $1$AhgzU2li$Z4r5C3cqWG94lHT7gOw1N.
firewall --enabled
selinux --disabled
authconfig --enableshadow --enablemd5
timezone Asia/Calcutta
bootloader --location=mbr --append="console=xvc0"
#
zerombr yes
clearpart --all
part /boot --fstype ext3 --size=150 --ondisk=sda
part / --fstype ext3 --size=50000
part pv.01 --size=1 --grow --ondisk=sda
part pv.02 --size=1 --grow --ondisk=sdb
volgroup rootvg pv.01
volgroup satvg pv.02
logvol /home --fstype=ext3 --name=lv_home --vgname=rootvg --size=8000
logvol swap --name=lv_swap --vgname=satvg --size=8072
# In the above partition layout (with LVM) I have
# used two disks, sda and sdb, for different volume groups.
# You don't have to use LVM; on IDE systems the disks would be hda, hdb, etc.
reboot
#
%packages
@afrikaans-support
@arabic-support
@assamese-support
@base
@bengali-support
@brazilian-support
@breton-support
@british-support
@bulgarian-support
@catalan-support
@chinese-support
@core
@croatian-support
@czech-support
@dns-server
@danish-support
@development-libs
@development-tools
@dutch-support
@editors
@estonian-support
@faeroese-support
@finnish-support
@french-support
@gaelic-support
@galician-support
@german-support
@greek-support
@gujarati-support
@hebrew-support
@hindi-support
@hungarian-support
@icelandic-support
@indonesian-support
@irish-support
@italian-support
@japanese-support
@kannada-support
@korean-support
@legacy-software-support
@malayalam-support
@marathi-support
@norwegian-support
@oriya-support
@polish-support
@portuguese-support
@punjabi-support
@romanian-support
@russian-support
@serbian-support
@sinhala-support
@slovak-support
@slovenian-support
@spanish-support
@swedish-support
@system-tools
@tamil-support
@telugu-support
@thai-support
@turkish-support
@ukrainian-support
@urdu-support
@welsh-support
@base-x
iscsi-initiator-utils
fipscheck
device-mapper-multipath
sgpio
python-dmidecode
imake
expect
emacs-nox
emacs
audit
sysstat
xorg-x11-utils
xorg-x11-server-Xvfb
-psgml
-zisofs-tools
-vnc
-nmap
-screen
-xdelta
-OpenIPMI-tools
-openldap-clients
-samba-client
-bluez-hcidump
-zsh
-xorg-x11-apps
-pirut
-openssh-askpass
-rhn-setup-gnome
-firstboot
-system-config-display
-freeglut
-gdm
-policycoreutils-gui
-rhgb
-synaptics
-krb5-auth-dialog
-system-config-soundcard
-xterm
-dejavu-lgc-fonts
-subscription-manager-firstboot
-linuxwacom
-system-config-services
-vnc-server
-system-config-date
-glx-utils
-wdaemon
-authconfig-gtk
-system-config-printer
-system-config-network
-system-config-users
%post
echo -e "[rhel-remoteftp.rep]\nname=rks \nbaseurl=ftp://10.219.39.62/pub/Server \nenabled=1 \ngpgcheck=0 " > /etc/yum.repos.d/rks.repo
yum install xinetd* -y
yum install caching-nameserver -y
yum install net-snmp* -y
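A kickstart file like this is typically fed to the installer from the boot prompt; the URL below is just a placeholder for wherever the file is hosted:
linux ks=http://server.example.com/ks.cfg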
Performance tools :
vmstat and iostat :
Linux Performance Monitoring with Vmstat and Iostat Commands
This is part of our ongoing series on commands and performance monitoring in Linux. The vmstat and iostat commands are available on all major Unix-like operating systems (Linux/Unix/FreeBSD/Solaris).
If the iostat command is not available on your box, please install the sysstat package. The iostat and sar commands are part of sysstat, the system monitoring tools collection (vmstat itself normally ships with the procps package); iostat generates reports of CPU and device statistics. You may download and install sysstat from its source tarball, but installing it through yum is recommended.
Install Sysstat in Linux
- vmstat – Summary information of Memory, Processes, Paging etc.
- iostat – Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions.
6 Vmstat Command Examples in Linux
1. List Active and Inactive Memory
In the example below there are six groups of columns. The significance of each column is explained in detail in the vmstat man page. The most important fields are free under memory, and si and so under swap.
- free – amount of free/idle memory.
- si – amount swapped in from disk every second (in kilobytes).
- so – amount swapped out to disk every second (in kilobytes).
Note: If you run vmstat without parameters, it displays a summary report since system boot.
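The command behind this example is presumably vmstat with the -a switch, which adds the active and inactive memory columns:
# vmstat -a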
2. Execute vmstat every 'X' seconds for 'N' number of times
With this command, vmstat executes every two seconds and stops automatically after six intervals.
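For example:
# vmstat 2 6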
3. Vmstat with timestamps
The vmstat command with the -t parameter shows a timestamp with every line printed, as shown below.
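For example (interval and count chosen for illustration):
# vmstat -t 2 5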
4. Statistics of Various Counters
The vmstat command with the -s switch displays a summary of various event counters and memory statistics.
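For example:
# vmstat -s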
5. Disk Statistics
vmstat with the -d option displays statistics for all disks.
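For example:
# vmstat -d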
6. Display Statistics in Megabytes
vmstat displays statistics in megabytes when given the -S M parameters (uppercase M for megabytes). By default vmstat displays statistics in kilobytes.
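For example:
# vmstat -S M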
6 Iostat Command Examples in Linux
7. Display CPU and I/O statistics
iostat without arguments displays CPU and I/O statistics of all partitions as shown below.
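For example:
# iostat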
8. Show only CPU Statistics
iostat with the -c argument displays only CPU statistics, as shown below.
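For example:
# iostat -c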
9. Show only Disk I/O Statistics
iostat with the -d argument displays only disk I/O statistics of all partitions, as shown.
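For example:
# iostat -d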
10. Show I/O Statistics of a Single Device
By default iostat displays statistics of all partitions; with the -p argument and a device name it displays disk I/O statistics for that specific device only, as shown.
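For example (sda is just a placeholder device name):
# iostat -p sda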
11. Display LVM Statistics
With the -N (uppercase) parameter iostat displays statistics using the device-mapper (LVM) names, as shown.
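For example:
# iostat -N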
12. iostat Version
With the -V (uppercase) parameter iostat displays its version, as shown.
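For example:
# iostat -V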
mpstat – Processors Statistics
1. Using the mpstat command without any option will display the global average activity of all CPUs.
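For example:
# mpstat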
2. Using mpstat with the '-P' option (indicate processor number) and 'ALL' will display statistics for all CPUs one by one, starting from CPU 0 (the first one).
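For example:
# mpstat -P ALL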
3. To display the statistics N times at an interval of n seconds, along with the average for each CPU, use the following command.
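For example (a 2-second interval and 5 iterations, both illustrative):
# mpstat -P ALL 2 5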
4. The '-I' option will print the total interrupt statistics per processor.
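For example (the SUM keyword is my assumption; it totals interrupts per processor):
# mpstat -I SUM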
5. Get all of the above information in one command, i.e. equivalent to "-u -I ALL -P ALL".
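For example:
# mpstat -A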
pidstat – Process and Kernel Threads Statistics
pidstat is used for monitoring processes and the threads currently being managed by the kernel. It can also report statistics about child processes and threads.
Syntax
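The general form, as given in the pidstat man page, is:
pidstat [ options ] [ interval [ count ] ]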
6. Using the pidstat command without any argument will display all active tasks.
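For example:
# pidstat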
7. To print all active and non-active tasks use the option ‘-p‘ (processes).
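For example (ALL is my assumption for including non-active tasks as well):
# pidstat -p ALL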
8. Using the pidstat command with the '-d 2' option, we can get I/O statistics, where 2 is the interval in seconds between refreshed statistics. This option can be handy when your system is undergoing heavy I/O and you want clues about which processes are consuming the most resources.
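For example:
# pidstat -d 2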
9. To see the CPU statistics, along with all threads, of process ID 4164 at an interval of 2 seconds for 3 iterations, use the following command with the '-t' option (display statistics of the selected process's threads).
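For example:
# pidstat -t -p 4164 2 3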
10. Use the '-rh' option to see the memory utilization of processes whose utilization varies frequently, at a 2-second interval.
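For example (the count of 3 is illustrative):
# pidstat -rh 2 3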
11. To print all processes whose command name contains the string "VB", use the '-C' option, together with '-t' to see their threads as well.
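For example (-C matches against the command-name string):
# pidstat -t -C "VB"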
12. To get realtime priority and scheduling information, use the '-R' option.
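For example (the -R option needs a reasonably recent sysstat release):
# pidstat -R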
Here I am not going to cover the iostat utility again, as we have already covered it. Please have a look at "Linux Performance Monitoring with Vmstat and Iostat" above to get all the details about iostat.
sar – System Activity Reporter
Using the "sar" command, we can get reports about the whole system's performance. This can help us locate system bottlenecks and find solutions to annoying performance issues.
The Linux kernel maintains internal counters that keep track of all requests, their completion times, I/O block counts, etc. From all this information, sar calculates rates and ratios of these requests to identify bottleneck areas.
The main thing about sar is that it reports all activity over a period of time. So make sure that sar collects data at an appropriate time (not at lunch time or on a weekend :).
13. The following is a basic command to invoke sar. It will create a file named "sarfile" in your current directory. The '-u' option is for CPU details, and it will collect 5 reports at an interval of 2 seconds.
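For example:
# sar -u -o sarfile 2 5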
14. In the above example, we invoked sar interactively. We also have the option to invoke it non-interactively via cron using the scripts /usr/local/lib/sa1 and /usr/local/lib/sa2 (if you used /usr/local as the prefix at installation time).
- /usr/local/lib/sa1 is a shell script that we can schedule via cron to create a daily binary log file.
- /usr/local/lib/sa2 is a shell script that converts the binary log file to human-readable form.
Use the following Cron entries for making this non-interactive:
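A sketch for root's crontab, following the /usr/local prefix mentioned above (the schedule itself is illustrative):
# collect a sample every 10 minutes
*/10 * * * * /usr/local/lib/sa1 1 1
# produce the daily report just before midnight
53 23 * * * /usr/local/lib/sa2 -A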
At the back end, the sa1 script calls the sadc (System Activity Data Collector) utility to fetch the data at a particular interval, and sa2 calls sar to convert the binary log file to human-readable form.
15. Check the run queue length, total number of processes and load average using the '-q' option.
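For example:
# sar -q 2 5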
16. Check statistics about the mounted file systems using ‘-F‘.
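For example (-F requires a newer sysstat release):
# sar -F 2 5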
17. View network statistics using ‘-n DEV‘.
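For example:
# sar -n DEV 2 5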
18. View block device statistics like iostat using ‘-d‘.
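For example:
# sar -d 2 5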
19. To print memory statistics use ‘-r‘ option.
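For example:
# sar -r 2 5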
20. Using 'sadf -d', we can extract data in a format that can be processed by databases.
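For example, reading back the binary file created in example 13 and extracting the CPU data (-u is my choice here):
# sadf -d sarfile -- -u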