Summary of Contents for Red Hat ENTERPRISE LINUX 3 - INTRODUCTION TO SYSTEM ADMINISTRATION
Red Hat Enterprise Linux 3 Introduction to System Administration...
All other trademarks and copyrights referred to are the property of their respective owners. The GPG fingerprint of the security@redhat.com key is: CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E...
Table of Contents
Introduction
1. Changes to This Manual
2. Document Conventions
3. More to Come
3.1. Send in Your Feedback
4. Sign Up for Support
1. The Philosophy of System Administration
1.1. ...
3. Bandwidth and Processing Power
3.1. Bandwidth
3.1.1. Buses
3.1.2. Datapaths
3.1.3. Potential Bandwidth-Related Problems
3.1.4. Potential Bandwidth-Related Solutions
3.1.5. In Summary
3.2. Processing Power
3.2.1. ...
Enterprise Linux Reference Guide. HTML, PDF, and RPM versions of the manuals are available on the Red Hat Enterprise Linux Documentation CD and online at http://www.redhat.com/docs/.
Note: Although this manual reflects the most current information possible, read the Red Hat Enterprise Linux Release Notes for information that may not have been available prior to our documentation being finalized.
Introduction
Chapter 1, The Philosophy of System Administration
This chapter has been updated to more accurately reflect Red Hat Enterprise Linux functionality.
Chapter 2, Resource Monitoring
This chapter has been updated to introduce the OProfile system-wide profiler.
Chapter 3, Bandwidth and Processing Power
This chapter has been updated to more accurately reflect Red Hat Enterprise Linux functionality.
Introduction
application
This style indicates that the program is an end-user application (as opposed to system software). For example: Use Mozilla to browse the Web.
[key]
A key on the keyboard is shown in this style. For example: To use [Tab] completion, type in a character and then press the [Tab] key. Your terminal displays the list of files in the directory that start with that letter.
Introduction
[stephen@maturin stephen]$
leopard login:
user input
Text that the user has to type, either on the command line or into a text box on a GUI screen, is displayed in this style. In the following example, text is displayed in this style: To boot your system into the text-based installation program, you must type in the text command at the prompt.
If you spot a typo in the Red Hat Enterprise Linux Introduction to System Administration, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla (http://bugzilla.redhat.com/bugzilla) against the component rhel-isa. Be sure to mention the manual's identifier: ...
Chapter 1. The Philosophy of System Administration
Although the specifics of being a system administrator may change from platform to platform, there are underlying themes that do not. These themes make up the philosophy of system administration. The themes are:
• Automate everything
...
Chapter 1. The Philosophy of System Administration 1.2. Document Everything If given the choice between installing a brand-new server and writing a procedural document on performing system backups, the average system administrator would install the new server every time. While this is not at all unusual, you must document what you do. Many system administrators put off doing the necessary documentation for a variety of reasons: "I will get around to it later."...
Chapter 1. The Philosophy of System Administration All of these changes should be documented in some fashion. Otherwise, you could find yourself being completely confused about a change you made several months earlier. Some organizations use more complex methods for keeping track of changes, but in many cases a simple revision history at the start of the file being changed is all that is necessary.
Chapter 1. The Philosophy of System Administration module to a faster model, and reboot. Once this is done, you will move the database itself to faster, RAID-based storage. Here is one possible announcement for this situation: System Downtime Scheduled for Friday Night Starting this Friday at 6pm (midnight for our associates in Berlin), all financial applications will be unavail- able for a period of approximately four hours.
Chapter 1. The Philosophy of System Administration 1.3.3. Tell Your Users What You Have Done After you have finished making the changes, you must tell your users what you have done. Again, this should be a summary of the previous messages (invariably someone will not have read them.) However, there is one important addition you must make.
Chapter 1. The Philosophy of System Administration
• Time (often of critical importance when the time involves things such as the amount of time during which system backups may take place)
• Knowledge (whether it is stored in books, system documentation, or the brain of a person that has ...
Chapter 1. The Philosophy of System Administration While you are thinking about security, do not make the mistake of assuming that possible intruders will only attack your systems from outside of your company. Many times the perpetrator is someone within the company. So the next time you walk around the office, look at the people around you and ask yourself this question: What would happen if that person were to attempt to subvert our security? Note...
Chapter 1. The Philosophy of System Administration fantastic administrator would be caught flat-footed. The reason? Our fantastic administrator failed to plan ahead. Certainly no one can predict the future with 100% accuracy. However, with a bit of awareness it is easy to read the signs of many changes: An offhand mention of a new project gearing up during that boring weekly staff meeting is a sure •...
Chapter 1. The Philosophy of System Administration
to months. The crontab command is used to manipulate the files controlling the cron daemon that actually schedules each cron job for execution. The at command (and the closely-related batch command) are more appropriate for scheduling the execution of one-time scripts or commands.
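As a brief illustration of the difference (the schedule and script path here are purely hypothetical), a recurring job can be registered by running crontab -e and adding an entry such as:
# run a hypothetical cleanup script at 2:30 AM every day
30 2 * * * /usr/local/bin/cleanup.sh
A one-time job can instead be handed to at; for example, echo /usr/local/bin/cleanup.sh | at 02:30 tomorrow runs the same script once at the requested time.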
Chapter 1. The Philosophy of System Administration the choice of an email client tends to be a personal one; therefore, the best approach is to try each client for yourself, and use what works best for you. 1.10.3. Security As stated earlier in this chapter, security cannot be an afterthought, and security under Red Hat Enter- prise Linux is more than skin-deep.
Chapter 1. The Philosophy of System Administration
• at(1) man page — Schedule commands and scripts for execution at a later time with this utility.
• man page — Learn more about the default shell (and shell script writing) with this documentation ...
Chapter 1. The Philosophy of System Administration The Red Hat Enterprise Linux System Administration Guide; Red Hat, Inc. — Includes chapters on • managing users and groups, automating tasks, and managing log files. Linux Administration Handbook by Evi Nemeth, Garth Snyder, and Trent R. Hein; Prentice Hall — •...
Chapter 2. Resource Monitoring As stated earlier, a great deal of system administration revolves around resources and their efficient use. By balancing various resources against the people and programs that use those resources, you waste less money and make your users as happy as possible. However, this leaves two questions: What are resources? And: How is it possible to know what resources are being used (and to what extent)?
Chapter 2. Resource Monitoring 1. Monitoring to identify the nature and scope of the resource shortages that are causing the per- formance problems 2. The data produced from monitoring is analyzed and a course of action (normally performance tuning and/or the procurement of additional hardware) is taken to resolve the problem 3.
Chapter 2. Resource Monitoring How many of those I/O operations are reads? How many are writes? • What is the average amount of data read/written with each I/O? • There are more ways of studying disk drive performance; these points have only scratched the surface. The main concept to keep in mind is that there are many different types of data for each resource.
Chapter 2. Resource Monitoring I/O processing, and so on. These statistics also reveal that, when system performance is monitored, there are no boundaries between the different statistics. In other words, CPU utilization statistics may end up pointing to a problem in the I/O subsystem, or memory utilization statistics may reveal an application design flaw.
Chapter 2. Resource Monitoring Page Ins/Page Outs These statistics make it possible to gauge the flow of pages from system memory to attached mass storage devices (usually disk drives). High rates for both of these statistics can mean that the system is short of physical memory and is thrashing, or spending more system resources on moving pages into and out of memory than on actually running applications.
Chapter 2. Resource Monitoring
Transfers per Second
This statistic is a good way of determining whether a particular device's bandwidth limitations are being reached.
Reads/Writes per Second
A slightly more detailed breakdown of transfers per second, these statistics allow the system administrator to more fully understand the nature of the I/O loads a storage device is experiencing.
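As a rough sketch of how these per-device statistics are commonly gathered under Red Hat Enterprise Linux (the interval and count below are arbitrary choices), the iostat utility from the sysstat package can be invoked as:
iostat -d 2 3
# three device reports, two seconds apart, showing tps along with
# the number of blocks read and written per second for each disk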
Chapter 2. Resource Monitoring
option, and can cause any changes between updates to be highlighted by using the -d option, as in the following command:
watch -n 1 -d free
For more information, refer to the watch man page. The watch command runs until interrupted with [Ctrl]-[C]. The watch command is something to keep ...
Chapter 2. Resource Monitoring
2.5.2.1. The GNOME System Monitor — A Graphical top
If you are more comfortable with graphical user interfaces, the GNOME System Monitor may be more to your liking. Like top, the GNOME System Monitor displays information related to overall system status, process counts, memory and swap utilization, and process-level statistics.
Chapter 2. Resource Monitoring
The process-related fields are:
• r — The number of runnable processes waiting for access to the CPU
• b — The number of processes in an uninterruptible sleep state
The memory-related fields are:
• swpd — The amount of virtual memory used
...
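As a quick, hypothetical illustration of how these fields are usually examined (the interval and count are arbitrary):
vmstat 5 10
# ten samples, five seconds apart; the first line reports averages
# since boot, so the later samples are the ones worth studying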
Chapter 2. Resource Monitoring 2.5.4. The Sysstat Suite of Resource Monitoring Tools While the previous tools may be helpful for gaining more insight into system performance over very short time frames, they are of little use beyond providing a snapshot of system resource utilization. In addition, there are aspects of system performance that cannot be easily monitored using such simplistic tools.
Chapter 2. Resource Monitoring
• The device specification, displayed as <major-number>-<sequence-number>, where <major-number> is the device's major number, and <sequence-number> is a sequence number starting at zero.
• The number of transfers (or I/O operations) per second.
...
Chapter 2. Resource Monitoring
2.5.4.4. The sar command
The sar command produces system utilization reports based on the data collected by sadc. As configured in Red Hat Enterprise Linux, sar is automatically run to process the files automatically collected by sadc. The report files are written to /var/log/sa/ and are named ...
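As an illustrative sketch (the file name depends on the day of the month, so treat the path as an example only), the previously collected data can also be summarized on demand:
sar -u -f /var/log/sa/sa21
# CPU utilization report read from the data file for the 21st
Running sar with no options reports on the current day's data.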
Chapter 2. Resource Monitoring 2.5.5. OProfile The OProfile system-wide profiler is a low-overhead monitoring tool. OProfile makes use of the pro- cessor’s performance monitoring hardware to determine the nature of performance-related problems. Performance monitoring hardware is part of the processor itself. It takes the form of a special counter, incremented each time a certain event (such as the processor not being idle or the requested data not being in cache) occurs.
Chapter 2. Resource Monitoring
op_to_source — Displays annotated source code and/or assembly listings
op_visualise — Graphically displays collected data
These programs make it possible to display the collected data in a variety of ways. The administrative interface software controls all aspects of data collection, from specifying which events are to be monitored to starting and stopping the collection itself.
Chapter 2. Resource Monitoring
CTR_COUNT[1]=
CTR_KERNEL[1]=1
CTR_USER[1]=1
CTR_UM[1]=0
CTR_EVENT_VAL[1]=
one_enabled=1
SEPARATE_LIB_SAMPLES=0
SEPARATE_KERNEL_SAMPLES=0
VMLINUX=/boot/vmlinux-2.4.21-1.1931.2.349.2.2.entsmp
Next, use opcontrol to actually start data collection, with the command:
opcontrol --start
Using log file /var/lib/oprofile/oprofiled.log
Daemon started.
Profiler running.
Verify that the oprofiled daemon is running with the command:
ps x | grep -i oprofiled
32019 ?
Chapter 2. Resource Monitoring
Where:
• sample-count represents the number of samples collected
• sample-percent represents the percentage of all samples collected for this specific executable
• unused-field is a field that is not used
• ... represents the name of the file containing executable code for which samples ...
The mailing list, unfortunately, appears to have been overrun by spam and is no longer used. http://people.redhat.com/alikins/system_tuning.html — System Tuning Info for Linux Servers. A • stream-of-consciousness approach to performance tuning and resource monitoring for servers.
Chapter 3. Bandwidth and Processing Power Of the two resources discussed in this chapter, one (bandwidth) is often hard for the new system administrator to understand, while the other (processing power) is usually a much easier concept to grasp. Additionally, it may seem that these two resources are not that closely related — why group them together? The reason for addressing both resources together is that these resources are based on the hardware that tie directly into a computer’s ability to move and process data.
Chapter 3. Bandwidth and Processing Power 3.1.1.1. Examples of Buses No matter where in a computer system you look, there are buses. Here are a few of the more common ones: Mass storage buses (ATA and SCSI) • Networks (Ethernet and Token Ring) •...
Chapter 3. Bandwidth and Processing Power 3.1.4. Potential Bandwidth-Related Solutions Fortunately, bandwidth-related problems can be addressed. In fact, there are several approaches you can take: Spread the load • Reduce the load • Increase the capacity • The following sections explore each approach in more detail. 3.1.4.1.
Chapter 3. Bandwidth and Processing Power For example, consider a SCSI adapter that is connected to a PCI bus. If there are performance problems with SCSI disk I/O, it might be the result of a poorly-performing SCSI adapter, even though the SCSI and PCI buses themselves are nowhere near their bandwidth capabilities.
Chapter 3. Bandwidth and Processing Power But how is it that many different applications can seemingly run at once under a modern operating system? The answer is that these are multitasking operating systems. In other words, they create the illusion that many different things are going on simultaneously when in fact that is not possible. The trick is to give each process a fraction of a second’s worth of time running on the CPU before giving the CPU to another process for the next fraction of a second.
Chapter 3. Bandwidth and Processing Power 3.2.3.1.1. Reducing Operating System Overhead To reduce operating system overhead, you must examine your current system load and determine what aspects of it result in inordinate amounts of overhead. These areas could include: Reducing the need for frequent process scheduling •...
Chapter 3. Bandwidth and Processing Power 3.2.3.2.1. Upgrading the CPU The most straightforward approach is to determine if your system’s CPU can be upgraded. The first step is to determine if the current CPU can be removed. Some systems (primarily laptops) have CPUs that are soldered in place, making an upgrade impossible.
Chapter 3. Bandwidth and Processing Power 3.3. Red Hat Enterprise Linux-Specific Information Monitoring bandwidth and CPU utilization under Red Hat Enterprise Linux entails using the tools discussed in Chapter 2 Resource Monitoring; therefore, if you have not yet read that chapter, you should do so before continuing.
Chapter 3. Bandwidth and Processing Power
points with device names, it is possible to use this report to determine if, for example, the partition containing /home/ is experiencing an excessive workload. Actually, each line output from iostat -x is longer and contains more information than this; here is the remainder of each line (with the device column added for easier reading):
Device:
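A hedged example of producing this extended output (the interval and count are arbitrary, and the device names will differ from system to system):
iostat -x -d 2 5
# extended per-device statistics, including average request size,
# average queue length, and service times, sampled every two seconds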
Chapter 3. Bandwidth and Processing Power
3.3.2. Monitoring CPU Utilization on Red Hat Enterprise Linux
Unlike bandwidth, monitoring CPU utilization is much more straightforward. From a single percentage of CPU utilization in GNOME System Monitor, to the more in-depth statistics reported by sar, it is possible to accurately determine how much CPU power is being consumed and by what.
Chapter 3. Bandwidth and Processing Power
To gain more detailed knowledge regarding CPU utilization, we must change tools. If we examine output from vmstat, we obtain a slightly different understanding of our example system:
procs        memory        swap        system
   swpd   free   buff   cache ...
Chapter 3. Bandwidth and Processing Power
The statistics contained in this report are no different from those produced by many of the other tools. The biggest benefit here is that sar makes the data available on an ongoing basis and is therefore more useful for obtaining long-term averages, or for the production of CPU utilization graphs.
Chapter 3. Bandwidth and Processing Power
This report (which has been truncated horizontally to fit on the page) includes one column for each interrupt level (for example, the i002/s field illustrating the rate for interrupt level 2). If this were a multiprocessor system, there would be one line per sample period for each CPU.
The mailing list, unfortunately, appears to have been overrun by spam and is no longer used. http://people.redhat.com/alikins/system_tuning.html — System Tuning Info for Linux Servers. A • stream-of-consciousness approach to performance tuning and resource monitoring for servers.
Chapter 4. Physical and Virtual Memory All present-day, general-purpose computers are of the type known as stored program computers. As the name implies, stored program computers load instructions (the building blocks of programs) into some type of internal storage, where they subsequently execute those instructions. Stored program computers also use the same storage for data.
Chapter 4. Physical and Virtual Memory Very limited expansion capabilities (a change in CPU architecture would be required) • Expensive (more than one dollar/byte) • However, at the other end of the spectrum, off-line backup storage is: Very slow (access times may be measured in days, if the backup media must be shipped long •...
Chapter 4. Physical and Virtual Memory of cache is to function as a very fast copy of the contents of selected portions of RAM, any time a piece of data changes its value, that new value must be written to both cache memory and RAM. Otherwise, the data in cache and the data in RAM would no longer match.
Chapter 4. Physical and Virtual Memory Retrieving data is just as straightforward: 1. The address of the desired data is presented to the address connections. 2. The read/write connection is set to read mode. 3. The desired data is read from the data connections. While these steps seem simple, they take place at very high speeds, with the time spent on each step measured in nanoseconds.
Chapter 4. Physical and Virtual Memory Note Although there is much more to learn about hard drives, disk storage technologies are discussed in more depth in Chapter 5 Managing Storage. For the time being, it is only necessary to keep in mind the huge speed difference between RAM and disk-based technologies and that their storage capacity usually exceeds that of RAM by a factor of at least 10, and often by 100 or more.
Chapter 4. Physical and Virtual Memory A later approach known as overlaying attempted to alleviate the problem by allowing programmers to dictate which parts of their application needed to be memory-resident at any given time. In this way, code only required once for initialization purposes could be written over (overlayed) with code that would be used later.
Chapter 4. Physical and Virtual Memory 4.4. Virtual Memory: The Details First, we must introduce a new concept: virtual address space. Virtual address space is the maximum amount of address space available to an application. The virtual address space varies according to the system’s architecture and operating system.
Chapter 4. Physical and Virtual Memory While the first three actions are relatively straightforward, the last one is not. For that, we need to cover some additional topics. 4.4.2. The Working Set The group of physical memory pages currently dedicated to a specific process is known as the working set for that process.
Chapter 4. Physical and Virtual Memory However, this is no reason to throw up one’s hands and give up. The benefits of virtual memory are too great to do that. And, with a bit of effort, good performance is possible. The thing that must be done is to examine those system resources impacted by heavy use of the virtual memory subsystem.
Chapter 4. Physical and Virtual Memory
             total       used       free     shared    buffers     cached
Mem:       1288720     361448     927272          0      27844     187632
-/+ buffers/cache:      145972    1142748
Swap:       522104          0     522104
We note that this system has 1.2GB of RAM, of which only about 350MB is actually in use. As expected for a system with this much free RAM, none of the 500MB swap partition is in use.
Chapter 4. Physical and Virtual Memory
The kbmemshrd field is always zero for systems (such as Red Hat Enterprise Linux) using the 2.4 Linux kernel.
The lines for this report have been truncated to fit on the page. Here is the remainder of each line, with the timestamp added to the left to make reading easier:
12:00:01 AM kbswpfree kbswpused
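These memory and swap statistics can also be requested interactively; as a sketch (the option letters below are the ones used by sysstat versions of this era, and may differ in other releases):
sar -r
# kbmemfree, kbmemused, kbbuffers, kbcached, kbswpfree, kbswpused, ...
sar -W
# pswpin/s and pswpout/s, the rates at which pages are swapped in and out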
The mailing list, unfortunately, appears to have been overrun by spam and is no longer used. http://people.redhat.com/alikins/system_tuning.html — System Tuning Info for Linux Servers. A • stream-of-consciousness approach to performance tuning and resource monitoring for servers.
Chapter 4. Physical and Virtual Memory http://www.linuxjournal.com/article.php?sid=2396 — Performance Monitoring Tools for Linux. • This Linux Journal page is geared more toward the administrator interested in writing a customized performance graphing solution. Written several years ago, some of the details may no longer apply, but the overall concept and execution are sound.
Chapter 5. Managing Storage If there is one thing that takes up the majority of a system administrator’s day, it would have to be storage management. It seems that disks are always running out of free space, becoming overloaded with too much I/O activity, or failing unexpectedly. Therefore, it is vital to have a solid working knowledge of disk storage in order to be a successful system administrator.
Chapter 5. Managing Storage 5.1.2. Data reading/writing device The data reading/writing device is the component that takes the bits and bytes on which a computer system operates and turns them into the magnetic or optical variations necessary to interact with the materials coating the surface of the disk platters.
Chapter 5. Managing Storage 5.2. Storage Addressing Concepts The configuration of disk platters, heads, and access arms makes it possible to position the head over any part of any surface of any platter in the mass storage device. However, this is not sufficient; to use this storage capacity, we must have some method of giving addresses to uniform-sized parts of the available storage.
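Geometry-based addressing is usually expressed in terms of cylinders, heads, and sectors. To make the idea concrete, here is a purely illustrative calculation using the classic ATA limits of 16,383 cylinders, 16 heads, and 63 sectors per track, with 512-byte sectors:
16,383 cylinders x 16 heads x 63 sectors x 512 bytes = approximately 8.4 GB
which is why BIOSes that relied solely on cylinder/head/sector addressing could not see past roughly 8.4 GB of a larger drive.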
Chapter 5. Managing Storage 5.2.1.2. Head Although in the strictest sense we are selecting a particular disk platter, because each surface has a read/write head dedicated to it, it is easier to think in terms of interacting with a specific head. In fact, the device’s underlying electronics actually select one head and —...
Chapter 5. Managing Storage 5.3. Mass Storage Device Interfaces Every device used in a computer system must have some means of attaching to that computer system. This attachment point is known as an interface. Mass storage devices are no different — they have interfaces too.
Chapter 5. Managing Storage There were also proprietary interfaces from the larger computer vendors of the day (IBM and DEC, primarily). The intent behind the creation of these interfaces was to attempt to protect the extremely lucrative peripherals business for their computers. However, due to their proprietary nature, the de- vices compatible with these interfaces were more expensive than equivalent non-proprietary devices.
Chapter 5. Managing Storage electrical and mechanical aspects of the ATA interface but uses the communication protocol from the next interface discussed — SCSI. 5.3.2.2. SCSI Formally known as the Small Computer System Interface, SCSI as it is known today originated in the early 80s and was declared a standard in 1986.
Chapter 5. Managing Storage SCSI ID as the bus’s controller. This also means that, in practice, only 7 (or 15, for wide SCSI) devices may be present on a single bus, as each bus must reserve an ID for the controller. Most SCSI implementations include some means of scanning the SCSI bus;...
Chapter 5. Managing Storage 5.4.1. Mechanical/Electrical Limitations Because hard drives are electro-mechanical devices, they are subject to various limitations on their speed and performance. Every I/O request requires the various components of the drive to work to- gether to satisfy the request. Because each of these components have different performance character- istics, the overall performance of the hard drive is determined by the sum of the performance of the individual components.
Chapter 5. Managing Storage RPM is considered adequate only for entry-level drives. This averages approximately 3 milliseconds for a 10,000 RPM drive. 5.4.1.4. Access Arm Movement If there is one component in hard drives that can be considered its Achilles’ Heel, it is the access arm. The reason for this is that the access arm must move very quickly and accurately over relatively long distances.
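The 3 millisecond figure follows directly from the rotation rate: a 10,000 RPM platter completes roughly 167 revolutions per second, so one full revolution takes about 6 milliseconds, and on average the desired sector is half a revolution away from the head, giving an average rotational latency of approximately 3 milliseconds.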
Chapter 5. Managing Storage This is because, short of the I/O requester being an I/O benchmarking tool that does nothing but produce I/O requests as quickly as possible, some amount of processing must be done before an I/O is performed. After all, the requester must determine the nature of the I/O request before it can be performed.
Chapter 5. Managing Storage the files comprising the operating system is not affected even if the partition holding the users’ files becomes full. The operating system still has free space for its own use. Although it is somewhat simplistic, you can think of partitions as being similar to individual disk drives.
Chapter 5. Managing Storage 5.5.1.1.2.3. Logical Partitions Logical partitions are those partitions contained within an extended partition; in terms of use they are no different than a non-extended primary partition. 5.5.1.1.3. Partition Type Field Each partition has a type field that contains a code indicating the partition’s anticipated usage. The type field may or may not reflect the computer’s operating system.
Chapter 5. Managing Storage 5.5.2.2. Hierarchical Directory Structure While the file systems used in some very old operating systems did not include the concept of direc- tories, all commonly-used file systems today include this feature. Directories are themselves usually implemented as files, meaning that no special utilities are required to maintain them. Furthermore, because directories are themselves files, and directories contain files, directories can therefore contain other directories, making a multi-level directory hierarchy possible.
Chapter 5. Managing Storage percent of your available disk space. By making it easy to determine which users are in that 20 percent, you can more effectively manage your storage-related assets. Taking this a step further, some file systems include the ability to set per-user limits (often known as disk quotas) on the amount of disk space that can be consumed.
Chapter 5. Managing Storage 5.5.4. Enabling Storage Access Once a mass storage device has been properly partitioned, and a file system written to it, the storage is available for general use. For some operating systems, this is true; as soon as the operating system detects the new mass storage device, it can be formatted by the system administrator and may be accessed immediately with no additional effort.
Chapter 5. Managing Storage client (incurring zero additional cost in software procurement). And you have the best chance for good support and integration with the client operating system. There is a downside, however. This means that the server environment must be up to the task of providing good support for the network-accessible storage technologies required by the clients.
Chapter 5. Managing Storage 5.6.2.1.1. RAID Levels The Berkeley researchers originally defined five different RAID levels and numbered them "1" through "5." In time, additional RAID levels were defined by other researchers and members of the storage industry. Not all RAID levels were equally useful; some were of interest only for research purposes, and others could not be economically implemented.
Chapter 5. Managing Storage 5.6.2.1.1.2. RAID 1 RAID 1 uses two (although some implementations support more) identical disk drives. All data is written to both drives, making them mirror images of each other. That is why RAID 1 is often known as mirroring.
Chapter 5. Managing Storage constant updating of parity as data is written to the array would mean that the parity drive could become a performance bottleneck. By spreading the parity information evenly throughout the array, this impact is reduced. However, it is important to keep in mind the impact of parity on the overall storage capacity of the array.
Chapter 5. Managing Storage 5.6.2.1.2. RAID Implementations It is obvious from the previous sections that RAID requires additional "intelligence" over and above the usual disk I/O processing for individual drives. At the very least, the following tasks must be performed: Dividing incoming I/O requests to the individual disks in the array •...
Chapter 5. Managing Storage 1 array is indistinguishable from a non-RAID boot device, the BIOS can successfully start the boot process; the operating system can then change over to software RAID operation once it has gained control of the system. 5.6.3.
Chapter 5. Managing Storage 5.6.3.4. With LVM, Why Use RAID? Given that LVM has some features similar to RAID (the ability to dynamically replace failing drives, for instance), and some features providing capabilities that cannot be matched by most RAID im- plementations (such as the ability to dynamically add more storage to a central storage pool), many people wonder whether RAID is no longer important.
Chapter 5. Managing Storage Many times where a user is responsible for using large amounts of storage, it is the second type of person that is found to be responsible. 5.7.1.1.1. Handling a User’s Excessive Usage This is one area in which a system administrator needs to summon all the diplomacy and social skills they can muster.
Chapter 5. Managing Storage of the data, to ask you (or your organization’s operations staff — whatever is appropriate for your organization) to restore it. There are a few things to keep in mind so that this does not backfire on you. First and foremost is to not include files that are likely to need restoring;...
Chapter 5. Managing Storage No matter what the reason, there are times when you will be taken by surprise. To plan for these instances, try to configure your storage architecture for maximum flexibility. Keeping spare storage on-hand (if possible) can alleviate the impact of such unplanned events. 5.7.2.
Chapter 5. Managing Storage There is a problem with the first approach — depending on how access is granted, user #2 may have full access to all of user #1’s files. Worse, it might have been done in such a way as to permit all users in your organization access to user #1’s files.
Chapter 5. Managing Storage to device names. When adding or removing storage, always make sure you review (and update, if necessary) all device name references used by your operating system. 5.7.4.1. Adding Storage The process of adding storage to a computer system is relatively straightforward. Here are the basic steps: 1.
Chapter 5. Managing Storage The second situation is a bit more difficult, if only for the reason that a cable must be procured so that it can connect a disk drive to the channel. The new disk drive may be configured as master or slave (although traditionally the first disk drive on a channel is normally configured as master).
Chapter 5. Managing Storage The first step is to see which buses have available space for an additional disk drive. One of three situations is possible: There is a bus with less than the maximum number of disk drives connected to it •...
Chapter 5. Managing Storage 5.7.4.1.2. Partitioning Once the disk drive has been installed, it is time to create one or more partitions to make the space available to your operating system. Although the tools vary depending on the operating system, the basic steps are the same: 1.
Chapter 5. Managing Storage 5.7.4.1.5. Modifying the Backup Schedule Assuming that the new storage is being used to hold data worthy of being preserved, this is the time to make the necessary changes to your backup procedures and ensure that the new storage will, in fact, be backed up.
Chapter 5. Managing Storage On the other hand, if the data is still being used, then the data should reside on the system most appropriate for that usage. Of course, if this is the case, perhaps it would be easiest to move the data by reinstalling the disk drive on the new system.
Chapter 5. Managing Storage Note Device names under Red Hat Enterprise Linux are determined at boot-time. Therefore, changes made to a system’s hardware configuration can result in device names changing when the system reboots. Because of this, problems can result if any device name references in system configuration files are not updated appropriately.
Chapter 5. Managing Storage
• /dev/hda1 — The first partition on the first ATA drive
• /dev/sdb12 — The twelfth partition on the second SCSI drive
• /dev/sdad4 — The fourth partition on the thirtieth SCSI drive
5.9.1.1.4. Whole-Device Access
There are instances where it is necessary to access the entire device and not just a specific partition.
Chapter 5. Managing Storage
intended to. Also note that system configurations which do not use file systems (some databases, for example) cannot take advantage of file system labels.
5.9.1.2.2. Using devlabel
The devlabel software attempts to address the device naming issue in a different manner than file system labels.
Chapter 5. Managing Storage 5.9.2.2. EXT3 The ext3 file system builds upon ext2 by adding journaling capabilities to the already-proven ext2 codebase. As a journaling file system, ext3 always keeps the file system in a consistent state, elimi- nating the need for lengthy file system integrity checks. This is accomplished by writing all file system changes to an on-disk journal, which is then flushed on a regular basis.
Chapter 5. Managing Storage 5.9.3. Mounting File Systems To access any file system, it is first necessary to mount it. By mounting a file system, you direct Red Hat Enterprise Linux to make a specific partition (on a specific device) available to the system. Likewise, when access to a particular file system is no longer desired, it is necessary to umount it.
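As a minimal sketch (the device name and mount point are hypothetical), a file system on /dev/hda5 could be made available under /mnt/data and later released with:
mount -t ext3 /dev/hda5 /mnt/data
umount /mnt/data
The mount point must already exist as a directory, and umount refuses to run while any process is still using files under that directory.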
Chapter 5. Managing Storage
5.9.3.2. Seeing What is Mounted
In addition to mounting and unmounting disk space, it is possible to see what is mounted. There are several different ways of doing this:
• Viewing /etc/mtab
• Viewing /proc/mounts
• Issuing the command ...
Chapter 5. Managing Storage 5.9.3.2.2. Viewing /proc/mounts file is part of the proc virtual file system. As with the other files under /proc/mounts /proc/ "file" does not exist on any disk drive in your Red Hat Enterprise Linux system. mounts In fact, it is not even a file;...
Chapter 5. Managing Storage
5.9.4. Network-Accessible Storage Under Red Hat Enterprise Linux
There are two major technologies used for implementing network-accessible storage under Red Hat Enterprise Linux:
• NFS
• SMB
The following sections describe these technologies.
5.9.4.1. NFS
As the name implies, the Network File System (more commonly known as NFS) is a file system that may be accessed via a network connection.
Chapter 5. Managing Storage
Each line represents one file system and contains the following fields:
• File system specifier — For disk-based file systems, either a device file name (/dev/sda1), a file system label specification (LABEL=/), or a devlabel-managed symbolic link (...
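Here is a representative (and entirely hypothetical) /etc/fstab line for a partition mounted on /home:
LABEL=/home    /home    ext3    defaults    1 2
In the standard fstab layout, the fields after the specifier are the mount point, the file system type, the mount options, and the two numeric fields consulted by dump and fsck.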
Chapter 5. Managing Storage
2. View the disk drive's partition table, to ensure that the disk drive to be partitioned is, in fact, the correct one. In our example, fdisk displays the partition table by using the p command:
Command (m for help): p
Disk /dev/hda: 255 heads, 63 sectors, 1244 cylinders
Units = cylinders of 16065 * 512 bytes
   Device Boot ...
Chapter 5. Managing Storage
5.9.6.1.2. Formatting the Partition(s)
Formatting partitions under Red Hat Enterprise Linux is done using the mkfs utility program. However, mkfs does not actually do the work of writing the file-system-specific information onto a disk drive; instead it passes control to one of several other programs that actually create the file system.
This is the time to look at the man page for the file system you have selected.
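As an illustrative example (the device name is hypothetical), an ext3 file system could be created on a newly-made partition with:
mkfs -t ext3 /dev/hdb1
which causes mkfs to pass control to the ext3-specific back end that does the actual work of creating the file system.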
Chapter 5. Managing Storage
5.9.6.2.1. Remove the Disk Drive's Partitions From /etc/fstab
Using the text editor of your choice, remove the line(s) corresponding to the disk drive's partition(s) from the /etc/fstab file. You can identify the proper lines by one of the following methods:
• Matching the partition's mount point against the directories in the second column of ...
Chapter 5. Managing Storage
Reading and comparing: done
Writing pattern 0xffffffff: done
Reading and comparing: done
Writing pattern 0x00000000: done
Reading and comparing: done
Keep in mind that badblocks is actually writing four different data patterns to every block on the disk drive.
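The output above comes from a destructive read-write test. A hedged example of such an invocation (the device name is hypothetical, and the -w option destroys all data on the drive, so it must only be used on a disk holding nothing worth keeping):
badblocks -ws /dev/hdb
Here -w selects the write-mode test that cycles through the four patterns, and -s displays progress while the test runs.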
Chapter 5. Managing Storage 5.9.7.1.2. Per-User Space Accounting Disk quotas can perform space accounting on a per-user basis. This means that each user’s space usage is tracked individually. It also means that any limitations on usage (which are discussed in later sections) are also done on a per-user basis.
Chapter 5. Managing Storage If a user continues to use more than the soft limit and the grace period expires, no additional disk usage will be permitted until the user (or group) has reduced their usage to a point below the soft limit.
Chapter 5. Managing Storage
Disk quotas for user matt (uid 500):
  Filesystem    blocks     soft     hard   inodes   soft   hard
  /dev/md3     6618000  6900000  7000000    17397      0      0
In this example, user matt has been given a soft limit of 6.9GB and a hard limit of 7GB. No soft or hard limit on inodes has been set for this user.
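As a sketch of how such limits are typically viewed and set (the user name and file system are taken from the example above; exact behavior depends on the quota package version):
edquota matt
# opens the user's current limits, like those shown above, in a text editor
repquota /home
# summarizes block and inode usage against quota for every user on /home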
Chapter 5. Managing Storage
5.9.8. Creating RAID Arrays
In addition to supporting hardware RAID solutions, Red Hat Enterprise Linux supports software RAID. There are two ways that software RAID arrays can be created:
• While installing Red Hat Enterprise Linux
• After Red Hat Enterprise Linux has been installed
...
Chapter 5. Managing Storage
Some of the more notable sections in this entry are:
• raiddev — Shows the device file name for the RAID array
• raid-level — Defines the RAID level to be used by this RAID array
• ...
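For illustration only, a minimal entry in the RAID configuration file (/etc/raidtab on systems using the raidtools package) for a two-drive RAID 1 array might look something like the following; the device names are hypothetical, and the raidtab documentation should be treated as authoritative for the exact directives:
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    nr-spare-disks        0
    device                /dev/hda2
    raid-disk             0
    device                /dev/hdc2
    raid-disk             1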
Chapter 5. Managing Storage
• The RAID array's RAID level
• The physical partitions that currently make up the array (followed by the partition's array unit number)
• The size of the array
• The number of configured devices versus the number of operative devices in the array
...
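Under Red Hat Enterprise Linux, this status information is exposed through the /proc/mdstat file; a brief, hypothetical example:
cat /proc/mdstat
# md0 : active raid1 hdc2[1] hda2[0]
#       1318848 blocks [2/2] [UU]
The [UU] notation indicates that both members of the mirror are operational; a failed member shows up as an underscore in that field.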
Chapter 5. Managing Storage 5.10.1. Installed Documentation The following resources are installed in the course of a typical Red Hat Enterprise Linux installation, and can help you learn more about the subject matter discussed in this chapter. man page — NFS configuration file format. •...
Chapter 5. Managing Storage Linux Performance Tuning and Capacity Planning by Jason R. Fink and Matthew D. Sherer; Sams • — Contains information on disk, RAID, and NFS performance. Linux Administration Handbook by Evi Nemeth, Garth Snyder, and Trent R. Hein; Prentice Hall — •...
Chapter 6. Managing User Accounts and Resource Access Managing user accounts and groups is an essential part of system administration within an organiza- tion. But to do this effectively, a good system administrator must first understand what user accounts and groups are and how they work. The primary reason for user accounts is to verify the identity of each individual using a computer system.
Chapter 6. Managing User Accounts and Resource Access The size of your organization matters, as it dictates how many users your naming convention must support. For example, a very small organization might be able to have everyone use their first name. For a much larger organization this naming convention would not work.
Chapter 6. Managing User Accounts and Resource Access 6.1.1.2. Dealing with Name Changes If your organization uses a naming convention that is based on each user’s name, it is a fact of life that you will eventually have to deal with name changes. The most common situation is one where a woman marries (or divorces) and changes her name, but this is not the only time a system administrator might be asked to change a username.
Chapter 6. Managing User Accounts and Resource Access depends on how email delivery is implemented on your operating system, but the two most likely symptoms are: The new user never receives any email — it all goes to the original user. •...
Chapter 6. Managing User Accounts and Resource Access 6.1.2.1. Weak Passwords As stated earlier, a weak password fails one of these three tests: It is secret • It is resistant to being guessed • It is resistant to a brute-force attack •...
Chapter 6. Managing User Accounts and Resource Access 6.1.2.1.3. Recognizable Words Many attacks against passwords are based on the fact that people are most comfortable with pass- words they can remember. And for most people, passwords that are memorable are passwords that contain words.
Chapter 6. Managing User Accounts and Resource Access a physically-secure location that requires multiple people to cooperate in order to get access to the paper. Vaults with multiple locks and bank safe deposit boxes are often used. Any organization that explores this method of storing passwords for emergency purposes should be aware that the existence of written passwords adds an element of risk to their systems’...
Chapter 6. Managing User Accounts and Resource Access Note Keep in mind that just using the first letters of each word in a phrase is not sufficient to make a strong password. Always be sure to increase the password’s character set by including mixed-case alphanumeric characters and at least one special character as well.
Chapter 6. Managing User Accounts and Resource Access Therefore, if your organization requires this kind of environment, you should make a point of docu- menting the exact steps required to create and correctly configure a user account. In fact, if there are different types of user accounts, you should document each one (creating a new finance user account, a new operations user account, etc.).
Chapter 6. Managing User Accounts and Resource Access When handling system "lock-downs" in response to terminations, proper timing is important. If the lock-down takes place after the termination process has been completed, there is the potential for unauthorized access by the newly-terminated person. On the other hand, if the lock-down takes place before the termination process has been initiated, it could alert the person to their impending termination, and make the process more difficult for all parties.
Chapter 6. Managing User Accounts and Resource Access that no resources or access privileges are left on the account, and that the account has the resources and privileges appropriate to the person’s new responsibilities. Further complicating the situation is the fact that often there is a transition period where the user per- forms tasks related to both sets of responsibilities.
Chapter 6. Managing User Accounts and Resource Access 6.2.1.2. Determining Group Structure Some of the challenges facing system administrators when creating shared groups are: What groups to create • Who to put in a given group • What type of permissions should these shared resources have •...
Chapter 6. Managing User Accounts and Resource Access The primary advantage of centralizing home directories on a network-attached server is that if a user logs into any machine on the network, they will be able to access the files in their home directory. The disadvantage is that if the network goes down, users across the entire organization will be unable to get to their files.
Chapter 6. Managing User Accounts and Resource Access and execute the file. The next set of symbols define group access (again, with full access), while the last set of symbols define the types of access permitted for all other users. Here, all other users may read and execute the file, but may not modify it in any way.
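As a small illustration (the file name, owner, and group are made up), a long listing and a permission change for such a file might look like:
ls -l /home/juan/report.sh
# -rwxrwxr-x  1 juan  finance  2893 Oct  8 14:20 /home/juan/report.sh
chmod o-rx /home/juan/report.sh
# removes read and execute permission for all other users, leaving
# owner and group access untouched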
Chapter 6. Managing User Accounts and Resource Access There are two instances where the actual numeric value of a UID or GID has any specific meaning. A UID and GID of zero (0) are used for the user, and are treated specially by Red Hat Enterprise root Linux —...
Chapter 6. Managing User Accounts and Resource Access
Here is an example of an /etc/passwd entry:
root:x:0:0:root:/root:/bin/bash
This line shows that the root user has a shadow password, as well as a UID and GID of 0. The root user has /root as a home directory, and uses /bin/bash for a shell.
Chapter 6. Managing User Accounts and Resource Access Date since the account has been disabled — The date (stored as the number of days since the • epoch) since the user account has been disabled. A reserved field — A field that is ignored in Red Hat Enterprise Linux. •...
Chapter 6. Managing User Accounts and Resource Access Encrypted password — The encrypted password for the group. If set, non-members of the group • can join the group by typing the password for that group using the command. If the value newgrp of this field is , then no user is allowed to access the group using the...
Chapter 6. Managing User Accounts and Resource Access
Application — Function
/usr/sbin/chpasswd — Reads in a file consisting of username and password pairs, and updates each user's password accordingly.
chage — Changes the user's password aging policies. (The passwd command can also be used for this purpose.)
... — Changes the user's GECOS information.
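To tie a few of these tools together, here is a hedged example of creating an account and applying an aging policy (the user name and values are arbitrary):
/usr/sbin/useradd -c "Juan Gomez" -m juan
passwd juan
chage -M 90 -W 7 juan
# require a new password every 90 days, warning the user 7 days in advance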
Chapter 6. Managing User Accounts and Resource Access
click on the file's icon (for example, while the icon is displayed in a graphical file manager or on the desktop), and select Properties.
6.4. Additional Resources
This section includes various resources that can be used to learn more about account and resource management, and the Red Hat Enterprise Linux-specific subject matter discussed in this chapter.
Chapter 6. Managing User Accounts and Resource Access http://www.crypticide.org/users/alecm/ — Homepage of the author of one of the most popular • password-cracking programs (Crack). You can download Crack from this page and see how many of your users have weak passwords. http://www.linuxpowered.com/html/editorials/file.html —...
Chapter 6. Managing User Accounts and Resource Access...
Chapter 7. Printers and Printing Printers are an essential resource for creating a hard copy — a physical depiction of data on paper — version of documents and collateral for business, academic, and home use. Printers have become an indispensable peripheral in all levels of business and institutional computing. This chapter discusses the various printers available and compares their uses in different computing environments.
Chapter 7. Printers and Printing JIS B5 — (182mm x 257mm) • legal — (8 1/2" x 14") • If certain departments (such as marketing or design) have specialized needs such as creating posters or banners, there are large-format printers capable of using A3 (297mm x 420mm) or tabloid (11" x 17") paper sizes.
Chapter 7. Printers and Printing Dot-matrix printers vary in print resolution and overall quality with either 9 or 24-pin printheads. The more pins per inch, the higher the print resolution. Most dot-matrix printers have a maximum resolution of around 240 dpi (dots per inch). While this resolution is not as high as those possible in laser or inkjet printers, there is one distinct advantage to dot-matrix (or any form of impact) printing.
Chapter 7. Printers and Printing Inkjets were originally manufactured to print in monochrome (black and white) only. However, the printhead has since been expanded and the nozzles increased to accommodate cyan, magenta, yellow, and black. This combination of colors (called CMYK) allows the printing of images with nearly the same quality as a photo development lab (when using certain types of coated paper.) When coupled with crisp and highly readable text print quality, inkjet printers are a sound all-in-one choice for monochrome or color printing needs.
Chapter 7. Printers and Printing 7.4.1. Color Laser Printers Color laser printers aim to combine the best features of laser and inkjet technology into a multi- purpose printer package. The technology is based on traditional monochrome laser printing, but uses additional components to create color images and documents.
Chapter 7. Printers and Printing
Solid Ink Printers
Used mostly in the packaging and industrial design industries, solid ink printers are prized for their ability to print on a wide variety of paper types. Solid ink printers, as the name implies, use hardened ink sticks that are melted and sprayed through small nozzles on the printhead.
Main Menu Button (on the Panel) => System Settings => Printing, or type the command redhat-config-printer. This command automatically determines whether to run the graphical or text-based version depending on whether the command is executed in the graphical desktop environment or from a text-based console.
Chapter 7. Printers and Printing Networked UNIX (LPD) — a printer attached to a different UNIX system that can be accessed • over a TCP/IP network (for example, a printer attached to another Red Hat Enterprise Linux system running LPD on the network). Networked Windows (SMB) —...
Chapter 7. Printers and Printing 7.9.3. Related Books Network Printing by Matthew Gast and Todd Radermacher; O’Reilly & Associates, Inc. — Com- • prehensive information on using Linux as a print server in heterogeneous environments. The Red Hat Enterprise Linux System Administration Guide; Red Hat, Inc. — Includes a chapter •...
Chapter 8. Planning for Disaster Disaster planning is a subject that is easy for a system administrator to forget — it is not pleasant, and it always seems that there is something else more pressing to do. However, letting disaster planning slide is one of the worst things a system administrator can do.
Chapter 8. Planning for Disaster Before taking the approach of first fixing it yourself, make sure that the hardware in question: Is not still under warranty • Is not under a service/maintenance contract of any kind • If you attempt repairs on hardware that is covered by a warranty and/or service contract, you are likely violating the terms of these agreements and jeopardizing your continued coverage.
Chapter 8. Planning for Disaster 8.1.1.1.3. Spares That Are Not Spares When is a spare not a spare? When it is hardware that is in day-to-day use but is also available to serve as a spare for a higher-priority system should the need arise. This approach has some benefits: Less money dedicated to "non-productive"...
Chapter 8. Planning for Disaster As you might expect, the cost of a contract increases with the hours of coverage. In general, extending the coverage Monday through Friday tends to cost less than adding on Saturday and Sunday coverage. But even here there is a possibility of reducing costs if you are willing to do some of the work. 8.1.1.2.1.1.
Chapter 8. Planning for Disaster times can range from eight hours (which effectively becomes "next day" service for a standard busi- ness hours agreement), to 24 hours. As with every other aspect of a service agreement, even these times are negotiable — for the right price. Note Although it is not a common occurrence, you should be aware that service agreements with re- sponse time clauses can sometimes stretch a manufacturer’s service organization beyond its ability...
Chapter 8. Planning for Disaster 8.1.1.2.4. Available Budget As outlined above, service contracts vary in price according to the nature of the services being pro- vided. Keep in mind that the costs associated with a service contract are a recurring expense; each time the contract is due to expire you must negotiate a new contract and pay again.
Chapter 8. Planning for Disaster 8.1.2.1. Operating System Failures In this type of failure, the operating system is responsible for the disruption in service. Operating system failures come from two areas: Crashes • Hangs • The main thing to keep in mind about operating system failures is that they take out everything that the computer was running at the time of the failure.
Chapter 8. Planning for Disaster 8.1.2.3.1. Documentation Although often overlooked, software documentation can serve as a first-level support tool. Whether online or printed, documentation often contains the information necessary to resolve many issues. 8.1.2.3.2. Self Support Self support relies on the customer using online resources to resolve their own software-related is- sues.
Chapter 8. Planning for Disaster database administrator. In this situation, it can often be cheaper to bring in a specialist from the database vendor to handle the initial deployment (and occasionally later on, as the need arises) than it would be to train the system administrator in a skill that will be seldom used. 8.1.3.
Page 166
Chapter 8. Planning for Disaster Organizations located near the boundaries of a power company might be able to negotiate connec- tions to two different power grids: The one servicing your area • The one from the neighboring power company • The costs involved in running power lines from the neighboring grid are sizable, making this an option only for larger organizations.
Chapter 8. Planning for Disaster Noise The power must not include any RFI (Radio Frequency Interference) or EMI (Electro-Magnetic Interference) noise. Current The power must be supplied at a current rating sufficient to run the data center. Power supplied directly from the power company does not normally meet the standards necessary for a data center.
Chapter 8. Planning for Disaster 8.1.3.2.3.1. Providing Power For the Next Few Seconds Since the majority of outages last only a few seconds, your backup power solution must have two primary characteristics: Very short time to switch to backup power (known as transfer time) •...
Chapter 8. Planning for Disaster Note Strictly speaking, this approach to calculating VA is not entirely correct; however, to get the true VA you would need to know the power factor for each unit, and this information is rarely, if ever, provided. In any case, the VA numbers obtained from this approach reflects worst-case values, leaving a large margin of error for safety.
Chapter 8. Planning for Disaster The point here is that your organization must determine at what point an extended outage will just have to be tolerated. Or if that is not an option, your organization must reconsider its ability to function completely independently of on-site power for extended periods, meaning that very large generators will be needed to power the entire building.
Page 171
Chapter 8. Planning for Disaster 8.1.4.1. End-User Errors The users of a computer can make mistakes that can have serious impact. However, due to their normally unprivileged operating environment, user errors tend to be localized in nature. Because most users interact with a computer exclusively through one or more applications, it is within applications that most end-user errors occur.
Page 172
Chapter 8. Planning for Disaster
• The procedures exist and are correct, but the operator will not (or cannot) follow them.
Depending on the management structure of your organization, you might not be able to do much more than communicate your concerns to the appropriate manager. In any case, making yourself available to do what you can to help resolve the problem is the best approach.
Page 173
Chapter 8. Planning for Disaster 8.1.4.3.1.1. Change Control The common thread of every configuration change is that some sort of a change is being made. The change may be large, or it may be small. But it is still a change and should be treated in a particular way.
Page 174
Chapter 8. Planning for Disaster 8.1.4.3.2. Mistakes Made During Maintenance This type of error can be insidious because there is usually so little planning and tracking done during day-to-day maintenance. System administrators see the results of this kind of error every day, especially from the many users that swear they did not change a thing —...
Chapter 8. Planning for Disaster 8.2. Backups Backups have two major purposes:
• To permit restoration of individual files
• To permit wholesale restoration of entire file systems
The first purpose is the basis for the typical file restoration request: a user accidentally deletes a file and asks that it be restored from the latest backup.
Chapter 8. Planning for Disaster
Application Software
    This data changes whenever applications are installed, upgraded, or removed.
Application Data
    This data changes as frequently as the associated applications are run. Depending on the specific application and your organization, this could mean that changes take place second-by-second or once at the end of each fiscal year.
Chapter 8. Planning for Disaster As you can see, there is no clear-cut method for deciding on a backup system. The only guidance that can be offered is to ask you to consider these points:
• Changing backup software is difficult; once implemented, you will be using the backup software...
Chapter 8. Planning for Disaster The primary advantage gained by using incremental backups is that they run more quickly than full backups. The primary disadvantage is that restoring any given file may mean going through one or more incremental backups until the file is found. When restoring a complete file system, it is necessary to restore the last full backup and every subsequent incremental backup.
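One concrete way to implement such a scheme (a sketch only, not a procedure from this manual; the paths, archive names, and weekly rotation are assumptions) is GNU tar's --listed-incremental option, which keeps a snapshot file recording what has already been backed up:

    # Sunday: full backup -- the snapshot file starts out empty, so everything is saved.
    tar --listed-incremental=/var/lib/backup/home.snar -czf /backups/home-full.tar.gz /home

    # Monday through Saturday: reusing the same snapshot file saves only the files
    # changed since the previous run.
    tar --listed-incremental=/var/lib/backup/home.snar -czf /backups/home-incr-$(date +%a).tar.gz /home

    # Restoring the complete file system means extracting the full backup first,
    # then every subsequent incremental, in order.
    tar --listed-incremental=/dev/null -xzf /backups/home-full.tar.gz -C /
    tar --listed-incremental=/dev/null -xzf /backups/home-incr-Mon.tar.gz -C /

The restore sequence illustrates the disadvantage described above: every incremental made since the last full backup has to be applied, in the order it was written.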
Page 179
Chapter 8. Planning for Disaster 8.2.4.2. Disk In years past, disk drives would never have been used as a backup medium. However, storage prices have dropped to the point where, in some cases, using disk drives for backup storage does make sense. The primary reason for using disk drives as a backup medium would be speed.
Chapter 8. Planning for Disaster By backing up over the network, the disk drives are already off-site, so there is no need for transporting fragile disk drives anywhere. With sufficient network bandwidth, the speed advantage you can get from backing up to disk drives is maintained. However, this approach still does nothing to address the matter of archival storage (though the same "spin off to tape after the backup"...
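A network backup to a disk-based server along these lines can be as simple as an rsync push over ssh (a minimal sketch; the host name and paths are placeholders, not taken from this manual):

    # Mirror /home to a backup server in another building; --delete keeps the
    # copy from accumulating files that were removed locally.
    rsync -a --delete -e ssh /home/ backup.example.com:/srv/backups/$(hostname -s)/home/

The copy that accumulates on the backup server can then be spun off to tape on whatever schedule your archival requirements dictate.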
Chapter 8. Planning for Disaster The important thing to do is to look at the various restoration scenarios detailed throughout this section and determine ways to test your ability to actually carry them out. And keep in mind that the hardest one to test is also the most critical one.
Chapter 8. Planning for Disaster By thinking about this, you have taken the first step of disaster recovery. Disaster recovery is the ability to recover from an event impacting the functioning of your organization’s data center as quickly and completely as possible. The type of disaster may vary, but the end goal is always the same. The steps involved in disaster recovery are numerous and wide-ranging.
Chapter 8. Planning for Disaster 8.3.2. Backup Sites: Cold, Warm, and Hot One of the most important aspects of disaster recovery is to have a location from which the recovery can take place. This location is known as a backup site. In the event of a disaster, a backup site is where your data center will be recreated, and where you will operate from, for the length of the disaster.
Chapter 8. Planning for Disaster single item must be identified. Often organizations work with manufacturers to craft agreements for the speedy delivery of hardware and/or software in the event of a disaster. 8.3.4. Availability of Backups When a disaster is declared, it is necessary to notify your off-site storage facility for two reasons:
• To have the last backups brought to the backup site...
Self support options are available via the many mailing lists hosted by Red Hat (available at https://listman.redhat.com/mailman/listinfo/). These mailing lists take advantage of the combined knowledge of Red Hat’s user community; in addition, many lists are monitored by Red Hat personnel, who contribute as time permits.
Page 186
Chapter 8. Planning for Disaster 8.4.2.1. tar The tar utility is well known among UNIX system administrators. It is the archiving method of choice for sharing ad-hoc bits of source code and files between systems. The implementation included with Red Hat Enterprise Linux is GNU tar, one of the more feature-rich implementations.
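For example (a minimal sketch with made-up paths, not a command taken from this manual), a straightforward full backup of /home and a single-file restore might look like this:

    # Create a gzip-compressed archive of /home; -v lists each file as it is added.
    tar -czvf /backups/home.tar.gz /home

    # Verify that the archive can be read back.
    tar -tzf /backups/home.tar.gz > /dev/null

    # Restore one file into the current directory (tar strips the leading "/",
    # so the file reappears under ./home/...).
    tar -xzvf /backups/home.tar.gz home/alice/report.txt

Because the archive format is widely supported, the same file can be written to tape, copied to another system, or unpacked later on a different machine.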
Page 187
Chapter 8. Planning for Disaster 8.4.2.3. dump/restore: Not Recommended for Mounted File Systems! The dump and restore programs are Linux equivalents to the UNIX programs of the same name. As such, many system administrators with UNIX experience may feel that dump and restore are viable candidates for a good backup program under Red Hat Enterprise Linux.
Chapter 8. Planning for Disaster 8.4.2.4. The Advanced Maryland Automatic Network Disk Archiver (AMANDA) AMANDA is a client/server based backup application produced by the University of Maryland. By having a client/server architecture, a single backup server (normally a fairly powerful system with a great deal of free space on fast disks and configured with the desired backup device) can back up many client systems, which need nothing more than the AMANDA client software.
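As a rough sketch of what that split looks like in practice (the host names, the configuration name DailySet1, and the comp-user-tar dump type are assumptions; real dump types are whatever your amanda.conf defines), the disklist file on the backup server maps client file systems to dump types, and a run is then started with amdump:

    # Example /etc/amanda/DailySet1/disklist entries on the backup server
    # (every name below is a placeholder for this sketch):
    #
    #   web1.example.com   /home      comp-user-tar
    #   web1.example.com   /etc       comp-user-tar
    #   db1.example.com    /var/lib   comp-user-tar
    #
    # Start the backup run for this configuration:
    amdump DailySet1

Each client needs only the AMANDA client software and permission for the server to connect; scheduling and media handling happen entirely on the server.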
files
8.5.2. Useful Websites
• http://www.redhat.com/apps/support/ — The Red Hat support homepage provides easy access to various resources related to the support of Red Hat Enterprise Linux.
• http://www.disasterplan.com/ — An interesting page with links to many sites related to disaster...
Colophon The manuals are written in DocBook SGML v4.1 format. The HTML and PDF formats are produced using custom DSSSL stylesheets and custom jade wrapper scripts. The DocBook SGML files are written in Emacs with the help of PSGML mode. Garrett LeSage created the admonition graphics (note, tip, important, caution, and warning).