Diagnosing Performance Bottlenecks


Contents

About this document
    Related information
Memory bottlenecks
CPU bottlenecks
I/O bottlenecks
SMP performance tuning
Tuning methodology
Additional information

About this document

This document describes how to check for resource bottlenecks and identify the processes that tax them. Resources on a system include memory, CPU, and Input/Output (I/O). This document covers bottlenecks across an entire system; it does not address the bottlenecks of a particular application or general network problems. The following commands are described: vmstat, svmon, ps, tprof, netpmon, iostat, and filemon, as well as the SMP-related tools cpu_state, sar, and pstat.

NOTE: PAIDE/6000 must be installed in order to use tprof, svmon, netpmon, and filemon. To check whether it is installed, run the following command:

        lslpp -l perfagent.tools

If you are at AIX Version 4.3.0 or higher, PAIDE/6000 can be found on the AIX Base Operating System media. Otherwise, to order PAIDE/6000, contact your local IBM representative.

This document also makes reference to the vmtune and schedtune commands. These commands and their source are found in the /usr/samples/kernel directory. They are installed with the bos.adt.samples fileset.

Related information

Consult Line Performance Analysis - The AIX Support Family offers a system analysis with tuning recommendations. For more information call the IBM AIX Support Center.

Performance Tuning Guide (SC23-2365) - This IBM publication covers performance monitoring and tuning of AIX systems. Contact your local IBM representative to order.

For detailed system usage on a per-process basis, a free utility called UTLD can be obtained by anonymous ftp from ftp.software.ibm.com in the /aix/tools/perftools/utld directory. For more information, see the README file in /usr/lpp/utld after installation of the utld.obj fileset.


Memory bottlenecks

The following section describes how to identify memory bottlenecks using the following commands: vmstat, svmon, and ps.

  1. vmstat

    Run the following command:

     
       vmstat 1 
    

    NOTE: System may slow down when pi and po are consistently non-zero.

    pi
    number of pages per second paged in from paging space
    po
    number of pages per second paged out to paging space

    When processes on the system require more pages of memory than are available in RAM, working pages may be paged out to paging space and then paged in when they are needed again. Accessing a page from paging space is considerably slower than accessing a page directly from RAM. For this reason, constant paging activity can cause system performance degradation.

    NOTE: Memory is over-committed when the fr:sr ratio is high.

    fr
    number of pages that must be freed to replenish the free list or to accommodate an active process
    sr
    number of pages that must be examined in order to free fr number of pages

    An fr:sr ratio of 1:4 means that for every page freed, four pages must be examined. It is difficult to determine a memory constraint based on this ratio alone, and what constitutes a high ratio is workload- and application-dependent.

    The system considers itself to be thrashing when po*SYS > fr, where SYS is a system parameter that can be viewed with the schedtune command. The default value is 0 if a system has 128MB or more of memory, which means that memory load control is disabled; otherwise, the default is 6. Thrashing occurs when the system spends more time paging than performing work. When this occurs, selected processes may be suspended temporarily, and the system may be noticeably slower.
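
    As a hypothetical illustration, assume the default SYS value of 6 on a system with less than 128MB of memory: if an interval of vmstat output shows po = 10 and fr = 50, then po*SYS = 60, which is greater than fr, so the system would consider itself to be thrashing. The current memory load control settings (including SYS) can be displayed by running schedtune with no arguments; the exact output format varies by AIX level:

        # /usr/samples/kernel/schedtune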

  2. svmon

    As root, run the following command:

        # svmon -Pau 10 | more 
    

    Sample output:

    Pid            Command        Inuse        Pin      Pgspace 
    13794             dtwm         1603          1          449 
    Pid:  13794 
    Command:  dtwm 
    Segid Type Description        Inuse Pin Pgspace Address Range 
    b23 pers /dev/hd2:24849           2   0       0 0..1 
    14a5 pers /dev/hd2:24842          0   0       0 0..2 
    6179 work lib data              131   0      98 0..891 
    280a work shared library text  1101   0      10 0..65535 
    181 work private                287   1     341 0..310:65277..65535 
    57d5 pers code,/dev/hd2:61722    82   0       0 0..135 
    

    This command lists the top ten memory-consuming processes and gives a report about each one. In each process report, look for the segment where Type = work and Description = private. Check how many 4K (4096-byte) pages are used under the Pgspace column. This is the minimum number of working pages this segment is using in all of virtual memory. A Pgspace number that grows but never decreases may indicate a memory leak, which occurs when an application fails to deallocate memory it no longer needs.

        341 * 4096 =  1,396,736 or 1.4MB of virtual memory 
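
    To watch for this kind of growth, one approach (a minimal sketch; 13794 is the pid from the sample output above, the 60-second interval is arbitrary, and it assumes your level of svmon accepts a process ID after -P) is to sample the process report periodically and compare the Pgspace value of the work/private segment over time:

        # while true; do date; svmon -P 13794 | grep private; sleep 60; done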
    
  3. ps

    Run the following command:

        ps gv | head -n 1; ps gv | egrep -v "RSS" | sort +6b -7 -n -r 
    
    size
    amount of memory in KB allocated from page space for the memory segment of Type = work and Description = private for the process as would be indicated by svmon
    RSS
    amount of memory in KB currently in use (in RAM) for the memory segment of Type = work and Description = private plus the memory segment(s) of Type = pers and Description = code for the process as would be indicated by svmon
    trs
    amount of memory, in KB, currently in use (in RAM) for the memory segment(s) of Type = pers and Description = code for the process as would be indicated by svmon
    %mem
    RSS value divided by the total amount of system RAM in KB multiplied by 100
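
    As a hypothetical illustration of the %mem calculation: a process with an RSS of 2048 KB on a system with 131072 KB (128MB) of RAM would show a %mem of roughly (2048 / 131072) * 100, or about 1.6. The total amount of real memory in KB can be checked as follows (assuming the sys0 attribute is named realmem at your AIX level):

        lsattr -El sys0 -a realmem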

CPU bottlenecks

The following section describes how to identify CPU bottlenecks using the following commands: vmstat, tprof, netpmon, and ps.

  1. vmstat

    Run the following command:

        vmstat 1 
    

    NOTE: System may slow down when processes wait on the run queue.

    id
    percentage of time the CPU is idle
    r
    number of threads on the run queue

    If the id value is consistently 0%, the CPU is being used 100% of the time.

    Next, look at the r column to see how many threads are placed on the run queue each second. The more threads that are forced to wait on the run queue, the more system performance will suffer.

  2. tprof

    To find out how much CPU time a process is using, run the following command as root:

        # tprof -x sleep 30 
    

    This returns in 30 seconds and creates a file in the current directory called __prof.all.

    In 30 seconds, the CPU is sampled approximately 3000 times. The Total column is the number of times a process was found running on the CPU. If one process has 1500 in the Total column, that process has taken 1500/3000, or half, of the CPU time. The tprof output shows exactly which processes the CPU has been running. The wait process runs when no other processes require the CPU and accounts for the amount of idle time on the system.
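
    To inspect the report, page through __prof.all and, for example, pull out the lines that mention the wait process (a hedged sketch, since the report layout varies by AIX level):

        # more __prof.all
        # grep -i wait __prof.all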

  3. netpmon

    To find out how much CPU time a process is using, and how much of that time is spent executing network-related code, run the following command as root:

        # netpmon -o /tmp/netpmon.out -O cpu -v;sleep 30;trcstop 
    

    This returns in 30 seconds and creates a file in the /tmp directory called netpmon.out.

    The CPUTime column indicates the total amount of CPU time used by the process, %CPU is the percentage of CPU usage for the process, and Network CPU% is the percentage of total time that the process spent executing network-related code.

  4. ps

    Run the following commands:

        ps -ef | head -n 1 
        ps -ef | egrep -v "UID|0:00|\ 0\ " | sort +3b -4 -n -r 
    

    Check the C column for the penalty a process has accumulated for recent CPU usage. The maximum value for this column is 120.

        ps -e | head -n 1 
        ps -e | egrep -v "TIME|0:" | sort +2b -3 -n -r 
    

    Check the TIME column for the accumulated CPU time of each process.

        ps gu | head -n 1 
        ps gu | egrep -v "CPU|kproc" | sort +2b -3 -n -r 
    

    Check the %CPU column for process CPU dependency. The percent CPU is the total CPU time divided by the total elapsed time since the process was started.


I/O bottlenecks

This section describes how to identify I/O bottlenecks using the following commands: iostat and filemon.

  1. iostat

    NOTE: High iowait will cause slower performance.

    Run the following command:

        iostat 5 
    
    %iowait
    percentage of time the CPU is idle while waiting on local I/O
    %idle
    percentage of time the CPU is idle while not waiting on local I/O

    The time is attributed to iowait when no processes are ready for the CPU but at least one process is waiting on I/O. A high percentage of iowait time indicates that disk I/O is a major contributor to the delay in execution of processes. In general, if system slowness occurs and %iowait is 20% to 25% or higher, investigation of a disk bottleneck is in order.

    %tm_act
    percentage of time the disk is busy

    NOTE: High tm_act percentage can indicate a disk bottleneck.

    When %tm_act (the percentage of time the disk is active) is high, noticeable performance degradation can occur. On some systems, a %tm_act of 35% or higher for one disk can cause noticeably slow performance.
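
    One way to watch for busy disks (a minimal sketch, assuming physical disks are named hdiskN and that %tm_act is the second column of the disk lines, which can vary by AIX level) is to filter the iostat output for disks at or above the 35% threshold mentioned above:

        iostat 5 3 | awk '$1 ~ /^hdisk/ && $2 + 0 >= 35 { print $1, $2 }'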


  2. filemon

    To find out what files, logical volumes, and disks are most active, run the following command as root:

        # filemon -u -O all -o /tmp/fmon.out; sleep 30;trcstop 
    

    In 30 seconds, a report is created in /tmp/fmon.out.
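
    The report groups the most active files, logical volumes, and physical volumes into separate sections. To locate those sections (a hedged sketch; the exact section headings vary by AIX level):

        # grep -i "most active" /tmp/fmon.out
        # more /tmp/fmon.out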


SMP performance tuning

Performance tools

  1. SMP only (some SMPs do not support this command)
     
    cpu_state -l
    This displays the current state of each processor (enabled, disabled, or unavailable).

  2. AIX tools that have been adapted to display more meaningful information on SMP systems
    ps -m -o THREAD
    The BND column will indicate the processor number to which a process/thread is bound, if it is bound.
     
    pstat -A
    The CPUID column will indicate the processor number to which a process/thread is bound.
     
    sar -P ALL
    Displays the load on each processor.
     
    vmstat
    Displays kthr (kernel threads).
     
    netpmon -t
    Prints CPU reports on a per-thread basis.
     
  3. Other AIX tools that did not change

    filemon
    iostat
    svmon
    tprof


Tuning methodology

  1. Check availability of processors.
        cpu_state -l 
    
  2. Check balance between processors.
        sar -P ALL 
    
  3. Identify bound processes/threads.
        ps -m -o THREAD 
        pstat -A 
    
  4. Unbind any bound processes/threads that can and should be unbound (see the bindprocessor example following this list).

  5. Continue as with uniprocessor system.
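
For step 4, the bindprocessor command can be used to list the available processors and to unbind all threads of a bound process (a brief sketch; the process ID 13794 is hypothetical):

        bindprocessor -q 
        bindprocessor -u 13794 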

Additional information

  1. KBUFFERS vs. VMM

    The Block I/O Buffer Cache (KBUFFERS) is used only when directly accessing a block device, such as /dev/hdisk0. Normal access through the Journaled File System (JFS) is managed by the Virtual Memory Manager (VMM) and does not use the traditional method for caching the data blocks. I/O operations to raw logical volumes or physical volumes do not use the Block I/O Buffer Cache.

  2. I/O Pacing

    Users of AIX occasionally encounter long interactive-application response times when other applications in the system are running large writes to disk. Because most writes are asynchronous, FIFO I/O queues of several megabytes may build up and take several seconds to complete. The performance of an interactive process is severely impacted if every disk read spends several seconds working through the queue. I/O pacing limits the number of outstanding I/O requests against a file. When a process tries to write to a file whose queue is at the high-water mark (should be a multiple of 4 plus 1), the process is suspended until enough I/Os have completed to bring the queue for that file to the low-water mark. The delta between the high and low water marks should be kept small.

    To configure I/O pacing on the system via SMIT, enter the following at the command line as root:

        # smitty chgsys 
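
    The high- and low-water marks can also be set directly with chdev (a hedged sketch using the commonly cited example values of 33 and 24; appropriate values depend on the workload):

        # chdev -l sys0 -a maxpout=33 -a minpout=24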
    
  3. Async I/O

    Async I/O is performed in the background and does not block the user process. This improves performance because I/O operations and application processing can run concurrently. However, applications must be specifically written to take advantage of Async I/O, which is managed by the aio daemons running on the system.

    To configure Async I/O for the system via SMIT, enter the following at the command line as root:

        # smitty aio 
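
    The current Async I/O settings, such as the minimum and maximum number of aio servers, can also be checked from the command line (a hedged sketch; the device name aio0 and its attributes can vary by AIX level):

        # lsattr -El aio0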
    



[ Doc Ref: 90605198014824     Publish Date: Feb. 09, 2001     4FAX Ref: 2445 ]