CXFS I/O Performance Testing Results

John Clyne and Craig Ruff
5/8/00

This document presents I/O performance results for SGI's CXFS shared file system. The tests were conducted using evaluation equipment provided by both SGI and Ciprico.

Configuration

Two SGI systems (an Onyx2 and an O200) were attached via a single channel to a Ciprico 8+1 FibreSTORE RAID Array 1 through an 8-port FC switch. The O200 was configured as the metadata server. Equipment specifications are as follows:

Host 1 (Magic)

    SGI Onyx2 (Origin2k)
    8x250 MHz R10k
    3 GB RAM
    XIO single-channel optical FC adapter
    IRIX 6.5.7f

Host 2 (Redcloud)

    SGI Origin200
    2x180 MHz R10k
    128 MB RAM
    PCI single-channel optical FC adapter
    IRIX 6.5.7f

Storage

    Ciprico FibreSTORE RAID Array 1
    8+1 50 GB, 7200 RPM Seagate drives

Switch

    SGI (Brocade) FC switch
    8 ports

Experiments

The tests were run by Perl scripts driving the xdd program (version
5.1h) from the University of Minnesota.  Both systems were idle except
for the test suite and normally started daemons.  When multiple I/O
streams were generated, they were all of the same type (i.e., both
read or both write); we did not have time to test with multiple
processes doing mixed read/write I/O.  Files of size 256 MB were
written or read.

Note that the CXFS tests were run only with an underlying XFS data
block size of 4 KB, due to the lack of proper cluster filesystem mount
and unmount commands.  However, previous XFS tests showed that there
was little variation in performance with different XFS data block
sizes.  Since we did not have enough time to modify our test scripts
and run more CXFS tests, we don't know if the same holds true for
CXFS, especially the non-metadata server clients.

The tests of buffered I/O are hampered by the inability to flush
the buffer caches on command.  To work around this, an unmount
and mount of the file system was performed between the creation
of the files and the reads.  The large main memory on magic is able to
hold the written files in the buffer cache, and hence the I/O
rates reported for buffered I/O on magic almost certainly reflect
memory-to-memory copy speeds.  We did not investigate the effect of
the POSIX synchronized I/O modes on these timings.
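The POSIX synchronized-write mode mentioned above (which we did not
test) can be requested at open time.  A minimal sketch, assuming a
POSIX system where Python's os module exposes O_SYNC; the function
name and file path are illustrative:

```python
import os

def write_o_sync(path, data):
    """Write data with O_SYNC set: each write() returns only after the
    data has reached stable storage, so the timing is not distorted by
    the buffer cache the way ordinary buffered writes are."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_SYNC)
    try:
        written = 0
        while written < len(data):
            written += os.write(fd, data[written:])
    finally:
        os.close(fd)
```

Note that O_SYNC affects only writes; the unmount/mount step between
writing and reading would still be needed to empty the cache on the
read side.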

Results

In all of the figures below, data transfer rates are plotted along the Y axis in Megabytes (1024^2 bytes) per second vs. I/O request size in Kilobytes (1024 bytes).
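For reference, the unit conventions above amount to the following
conversion (a small illustrative helper, not part of the original
test scripts):

```python
MB = 1024 ** 2  # "Megabyte" as plotted on the Y axis
KB = 1024       # "Kilobyte" as plotted on the X axis

def rate_mb_per_sec(nbytes, seconds):
    """Transfer rate in the plots' Y-axis units (MB/s)."""
    return (nbytes / MB) / seconds

# A 256 MB file moved in 2.0 seconds plots at 128.0 MB/s.
```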

Figure 1: Read performance of CXFS using buffered I/O with 1 and 2 input streams.

Figure 2: Read performance of CXFS using direct I/O with 1 and 2 input streams.

Figure 3: Write performance of CXFS using buffered I/O with 1 and 2 input streams.

Figure 4: Write performance of CXFS using direct I/O with 1 and 2 input streams.

Figure 5: Read performance of CXFS comparing buffered and direct I/O (1 stream).

Figure 6: Read performance of CXFS comparing buffered and direct I/O (2 streams).

Figure 7: Write performance of CXFS comparing buffered and direct I/O (1 stream).

Figure 8: Write performance of CXFS comparing buffered and direct I/O (2 streams).

Figure 9: Read performance of CXFS comparing buffered and direct I/O (1 stream).

Figure 10: Read performance of CXFS comparing buffered and direct I/O (2 streams).

Figure 11: Write performance of CXFS comparing buffered and direct I/O (1 stream).

Figure 12: Write performance of CXFS comparing buffered and direct I/O (2 streams).

Figure 13: Read performance comparing CXFS with XFS using buffered I/O (1 stream).

Figure 14: Read performance comparing CXFS with XFS using direct I/O (1 stream).

Figure 15: Read performance comparing CXFS with XFS using buffered I/O (2 streams).

Figure 16: Read performance comparing CXFS with XFS using direct I/O (2 streams).

Figure 17: Write performance comparing CXFS with XFS using buffered I/O (1 stream).

Figure 18: Write performance comparing CXFS with XFS using direct I/O (1 stream).

Figure 19: Write performance comparing CXFS with XFS using buffered I/O (2 streams).

Figure 20: Write performance comparing CXFS with XFS using direct I/O (2 streams).

Figure 1: Read performance of CXFS using buffered I/O with 1 and 2 input streams. Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 2: Read performance of CXFS using direct I/O with 1 and 2 input streams. Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 3: Write performance of CXFS using buffered I/O with 1 and 2 input streams. Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 4: Write performance of CXFS using direct I/O with 1 and 2 input streams. Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 5: Read performance of CXFS comparing buffered and direct I/O (1 stream). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 6: Read performance of CXFS comparing buffered and direct I/O (2 streams). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 7: Write performance of CXFS comparing buffered and direct I/O (1 stream). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 8: Write performance of CXFS comparing buffered and direct I/O (2 streams). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 9: Read performance of CXFS comparing buffered and direct I/O (1 stream). Top plot shows buffered I/O. Bottom plot shows direct I/O.

Figure 10: Read performance of CXFS comparing buffered and direct I/O (2 streams). Top plot shows buffered I/O. Bottom plot shows direct I/O.

Figure 11: Write performance of CXFS comparing buffered and direct I/O (1 stream). Top plot shows buffered I/O. Bottom plot shows direct I/O.

Figure 12: Write performance of CXFS comparing buffered and direct I/O (2 streams). Top plot shows buffered I/O. Bottom plot shows direct I/O.

Figure 13: Read performance comparing CXFS with XFS using buffered I/O (1 stream). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 14: Read performance comparing CXFS with XFS using direct I/O (1 stream). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 15: Read performance comparing CXFS with XFS using buffered I/O (2 streams). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 16: Read performance comparing CXFS with XFS using direct I/O (2 streams). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 17: Write performance comparing CXFS with XFS using buffered I/O (1 stream). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 18: Write performance comparing CXFS with XFS using direct I/O (1 stream). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 19: Write performance comparing CXFS with XFS using buffered I/O (2 streams). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

Figure 20: Write performance comparing CXFS with XFS using direct I/O (2 streams). Top plot shows performance on magic. Bottom plot shows performance on redcloud.

This page maintained by John Clyne (clyne@ncar.ucar.edu)

$Date: 2000/05/08 22:17:56 $, $Revision: 1.1 $