
For information on PVFS v2, our next-generation parallel file system, please visit the new PVFS Web Site.


In recent years the disparity between I/O performance and processor performance has led to I/O bottlenecks in many applications, especially those using large data sets. A popular approach for alleviating this kind of bottleneck is the use of parallel file systems, which can be found on many commercial parallel computers.

The goal of the Parallel Virtual File System (PVFS) Project is to explore the design, implementation, and uses of parallel I/O. PVFS serves both as a platform for parallel I/O research and as a production file system for the cluster computing community. PVFS is currently targeted at clusters of workstations, also known as Beowulfs.

The PVFS project is conducted jointly by The Parallel Architecture Research Laboratory at Clemson University and The Mathematics and Computer Science Division at Argonne National Laboratory. Additional funding for the PVFS project comes from NASA Goddard Space Flight Center Code 930 and The National Computational Science Alliance through the National Science Foundation's Partnerships for Advanced Computational Infrastructure.

PVFS provides the following features:

  • Compatibility with existing binaries
  • Ease of installation
  • User-controlled striping of files across nodes
  • Multiple interfaces, including an MPI-IO interface via ROMIO
  • Use of commodity network and storage hardware

PVFS supports the UNIX I/O interface, so existing UNIX I/O programs can use PVFS files without recompiling. The familiar UNIX file tools (ls, cp, rm, etc.) all operate on PVFS files and directories as well. This is accomplished via a Linux kernel module, which is provided as a separate package.
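Because only the standard UNIX calls are involved, an ordinary file-copy routine like the sketch below would behave the same whether its paths point at a local disk or a mounted PVFS volume. This is a minimal illustration, not code from PVFS itself, and the `/mnt/pvfs` mount point mentioned in the comments is hypothetical.

```python
import os

def copy_file(src, dst, bufsize=65536):
    """Byte-for-byte copy using only plain UNIX open/read/write calls.

    On a system running the pvfs-kernel module, src or dst could just as
    well be a path under a PVFS mount point (e.g. a hypothetical
    /mnt/pvfs) -- no PVFS-specific calls or recompilation are needed.
    """
    fin = os.open(src, os.O_RDONLY)
    fout = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        while True:
            buf = os.read(fin, bufsize)   # read one buffer's worth
            if not buf:                   # empty read means end of file
                break
            os.write(fout, buf)
    finally:
        os.close(fin)
        os.close(fout)
```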

PVFS is easy to install. The Quick Start page describes how to set up a simple installation. Scripts and test applications are included to help with configuration, testing for correct operation, and performance evaluation.

PVFS stripes file data across multiple disks on different nodes in a cluster. By spreading out file data in this manner, larger files can be created, potential bandwidth is increased, and network bottlenecks are minimized. A 64-bit interface is implemented as well, allowing large files (greater than 2 GB) to be created and accessed.
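The striping idea can be illustrated with a little arithmetic. The sketch below assumes simple round-robin striping starting at node 0, with illustrative parameters (64 KB stripes over four I/O nodes); in PVFS the stripe size, number of I/O nodes, and starting node are user-configurable, and this is not PVFS's own layout code.

```python
def locate(offset, stripe_size=65536, nnodes=4):
    """Map a logical file offset to (I/O node index, offset on that node).

    Assumes round-robin striping starting at node 0; the default
    parameters are illustrative, not PVFS defaults.
    """
    stripe = offset // stripe_size     # which stripe holds this byte
    node = stripe % nnodes             # stripes are dealt out round-robin
    # Each node stores every nnodes-th stripe contiguously in its own file.
    local = (stripe // nnodes) * stripe_size + offset % stripe_size
    return node, local
```

For example, with these parameters byte 65537 of the logical file lands on node 1, one byte into that node's first local stripe.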

Multiple user interfaces are available. These include:

  • MPI-IO support through ROMIO (see the ROMIO homepage for details)
  • Traditional Linux file system access through the pvfs-kernel package
  • The native PVFS library interface

We would like to thank Scyld Computing for providing the initial funding for the pvfs-kernel implementation.


October 4, 2004
PVFS 1.6.3 Released!

See the release announcement on the pvfs-users mailing list for details. This release includes many bug fixes and a few performance enhancements. Please take note of the migration utilities if you plan to upgrade. The kernel driver has also been updated to support newer Red Hat 2.4.x kernels.


All bugs should be reported to the PVFS mailing lists.

We would like to thank everyone who has helped us with testing, patches, funding, and feedback, all of which have helped to make this project possible.


Contact: the PVFS mailing lists.