IOPS (input/output operations per second) is a standard performance benchmark reported by tools such as IOMeter (originally developed by Intel), IOzone (iozone.org), and FIO (git.kernel.dk), and is primarily used with servers to determine their best configuration settings. The number of IOPS achievable in any server configuration varies greatly with the parameters the tester chooses, including the balance of read and write operations, the randomness of the access pattern, the number of worker threads and the queue depth, and the data block sizes. Because of this, IOPS figures published by drive and SAN vendors are often misleading and generally represent best-case scenarios.
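To illustrate how much those parameters influence the result, the following is a minimal, single-threaded Python sketch of an IOPS measurement; it is not a substitute for IOMeter or FIO. The test file name, block size, read fraction, and run time are assumptions, and without O_DIRECT the operating system's page cache will inflate the numbers.

```python
import os
import random
import time

# Assumed test parameters -- in FIO or IOMeter these are the knobs described
# above: block size, read/write mix, randomness of access, and run time.
TEST_FILE = "testfile.bin"   # hypothetical pre-created test file
BLOCK_SIZE = 4096            # 4 KiB blocks, a common size for IOPS testing
READ_FRACTION = 0.7          # 70% reads / 30% writes
DURATION_S = 10              # measurement window in seconds

def measure_iops():
    size = os.path.getsize(TEST_FILE)
    blocks = size // BLOCK_SIZE
    buf = os.urandom(BLOCK_SIZE)
    # Note: without O_DIRECT, reads may be served from the page cache,
    # so the reported figure is an upper bound rather than raw drive IOPS.
    fd = os.open(TEST_FILE, os.O_RDWR)
    ops = 0
    deadline = time.monotonic() + DURATION_S
    try:
        while time.monotonic() < deadline:
            # Random access pattern: pick a random block-aligned offset.
            offset = random.randrange(blocks) * BLOCK_SIZE
            if random.random() < READ_FRACTION:
                os.pread(fd, BLOCK_SIZE, offset)
            else:
                os.pwrite(fd, buf, offset)
            ops += 1
    finally:
        os.close(fd)
    return ops / DURATION_S

if __name__ == "__main__":
    print(f"~{measure_iops():.0f} IOPS at {BLOCK_SIZE} B blocks, "
          f"{READ_FRACTION:.0%} reads")
```

Changing any one of the assumed parameters, for example using larger blocks or a purely sequential pattern, can shift the measured IOPS by an order of magnitude, which is why published figures are difficult to compare without the full test configuration.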
The most commonly measured or specified performance characteristics are random read IOPS, random write IOPS, sequential read IOPS, and sequential write IOPS.
Some hard drives improve in performance as the number of outstanding I/Os increases. This is usually the result of more advanced controller logic on the drive performing command queuing and reordering, commonly called Tagged Command Queuing (TCQ) or Native Command Queuing (NCQ). Most commodity SATA drives either cannot do this or implement it so poorly that no performance benefit is seen. Better SATA drives, such as the Western Digital Raptor, improve only modestly, usually by no more than 50%. High-end SCSI drives, more commonly found in servers, generally show much greater improvement, with the Seagate Savvio exceeding 400 IOPS, more than doubling its performance.
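The effect of outstanding I/Os can be demonstrated with a rough sketch like the one below, which approximates queue depth by running several concurrent reader threads against a pre-created test file (the file name, block size, durations, and depth values are assumptions). On a drive with working NCQ or TCQ the measured IOPS should rise noticeably as the thread count grows; on a drive without it, the curve stays nearly flat.

```python
import os
import random
import threading
import time

# Assumed parameters: a hypothetical pre-created test file and a sweep of
# "queue depths" approximated by the number of concurrent reader threads.
TEST_FILE = "testfile.bin"
BLOCK_SIZE = 4096
DURATION_S = 5

def worker(fd, blocks, stop, counter, lock):
    ops = 0
    while not stop.is_set():
        # Each thread issues independent random reads, so several I/Os are
        # outstanding at once and the drive is free to reorder them.
        offset = random.randrange(blocks) * BLOCK_SIZE
        os.pread(fd, BLOCK_SIZE, offset)
        ops += 1
    with lock:
        counter[0] += ops

def iops_at_depth(depth):
    size = os.path.getsize(TEST_FILE)
    blocks = size // BLOCK_SIZE
    fd = os.open(TEST_FILE, os.O_RDONLY)
    stop, lock, counter = threading.Event(), threading.Lock(), [0]
    threads = [threading.Thread(target=worker,
                                args=(fd, blocks, stop, counter, lock))
               for _ in range(depth)]
    for t in threads:
        t.start()
    time.sleep(DURATION_S)
    stop.set()
    for t in threads:
        t.join()
    os.close(fd)
    return counter[0] / DURATION_S

if __name__ == "__main__":
    for depth in (1, 4, 16, 32):
        print(f"depth {depth:>2}: ~{iops_at_depth(depth):.0f} IOPS")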