In the context of computer virtual memory, a page, memory page, or virtual page is a fixed-length block of memory that is contiguous in both physical and virtual memory addressing. A page is usually the smallest unit of data for transfer between main memory and any auxiliary store, such as a hard disk drive.
The virtual memory abstraction allows a page that does not currently reside in main memory to be addressed and used. If a program tries to access a location in such a page, it generates an exception called a page fault. The hardware or operating system is notified and loads the required page from the auxiliary store automatically. The program addressing the memory has no knowledge of the page fault or of the process that follows it. As a consequence, a program can be allowed to address more RAM than actually exists in the computer.
A transfer of pages between main memory and an auxiliary store, such as a hard disk drive, is referred to as paging or swapping.
Page size trade-off
Page size is usually determined by the processor architecture. Traditionally, pages in a system had uniform size, such as 4096 bytes. However, processor designs often allow two or more, sometimes simultaneous, page sizes because each size involves trade-offs. Several factors can enter into choosing the best page size.
Page size versus page table size
A system with a smaller page size uses more pages, requiring a page table that occupies more space. For example, if a 2^32-byte virtual address space is mapped to 4 KB (2^12-byte) pages, the number of virtual pages is 2^20 (since 32 - 12 = 20). However, if the page size is increased to 32 KB (2^15 bytes), only 2^17 pages are required.
Page size versus TLB usage
Processors maintain a translation lookaside buffer (TLB), a cache of virtual-to-physical address mappings that is checked on every memory access. The TLB is typically of limited size, and when it cannot satisfy a given request (a TLB miss), the page tables must be walked (either in hardware or in software, depending on the architecture) to find the correct mapping, a time-consuming process. Larger page sizes mean that a TLB of the same size can keep track of larger amounts of memory, avoiding costly TLB misses.
Internal fragmentation of pages
Rarely do processes require the use of an exact number of pages. As a result, the last page will likely only be partially full, wasting some amount of memory. Larger page sizes clearly increase the potential for wasted memory this way, as more potentially unused portions of memory are loaded into main memory. Smaller page sizes ensure a closer match to the actual amount of memory required in an allocation.
As an example, assume the page size is 1 MB. If a process allocates 1025 KB, two pages must be used, resulting in 1023 KB of unused space.
Page size versus disk access
When transferring from disk, much of the delay is caused by seek time. Because of this, large sequential transfers are more efficient than several smaller ones; transferring a larger page from disk to memory therefore does not take much more time than transferring a smaller one.
Determining the page size in a program
Most operating systems allow programs to determine the page size at run time. This allows programs to use memory more efficiently by aligning allocations to this size and reducing overall internal fragmentation of pages.
UNIX and POSIX-based Operating Systems
UNIX and POSIX-based systems use the system function sysconf(), as illustrated in the following example written in the C programming language.
Windows-based operating systems
Win32-based operating systems, such as Windows 9x, Windows NT, and ReactOS, use the system function GetSystemInfo() from kernel32.dll.
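A sketch of that call (Windows-only, so it compiles only against the Win32 headers): GetSystemInfo() fills a SYSTEM_INFO structure whose dwPageSize field holds the page size in bytes.

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si); /* fills in page size, processor count, etc. */
    printf("page size: %lu bytes\n", (unsigned long)si.dwPageSize);
    return 0;
}
```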
Intel x86 supports 4 MB pages (2 MB pages when using PAE) in addition to its standard 4 KB pages, and other architectures often have similar features. IA-64 supports as many as eight different page sizes, from 4 KB up to 256 MB. This support for huge pages (or, in Microsoft Windows terminology, large pages) allows "the best of both worlds": it reduces pressure on the TLB (sometimes increasing speed by as much as 15%, depending on the application and the allocation size) for large allocations while still keeping memory usage at a reasonable level for small allocations.
Huge pages, despite being available in most contemporary personal computers, are not in common use except on large servers and computational clusters. Commonly, their use requires elevated privileges, cooperation from the application making the large allocation (usually setting a flag to ask the operating system for huge pages), or manual administrator configuration; operating systems commonly, sometimes by design, cannot page them out to disk.