Certain application-level memory-mapped file operations also perform better than their physical file counterparts. Applications can access and update data in the file directly and in place, as opposed to seeking from the start of the file or rewriting the entire edited contents to a temporary location. Since the memory-mapped file is handled internally in pages, linear file access (as seen, for example, in flat file data storage or configuration files) requires disk access only when a new page boundary is crossed, and can write larger sections of the file to disk in a single operation.
A possible benefit of memory-mapped files is "lazy loading", which makes it possible to use small amounts of RAM even for a very large file. Trying to load the entire contents of a file that is significantly larger than the amount of memory available can cause severe thrashing as the operating system reads from disk into memory and simultaneously pages from memory back to disk. Memory mapping not only can bypass the page file completely, but also allows the system to load only the smaller, page-sized sections as data is edited, similarly to the demand-paging scheme used for programs.
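The in-place editing and lazy loading described above can be illustrated with a minimal sketch in Python, whose standard-library mmap module wraps the underlying platform facility. The file path and sizes here are arbitrary illustrative choices, not part of any particular API:

```python
import mmap
import os
import tempfile

# Build a demonstration file (a stand-in for a much larger one).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00" * 1_000_000)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)   # map the whole file; pages are read on demand
    mm[0:6] = b"header"             # in-place edit: no temp copy, no full rewrite
    last_byte = mm[999_999]         # touching the tail faults in only that page
    mm.flush()
    mm.close()

with open(path, "rb") as f:
    first = f.read(6)               # the in-place edit is visible in the file
os.unlink(path)
```

Note that only the pages actually touched (here, the first and last) need to be resident in memory, regardless of the file's total size.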
The memory mapping process is handled by the virtual memory manager, which is the same subsystem responsible for dealing with the page file. Memory-mapped files are loaded into memory one entire page at a time. The page size is selected by the operating system for maximum performance. Since page file management is one of the most critical elements of a virtual memory system, loading page-sized sections of a file into physical memory is typically a very highly optimized system function.
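The operating-system-selected page granularity is exposed to applications; for example, Python's standard-library mmap module reports it directly, and requires mapping offsets to be aligned to it:

```python
import mmap

# The page size the OS uses for mapping granularity (commonly 4096 on x86).
page_size = mmap.PAGESIZE

# Offsets passed when mapping part of a file must be multiples of this value.
granularity = mmap.ALLOCATIONGRANULARITY
```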
Another drawback of memory-mapped files relates to a given architecture's address space: a file larger than the addressable space can only have portions mapped at a time, complicating reading it. For example, a 32-bit architecture such as Intel's IA-32 can only directly address 4 GiB of memory, so a larger file cannot be mapped in its entirety. This drawback is avoided in the case of devices addressing memory when an IOMMU is present.
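Mapping portions of a file at a time is done by passing an offset to the mapping call, which must be aligned to the system's allocation granularity. A sketch in Python (the two-window file here is purely illustrative):

```python
import mmap
import os
import tempfile

# A file made of two equal "windows" of data.
window = mmap.ALLOCATIONGRANULARITY
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"A" * window + b"B" * window)

with open(path, "rb") as f:
    # Map only the second window; the offset must be granularity-aligned.
    mm = mmap.mmap(f.fileno(), window, access=mmap.ACCESS_READ, offset=window)
    second_window_start = mm[0]
    mm.close()
os.unlink(path)
```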
Another common use for memory-mapped files is to share memory between multiple processes. In modern protected mode operating systems, processes are generally not permitted to access memory space that is allocated for use by another process. (A program's attempt to do so causes invalid page faults or segmentation violations.) There are a number of techniques available to safely share memory, and memory-mapped file I/O is one of the most popular. Two or more applications can simultaneously map a single physical file into memory and access this memory. For example, the Microsoft Windows operating system provides a mechanism for applications to memory-map a shared segment of the system's page file itself and share data via this section.
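This shared-memory technique can be sketched with an anonymous shared mapping on a POSIX system, where a mapping created before fork() is visible to both parent and child. (This uses Python's standard-library mmap and os modules and assumes a Unix-like platform with fork() available):

```python
import mmap
import os

# Anonymous shared mapping: both processes will see the same physical pages.
shared = mmap.mmap(-1, mmap.PAGESIZE)

pid = os.fork()
if pid == 0:
    shared[0:5] = b"hello"      # the child writes into the shared page
    os._exit(0)

os.waitpid(pid, 0)
message = shared[0:5]           # the parent observes the child's write
shared.close()
```

Named file mappings work the same way between unrelated processes: each process maps the same file and reads and writes through its own mapping.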
Most modern operating systems or runtime environments support some form of memory-mapped file access. The function mmap(), which creates a mapping of a file given a file descriptor, starting location in the file, and a length, is part of the POSIX specification, so the wide variety of POSIX-compliant systems, such as UNIX, Linux, Mac OS X, or OpenVMS, support a common mechanism for memory-mapping files. The Microsoft Windows operating systems also support a group of API functions for this purpose, such as CreateFileMapping().
The Java programming language provides classes and methods to access memory-mapped files, such as FileChannel.map() in the java.nio package, which returns a MappedByteBuffer.
Ruby has a gem (library) called Mmap, which implements memory-mapped file objects.
Early versions of the Microsoft .NET runtime environment did not natively include managed access to memory-mapped files, and third-party libraries filled this gap; .NET Framework 4.0 added native support in the System.IO.MemoryMappedFiles namespace.