In the strictest sense, it is any information system capable of integrating, storing, editing, analyzing, sharing, and displaying geographically referenced information. In a more generic sense, GIS applications are tools that allow users to create interactive queries (user-created searches), analyze spatial information, edit data and maps, and present the results of all these operations. Geographic information science is the science underlying the geographic concepts, applications, and systems, taught in degree and GIS certificate programs at many universities.
Geographic information system technology can be used for scientific investigations, resource management, asset management, archaeology, environmental impact assessment, urban planning, cartography, criminology, geographic history, marketing, and logistics, to name a few. For example, GIS might allow emergency planners to easily calculate emergency response times in the event of a natural disaster, be used to find wetlands that need protection from pollution, or be used by a company to site a new business location to take advantage of a previously under-served market.
The early 20th century saw the development of "photo lithography" where maps were separated into layers. Computer hardware development spurred by nuclear weapon research would lead to general purpose computer "mapping" applications by the early 1960s.
The year 1962 saw the development of the world's first true operational GIS in Ottawa, Ontario, Canada by the federal Department of Forestry and Rural Development. Developed by Dr. Roger Tomlinson, it was called the "Canada Geographic Information System" (CGIS) and was used to store, analyze, and manipulate data collected for the Canada Land Inventory (CLI)—an initiative to determine the land capability for rural Canada by mapping information about soils, agriculture, recreation, wildlife, waterfowl, forestry, and land use at a scale of 1:50,000. A rating classification factor was also added to permit analysis.
CGIS was the world's first "system" and was an improvement over "mapping" applications, as it provided capabilities for overlay, measurement, and digitizing/scanning. It supported a national coordinate system that spanned the continent, coded lines as "arcs" having a true embedded topology, and stored the attribute and locational information in separate files. As a result, Tomlinson has become known as the "father of GIS," particularly for his use of overlays in promoting the spatial analysis of convergent geographic data. CGIS lasted into the 1990s and built the largest digital land resource database in Canada. It was developed as a mainframe-based system in support of federal and provincial resource planning and management. Its strength was continent-wide analysis of complex datasets. CGIS was never available in a commercial form. In 1964, Howard T. Fisher formed the Laboratory for Computer Graphics and Spatial Analysis at the Harvard Graduate School of Design (LCGSA 1965-1991), where a number of important theoretical concepts in spatial data handling were developed, and which by the 1970s had distributed seminal software code and systems, such as SYMAP, GRID, and ODYSSEY, to universities, research centers, and corporations worldwide; these served as literal and inspirational sources for subsequent commercial development.
By the early 1980s, M&S Computing (later Intergraph), Environmental Systems Research Institute (ESRI) and CARIS (Computer Aided Resource Information System) emerged as commercial vendors of GIS software, successfully incorporating many of the CGIS features, combining the first generation approach to separation of spatial and attribute information with a second generation approach to organizing attribute data into database structures. In parallel, the development of two public domain systems began in the late 1970s and early 1980s. MOSS, the Map Overlay and Statistical System, started in 1977 in Fort Collins, Colorado under the auspices of the Western Energy and Land Use Team (WELUT) and the US Fish and Wildlife Service. GRASS GIS was begun in 1982 by the U.S. Army Construction Engineering Research Laboratory (USA-CERL) in Champaign, Illinois, a branch of the U.S. Army Corps of Engineers, to meet the need of the United States military for software for land management and environmental planning. Industry growth in the later 1980s and 1990s was spurred on by the growing use of GIS on Unix workstations and the personal computer. By the end of the 20th century, the rapid growth in various systems had been consolidated and standardized on relatively few platforms, and users were beginning to explore viewing GIS data over the Internet, requiring data format and transfer standards. More recently, there are a growing number of free, open source GIS packages which run on a range of operating systems and can be customized to perform specific tasks.
A GIS can also convert existing digital information, which may not yet be in map form, into forms it can recognize and use. For example, digital satellite images generated through remote sensing can be analyzed to produce a map-like layer of digital information about vegetative cover. Another fairly developed resource for naming GIS objects is the Getty Thesaurus of Geographic Names (GTGN), a structured vocabulary containing around 1,000,000 names and other information about places.
Likewise, census or hydrologic tabular data can be converted to map-like form, serving as layers of thematic information in a GIS.
The raster data type consists of rows and columns of cells, with each cell storing a single value. Raster data can be images (raster images) with each pixel (or cell) containing a color value. Additional values recorded for each cell may be a discrete value, such as land use, a continuous value, such as temperature, or a null value if no data is available. While a raster cell stores a single value, it can be extended by using raster bands to represent RGB (red, green, blue) colors, colormaps (a mapping between a thematic code and an RGB value), or an extended attribute table with one row for each unique cell value. The resolution of a raster data set is its cell width in ground units.
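As a minimal sketch, a raster layer can be modeled as rows and columns of cells, with a nodata marker standing in for the null values described above (the temperature values and grid are invented for illustration):

```python
# A raster layer sketched as rows and columns of cells; NODATA marks
# cells where no value is available (values here are invented).
NODATA = None

temperature = [              # each inner list is one row of cells
    [12.1, 12.4, NODATA],
    [11.8, 12.0, 12.6],
    [NODATA, 11.5, 12.2],
]

def cell_mean(raster):
    """Mean of all cells that hold data, skipping nodata cells."""
    values = [v for row in raster for v in row if v is not NODATA]
    return sum(values) / len(values)
```

Per-cell iteration of this kind is the pattern that raster GIS operations generalize, one value (or band of values) at a time.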
Raster data is stored in various formats, from standard file-based formats such as TIFF and JPEG to binary large objects (BLOBs) stored directly in a relational database management system (RDBMS), similar to other vector-based feature classes. Database storage, when properly indexed, typically allows quicker retrieval of the raster data but can require storage of millions of significantly sized records.
In a GIS, geographical features are often expressed as vectors, by considering those features as geometrical shapes. Different geographical features are expressed by different types of geometry:

Points: Zero-dimensional points are used for geographical features that can best be expressed by a single point reference; in other words, a simple location. Examples include the locations of wells, peak elevations, features of interest, or trailheads. Points convey the least information of these types. Points can also represent areas when displayed at a small scale; for example, cities on a map of the world would be represented by points rather than polygons. No measurements are possible with point features.

Lines or polylines: One-dimensional lines or polylines are used for linear features such as rivers, roads, railroads, trails, and topographic lines. As with point features, linear features displayed at a small scale are represented as lines rather than as polygons. Line features can be measured for distance.

Polygons: Two-dimensional polygons are used for geographical features that cover a particular area of the earth's surface, such as lakes, park boundaries, buildings, city boundaries, or land uses. Polygons convey the most information of these types, and polygon features can be measured for perimeter and area.
Each of these geometries is linked to a row in a database that describes their attributes. For example, a database that describes lakes may contain a lake's depth, water quality, and pollution level. This information can be used to make a map describing a particular attribute of the dataset; for example, lakes could be coloured depending on their level of pollution. Different geometries can also be compared. For example, the GIS could be used to identify all wells (point geometry) that lie within a given distance of a lake (polygon geometry) that has a high level of pollution.
Vector features can be made to respect spatial integrity through the application of topology rules such as 'polygons must not overlap'. Vector data can also be used to represent continuously varying phenomena. Contour lines and triangulated irregular networks (TIN) are used to represent elevation or other continuously changing values. TINs record values at point locations, which are connected by lines to form an irregular mesh of triangles. The faces of the triangles represent the terrain surface.
The file size for vector data is usually much smaller for storage and sharing than raster data. Image or raster data can be 10 to 100 times larger than vector data depending on the resolution. Another advantage of vector data is that it is easy to update and maintain. For example, when a new highway is added, a raster image must be completely reproduced, but the vector data, "roads," can be easily updated by adding the missing road segment. In addition, vector data allows much more analysis capability, especially for "networks" such as roads, power, rail, and telecommunications. For example, vector data attributed with the characteristics of roads, ports, and airfields allows the analyst to query for the best route or method of transportation. With vector data, the analyst can query for the largest port with an airfield within 60 miles and a connecting road that is at least a two-lane highway. Raster data will not have all the characteristics of the features it displays.
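The port query above amounts to attribute filtering over vector features. A sketch, with attribute names and values invented for illustration:

```python
# Hypothetical attributed port features; field names and values are
# invented to illustrate the kind of query described above.
ports = [
    {"name": "Port A", "tonnage": 50_000, "airfield_dist_mi": 40, "road_lanes": 2},
    {"name": "Port B", "tonnage": 90_000, "airfield_dist_mi": 75, "road_lanes": 4},
    {"name": "Port C", "tonnage": 70_000, "airfield_dist_mi": 20, "road_lanes": 2},
]

def best_port(ports, max_airfield_mi=60, min_lanes=2):
    """Largest port (by tonnage) with a nearby airfield and an adequate road."""
    candidates = [p for p in ports
                  if p["airfield_dist_mi"] <= max_airfield_mi
                  and p["road_lanes"] >= min_lanes]
    return max(candidates, key=lambda p: p["tonnage"])["name"] if candidates else None
```

In practice the airfield distance and road class would themselves come from spatial operations (buffering, network tracing) rather than being stored as precomputed fields.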
There is also software being developed to support spatial and non-spatial decision-making, in which the solutions to spatial problems are integrated with solutions to non-spatial problems. The hope is that these Flexible Spatial Decision-Making Support Systems (FSDSS) will let non-experts combine GIS and spatial criteria with other, non-spatial criteria to view solutions to multi-criteria problems that support decision making.
Existing data printed on paper or PET film maps can be digitized or scanned to produce digital data. A digitizer produces vector data as an operator traces points, lines, and polygon boundaries from a map. Scanning a map results in raster data that could be further processed to produce vector data.
Survey data can be directly entered into a GIS from digital data collection systems on survey instruments using a technique called Coordinate Geometry (COGO). Positions from a Global Positioning System (GPS), another survey tool, can also be directly entered into a GIS.
Remotely sensed data also play an important role in data collection; they are acquired by sensors attached to a platform. Sensors include cameras, digital scanners, and LIDAR, while platforms usually consist of aircraft and satellites.
The majority of digital data currently comes from photo interpretation of aerial photographs. Soft copy workstations are used to digitize features directly from stereo pairs of digital photographs. These systems allow data to be captured in two and three dimensions, with elevations measured directly from a stereo pair using principles of photogrammetry. Currently, analog aerial photos are scanned before being entered into a soft copy system, but as high quality digital cameras become cheaper this step will be skipped.
Satellite remote sensing provides another important source of spatial data. Here satellites use different sensor packages to passively measure the reflectance from parts of the electromagnetic spectrum or radio waves that were sent out from an active sensor such as radar. Remote sensing collects raster data that can be further processed using different bands to identify objects and classes of interest, such as land cover.
When data is captured, the user should consider whether it should be captured with relative or absolute accuracy, since this influences not only how the information will be interpreted but also the cost of data capture.
In addition to collecting and entering spatial data, attribute data is also entered into a GIS. For vector data, this includes additional information about the objects represented in the system.
After entering data into a GIS, the data usually requires editing to remove errors, or further processing. Vector data must be made "topologically correct" before it can be used for some advanced analysis. For example, in a road network, lines must connect with nodes at an intersection. Errors such as undershoots and overshoots must also be removed. For scanned maps, blemishes on the source map may need to be removed from the resulting raster. For example, a fleck of dirt might connect two lines that should not be connected.
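Removing an undershoot can be sketched as snapping line endpoints to existing nodes within a tolerance. Production topology tools do considerably more (overshoot trimming, node ordering, rule checking), and the coordinates here are invented:

```python
import math

def snap_point(pt, nodes, tolerance):
    """Return an existing node within `tolerance` of pt, else register pt as a node."""
    for node in nodes:
        if math.hypot(pt[0] - node[0], pt[1] - node[1]) <= tolerance:
            return node
    nodes.append(pt)
    return pt

def clean_lines(lines, tolerance=0.1):
    """Snap the endpoints of each line so near-misses share an exact node."""
    nodes = []
    cleaned = []
    for line in lines:
        start = snap_point(line[0], nodes, tolerance)
        end = snap_point(line[-1], nodes, tolerance)
        cleaned.append([start] + line[1:-1] + [end])
    return cleaned

roads = [
    [(0.0, 0.0), (5.0, 0.0)],     # first road
    [(5.05, 0.02), (5.0, 4.0)],   # slightly undershoots the intersection
]
cleaned = clean_lines(roads)
# after cleaning, both roads meet at exactly (5.0, 0.0)
```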
More advanced data processing can occur with image processing, a technique developed in the late 1960s by NASA and the private sector to provide contrast enhancement, false colour rendering and a variety of other techniques including use of two dimensional Fourier transforms.
Since digital data is collected and stored in various ways, two data sources may not be entirely compatible. A GIS must therefore be able to convert geographic data from one structure to another.
The earth can be represented by various models, each of which may provide a different set of coordinates (e.g., latitude, longitude, elevation) for any given point on the earth's surface. The simplest model is to assume the earth is a perfect sphere. As more measurements of the earth have accumulated, the models of the earth have become more sophisticated and more accurate. In fact, there are models that apply to different areas of the earth to provide increased accuracy (e.g., North American Datum, 1927 - NAD27 - works well in North America, but not in Europe). See datum (geodesy) for more information.
Projection is a fundamental component of map making. A projection is a mathematical means of transferring information from a model of the Earth, which represents a three-dimensional curved surface, to a two-dimensional medium—paper or a computer screen. Different projections are used for different types of maps because each projection particularly suits certain uses. For example, a projection that accurately represents the shapes of the continents will distort their relative sizes. See Map projection for more information.
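As an illustration, the spherical Mercator projection can be written in a few lines. This is a sketch of the idea only; real GIS projections also account for datums and ellipsoidal earth models:

```python
import math

def mercator(lat_deg, lon_deg, radius=6_371_000.0):
    """Project latitude/longitude (degrees) onto a plane with the
    spherical Mercator projection. x grows with longitude; y stretches
    toward the poles, which is exactly the size distortion noted above."""
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = radius * lam
    y = radius * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y
```

Mercator preserves local shapes (it is conformal) but inflates areas at high latitudes, illustrating why no single projection suits every map.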
Since much of the information in a GIS comes from existing maps, a GIS uses the processing power of the computer to transform digital information, gathered from sources with different projections and/or different coordinate systems, to a common projection and coordinate system. For images, this process is called rectification.
A surface modeled from scattered rainfall measurements can be thought of as a rainfall contour map. Many sophisticated methods can estimate the characteristics of surfaces from a limited number of point measurements, and a two-dimensional contour map created in this way may be overlaid and analyzed with any other map in a GIS covering the same area.
Additionally, from a series of three-dimensional points, or digital elevation model, isopleth lines representing elevation contours can be generated, along with slope analysis, shaded relief, and other elevation products. Watersheds can be easily defined for any given reach, by computing all of the areas contiguous and uphill from any given point of interest. Similarly, an expected thalweg of where surface water would want to travel in intermittent and permanent streams can be computed from elevation data in the GIS.
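As a sketch of one such elevation product, the slope at an interior DEM cell can be estimated with central differences over its neighbors (the grid values and cell size here are invented):

```python
import math

def slope_deg(dem, row, col, cellsize):
    """Slope in degrees at an interior cell of a DEM grid, using central
    differences; a simplified version of what GIS packages compute for
    slope maps."""
    dzdx = (dem[row][col + 1] - dem[row][col - 1]) / (2 * cellsize)
    dzdy = (dem[row + 1][col] - dem[row - 1][col]) / (2 * cellsize)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

dem = [                       # elevations in meters (invented)
    [10.0, 10.0, 10.0],
    [10.0, 12.0, 14.0],
    [10.0, 14.0, 18.0],
]
```

Shaded relief, aspect, and watershed delineation build on the same neighborhood arithmetic over the elevation grid.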
A GIS can recognize and analyze the spatial relationships that exist within digitally stored spatial data, answering questions such as: in past years, were there any gas stations or factories operating next to the swamp? Any within two miles (3 km) and uphill from it? These topological relationships allow complex spatial modelling and analysis to be performed. Topological relationships between geometric entities traditionally include adjacency (what adjoins what), containment (what encloses what), and proximity (how close something is to something else).
The term "cartographic modeling" was (probably) coined by Dana Tomlin in his PhD dissertation and later in his book which has the term in the title. Cartographic modeling refers to a process where several thematic layers of the same area are produced, processed, and analyzed. Tomlin used raster layers, but the overlay method (see below) can be used more generally. Operations on map layers can be combined into algorithms, and eventually into simulation or optimization models.
Data extraction is a GIS process similar to vector overlay, though it can be used in either vector or raster data analysis. Rather than combining the properties and features of both datasets, data extraction involves using a "clip" or "mask" to extract the features of one data set that fall within the spatial extent of another dataset.
In raster data analysis, the overlay of datasets is accomplished through a process known as "local operation on multiple rasters" or "map algebra," through a function that combines the values of each raster's matrix. This function may weigh some inputs more than others through use of an "index model" that reflects the influence of various factors upon a geographic phenomenon.
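A minimal version of such a local operation, assuming two already co-registered and reclassified rasters and invented index-model weights, might look like:

```python
def weighted_overlay(rasters, weights):
    """Cell-by-cell weighted sum of co-registered rasters: a minimal
    'local operation on multiple rasters' with index-model weighting."""
    rows, cols = len(rasters[0]), len(rasters[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for raster, w in zip(rasters, weights):
        for r in range(rows):
            for c in range(cols):
                out[r][c] += w * raster[r][c]
    return out

suitability = weighted_overlay(
    [[[1, 2], [3, 4]],    # e.g. reclassified slope (values invented)
     [[4, 3], [2, 1]]],   # e.g. reclassified land cover (values invented)
    [0.75, 0.25],         # slope weighted more heavily in this index model
)
```

Map-algebra languages in real GIS packages express the same computation declaratively, e.g. as an expression over named layers.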
Geostatistics is a point-pattern analysis that produces field predictions from data points. It is a way of looking at the statistical properties of spatial data. It differs from general applications of statistics in that it employs graph theory and matrix algebra to reduce the number of parameters in the data; only the second-order properties of the GIS data are analyzed.
When phenomena are measured, the observation methods dictate the accuracy of any subsequent analysis. Due to the nature of the data (e.g. traffic patterns in an urban environment; weather patterns over the Pacific Ocean), a constant or dynamic degree of precision is always lost in the measurement. This loss of precision is determined from the scale and distribution of the data collection.
To determine the statistical relevance of the analysis, an average is determined so that points outside any immediate measurement can be included and their behavior predicted. Because of the limitations of the applied statistics and data collection methods, interpolation is required to predict the behavior of points and locations that are not directly measurable.
Interpolation is the process by which a surface is created, usually a raster dataset, through the input of data collected at a number of sample points. There are several forms of interpolation, each which treats the data differently, depending on the properties of the data set. In comparing interpolation methods, the first consideration should be whether or not the source data will change (exact or approximate). Next is whether the method is subjective, a human interpretation, or objective. Then there is the nature of transitions between points: are they abrupt or gradual. Finally, there is whether a method is global (it uses the entire data set to form the model), or local where an algorithm is repeated for a small section of terrain.
Interpolation is justified by the principle of spatial autocorrelation, which recognizes that data collected at any position will be highly similar to, or influenced by, data from locations in its immediate vicinity.
Digital elevation models (DEM), triangulated irregular networks (TIN), edge-finding algorithms, Thiessen polygons, Fourier analysis, weighted moving averages, inverse distance weighting, moving averages, kriging, splines, and trend surface analysis are all mathematical methods for producing interpolated surfaces.
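Of the methods above, inverse distance weighting is the simplest to sketch. The following omits the refinements (search radii, anisotropy, variable power selection) that real implementations add, and the rainfall samples are invented:

```python
def idw(samples, x, y, power=2.0):
    """Inverse Distance Weighted estimate at (x, y) from (sx, sy, value)
    samples: nearby points influence the estimate more than distant ones."""
    num = den = 0.0
    for sx, sy, value in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value            # exactly at a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

rain = [(0.0, 0.0, 10.0), (10.0, 0.0, 20.0)]   # invented gauge readings
```

Evaluating `idw` over every cell of a grid yields the kind of continuous rainfall surface discussed earlier.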
There are several potentially dangerous caveats that are often overlooked when using interpolation. See the full entry for Geocoding for more information.
Various algorithms are used to help with address matching when the spellings of addresses differ. Address information that a particular entity or organization has data on, such as the post office, may not entirely match the reference theme. There could be variations in street name spelling, community name, etc. Consequently, the user generally has the ability to make matching criteria more stringent, or to relax those parameters so that more addresses will be mapped. Care must be taken to review the results so as not to map addresses incorrectly due to overzealous matching parameters.
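The tightening and relaxing of matching criteria can be sketched with whole-string similarity from Python's standard library. Real address matchers parse and standardize address components rather than comparing raw strings, and the street names here are invented:

```python
import difflib

# A toy reference theme of street names (invented).
reference = ["MAIN ST", "MAINE AVE", "OAK RIDGE RD"]

def match_street(raw, threshold=0.8):
    """Return the best reference street for a raw spelling, or None if the
    similarity score falls below the (tunable) matching threshold."""
    best = difflib.get_close_matches(raw.upper(), reference, n=1, cutoff=threshold)
    return best[0] if best else None
```

Lowering `threshold` maps more addresses but risks false matches, which is exactly the trade-off the paragraph above warns about: `match_street("Main Street")` fails at the default threshold but matches once it is relaxed to 0.7.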
Cartographic work serves two major functions:
First, it produces graphics on the screen or on paper that convey the results of analysis to the people who make decisions about resources. Wall maps and other graphics can be generated, allowing the viewer to visualize and thereby understand the results of analyses or simulations of potential events. Web Map Servers facilitate distribution of generated maps through web browsers using various implementations of web-based application programming interfaces (AJAX, Java, Flash, etc.).
Second, other database information can be generated for further analysis or use. An example would be a list of all addresses within one mile (1.6 km) of a toxic spill.
Today, graphic display techniques such as shading based on altitude in a GIS can make relationships among map elements visible, heightening one's ability to extract and analyze information. For example, two types of data were combined in a GIS to produce a perspective view of a portion of San Mateo County, California.
A GIS was used to register and combine the two images to render the three-dimensional perspective view looking down the San Andreas Fault, using the Thematic Mapper image pixels, but shaded using the elevation of the landforms. The GIS display depends on the viewing point of the observer and time of day of the display, to properly render the shadows created by the sun's rays at that latitude, longitude, and time of day.
An archeochrome is a new way of displaying spatial data. It is a thematic display on a 3D map applied to a specific building or a part of a building. It is suited to the visual display of heat loss data.
Geographic information can be accessed, transferred, transformed, overlaid, processed and displayed using numerous software applications. Within industry, commercial offerings from companies such as Smallworld, ESRI, Intergraph, Mapinfo and Autodesk dominate, offering an entire suite of tools. Government and military departments often use custom software, open source products, such as GRASS, or more specialized products that meet a well defined need. Although free tools exist to view GIS datasets, public access to geographic information is dominated by online resources such as Google Earth and interactive web mapping.
GIS processing software is used for the task of preparing data for use within a GIS. This transforms the raw or legacy geographic data into a format usable by GIS products. For example, an aerial photograph may need to be stretched (orthorectified) using photogrammetry so that its pixels align with longitude and latitude gradations (or whatever grid is needed). This can be distinguished from the transformations done within GIS analysis software in that these changes are permanent, more complex, and more time-consuming. Thus, a specialized high-end type of software is generally used by a person skilled in remote sensing and/or the GIS-processing aspects of computer science. In addition, AutoCAD, normally used for drafts of engineering projects, can be configured for the editing of vector maps, and has some products that have migrated towards GIS use. It is especially useful as it has strong support for digitization. Raw geographic data can be edited in many standard database and spreadsheet applications, and in some cases a text editor may be used, as long as care is taken to properly format the data.
A geodatabase is a database with extensions for storing, indexing, querying, and manipulating geographic information and spatial data. While some geodatabases have functions built in to allow geoprocessing, the primary benefit of a geodatabase is in the "database type" capabilities that it gives to spatial data. These capabilities include easy access using standard database drivers such as ODBC, the ability to easily link or join data tables, and the indexing and grouping of spatial datasets independent of software platform.
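The "database type" capabilities can be suggested with an ordinary relational table of coordinates. This sketch uses plain SQL and an in-memory SQLite database; actual geodatabases layer spatial types and spatial indexes (such as R-trees) on top of this idea, and the well records are invented:

```python
import sqlite3

# A minimal geodatabase sketch: a plain SQL table with x/y columns and a
# bounding-box query (feature values invented).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE wells (id TEXT, x REAL, y REAL, depth REAL)")
db.executemany("INSERT INTO wells VALUES (?, ?, ?, ?)",
               [("W1", 2.0, 3.0, 55.0), ("W2", 10.0, 10.0, 80.0)])

# Retrieve every well inside a rectangular query window.
rows = db.execute(
    "SELECT id FROM wells WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
    (0.0, 5.0, 0.0, 5.0),
).fetchall()
```

Because the data sits in a standard RDBMS, it can be joined to attribute tables, indexed, and reached through ordinary database drivers, which is precisely the benefit described above.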
GIS analysis software takes GIS data and overlays or otherwise combines it so that the data can be visually analysed. It can output a detailed map, image or movie used to communicate an idea or concept with respect to a region of interest. This software is usually used by persons trained in cartography or geography, or by GIS professionals, as this type of application is complex and takes some time to master. The software transforms raster and vector data, sometimes of differing datums, grid systems, or reference systems, into one coherent image. It can also analyse changes over time within a region. This software is central to the professional analysis and presentation of GIS data. Examples include the ArcGIS family of ESRI GIS applications (which replaced ESRI's older Arc/INFO), Smallworld, Civil Designer, XMap and GRASS.
GIS statistical software uses standard database queries to retrieve and analyse data for decision making. For example, if one has geographic data that includes detailed demographic information, one can determine how many individuals of a certain age, income, and ethnicity live in a given street block. The data is sometimes referenced with postal codes or street locations rather than with geodetic data. It can be used by statisticians and computer scientists with the objective of characterizing an area to aid in decisions regarding marketing, social services, emergency planning, etc. Standard DBMS or specialized GIS statistical software can be used. These are often housed on servers so that they can be queried with web browsers. Examples are MySQL and ArcSDE.
GIS has seen many implementations on mobile devices. With the widespread adoption of GPS, GIS has been used to capture and integrate data in the field. In the past, gathering GIS in the field was done through marking geographic information onto a paper map and then translating that information into digital format back at the computer. Now, through the use of mobile devices, geographic data can be directly captured out in the field.
With the broad use of non-proprietary and open data formats such as the Shape File format for vector data and the Geotiff format for raster data, as well as the adoption of OGC standards for networked servers, development of open source software continues to evolve, especially for web and web service oriented applications. Well-known open source GIS software includes GRASS GIS, Quantum GIS, MapServer, uDig, OpenJUMP, gvSIG and many others (e.g., see OSGeo or MapTools).
Much open source GIS development has focused on the creation of libraries that provide functionality for third party applications. Such libraries include GDAL/OGR, and GeoTools. These libraries are used by open source and commercial software alike to provide basic functionality.
Many disciplines can benefit from GIS technology. An active GIS market has resulted in lower costs and continual improvements in the hardware and software components of GIS. These developments will, in turn, result in a much wider use of the technology throughout science, government, business, and industry, with applications including real estate, public health, crime mapping, national defense, sustainable development, natural resources, landscape architecture, archaeology, regional and community planning, transportation and logistics. GIS is also diverging into location-based services (LBS). LBS allows GPS enabled mobile devices to display their location in relation to fixed assets (nearest restaurant, gas station, fire hydrant), mobile assets (friends, children, police car) or to relay their position back to a central server for display or other processing. These services continue to develop with the increased integration of GPS functionality with increasingly powerful mobile electronics (cell phones, PDAs, laptops).
The Open Geospatial Consortium (OGC) is an international industry consortium of 334 companies, government agencies and universities participating in a consensus process to develop publicly available geoprocessing specifications. Open interfaces and protocols defined by OpenGIS Specifications support interoperable solutions that "geo-enable" the Web, wireless and location-based services, and mainstream IT, and empower technology developers to make complex spatial information and services accessible and useful with all kinds of applications. Open Geospatial Consortium (OGC) protocols include Web Map Service (WMS) and Web Feature Service (WFS).
GIS products are broken down by the OGC into two categories, based on how completely and accurately the software follows the OGC specifications.
Compliant Products are software products that comply with OGC's OpenGIS Specifications. When a product has been tested and certified as compliant through the OGC Testing Program, the product is automatically registered as "compliant" on the OGC site.
Implementing Products are software products that implement OpenGIS Specifications but have not yet passed a compliance test. Compliance tests are not available for all specifications. Developers can register their products as implementing draft or approved specifications, though OGC reserves the right to review and verify each entry.
Some web mapping services, like Google Maps and OpenLayers, expose an API that enables users to create custom applications. These toolkits commonly offer street maps, aerial/satellite imagery, geocoding, searches, and routing functionality.
Other applications for publishing geographic information on the web include MapInfo's MapXtreme, Intergraph's GeoMedia WebMap (TM), ESRI's ArcIMS, ArcGIS Server, AutoDesk's Mapguide, SeaTrails' AtlasAlive, and the open source MapServer.
In recent years web mapping services have begun to adopt features more common in GIS. Services such as Google Maps and Live Maps allow users to annotate maps and share the maps with others. Conversely, GIS vendors have also created web mapping systems such as ESRI's WebADF that adopt much of the usability and speed of consumer web mapping web sites.
Through a function known as visualization, a GIS can be used to produce images - not just maps, but drawings, animations, and other cartographic products. These images allow researchers to view their subjects in ways that have never been seen before. The images are often invaluable for conveying the technical concepts of GIS study subjects to non-scientists.
Prediction of the impacts of climate change inherently involves many uncertainties stemming from both data and models. GIS has been combined with uncertainty theory to model the coastal impacts of climate change, including inundation due to sea-level rise and storm erosion, for example along the east coast of Australia.
As an example, the changes in vegetation vigor through a growing season can be animated to determine when drought was most extensive in a particular region. The resulting graphic, known as a normalized vegetation index, represents a rough measure of plant health. Working with two variables over time would then allow researchers to detect regional differences in the lag between a decline in rainfall and its effect on vegetation.
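The index mentioned above, commonly computed as the normalized difference vegetation index (NDVI), is a per-pixel ratio of the near-infrared and red bands; the reflectance values below are invented:

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index for one pixel, computed from
    the red and near-infrared band reflectances; values near 1 suggest
    dense, healthy vegetation, values near 0 suggest bare ground."""
    if nir + red == 0:
        return 0.0   # guard against division by zero on empty pixels
    return (nir - red) / (nir + red)
```

Applying `ndvi` to every cell of co-registered red and near-infrared rasters, scene by scene through a growing season, yields the time series the paragraph describes.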
GIS technology and the availability of digital data on regional and global scales enable such analyses. The satellite sensor output used to generate a vegetation graphic is produced by the Advanced Very High Resolution Radiometer (AVHRR). This sensor system detects the amounts of energy reflected from the Earth's surface across various bands of the spectrum for surface areas of about 1 square kilometer. The satellite sensor produces images of a particular location on the Earth twice a day. AVHRR is only one of many sensor systems used for Earth surface analysis. More sensors will follow, generating ever greater amounts of data.
GIS and related technology will help greatly in the management and analysis of these large volumes of data, allowing for better understanding of terrestrial processes and better management of human activities to maintain world economic vitality and environmental quality.
In addition to the integration of time in environmental studies, GIS is also being explored for its ability to track and model the progress of humans throughout their daily routines. A concrete example of progress in this area is the recent release of time-specific population data by the US Census. In this data set, the populations of cities are shown for daytime and evening hours, highlighting the pattern of concentration and dispersion generated by North American commuting patterns. The manipulation and generation of data required to produce this data set would not have been possible without GIS.
Using models to project the data held by a GIS forward in time has enabled planners to test policy decisions. These systems are known as Spatial Decision Support Systems.
Ontologies are a key component of this semantic approach as they allow a formal, machine-readable specification of the concepts and relationships in a given domain. This in turn allows a GIS to focus on the meaning of data rather than its syntax or structure. For example, reasoning that a land cover type classified as Deciduous Needleleaf Trees in one dataset is a specialization of land cover type Forest in another more roughly-classified dataset can help a GIS automatically merge the two datasets under the more general land cover classification. Very deep and comprehensive ontologies have been developed in areas related to GIS applications, for example the Hydrology Ontology developed by the Ordnance Survey in the United Kingdom and the SWEET ontologies developed by NASA's Jet Propulsion Laboratory. Also, simpler ontologies and semantic metadata standards are being proposed by the W3C Geo Incubator Group to represent geospatial data on the web.
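The subsumption reasoning in the example can be sketched as a walk up a child-to-parent mapping; the class hierarchy below is invented for illustration, and real ontology reasoners handle far richer relationships than a single parent chain:

```python
# A toy land-cover ontology as a child -> parent mapping; the class names
# echo the example above, but the structure is invented.
parent = {
    "Deciduous Needleleaf Trees": "Forest",
    "Evergreen Broadleaf Trees": "Forest",
    "Forest": "Vegetated Land",
}

def is_a(cls, ancestor):
    """True if cls equals ancestor or specializes it via the parent chain."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = parent.get(cls)
    return False
```

A merge step could then relabel finely classified cells with the most specific class both datasets share, as in the Forest example above.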
Recent research results in this area can be seen in the International Conference on Geospatial Semantics and the Terra Cognita -- Directions to the Geospatial Semantic Web workshop at the International Semantic Web Conference.
With the popularization of GIS in decision making, scholars have begun to scrutinize the social implications of GIS. It has been argued that the production, distribution, utilization, and representation of geographic information are closely tied to their social context. For example, some scholars are concerned that GIS may turn into a tool of omni-surveillance for dictatorship. Other related topics include discussion of copyright, privacy, and censorship. A more optimistic social approach to GIS adoption is to use it as a tool for public participation.