The term CSC emerged in the 1990s to replace three older terms: workgroup computing (which emphasizes the technology over the work being supported, and seems to restrict inquiry to small organizational units); groupware (which became a commercial buzzword and was used to describe many badly designed systems); and computer-supported cooperative work (the name of a conference, which seems to address only research into experimental systems and into the nature of workplaces and organizations doing "work", as opposed to play or war).
Base technologies like netnews, email, chat and wiki could be described as either "social" or "collaborative". Those who say "social" tend to focus on so-called "virtual community", while those who say "collaborative" are more concerned with content management and the actual output. While software may be designed to achieve either closer social ties or specific deliverables, it is hard to support collaboration without also enabling relationships to form, and hard to support social interaction without some kind of shared, co-authored work.
True multi-player computer games can be considered a simple form of collaboration, but only a few theorists include this as part of CSC.
However, the relatively new areas of evolutionary computing, massively parallel algorithms, and even "artificial life" explore the solution of problems through the evolving interaction of large numbers of small actors, agents, or decision-makers operating in a largely unconstrained fashion. The "side-effect" of the interaction may be a solution of interest, such as a new sorting algorithm; or there may be a permanent residue of the interaction, such as the setting of weights in a neural network that has been "tuned" or "trained" to repeatedly solve a specific problem, such as deciding whether to grant credit to a person, or distinguishing a diseased plant from a healthy one. Connectionism is the study of systems in which the learning is stored in the linkages, or connections, rather than in what is normally thought of as content.
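The idea that learning resides in the connections can be sketched with a minimal perceptron, one of the simplest connectionist models. The plant-health features, labels, and learning rate below are invented for illustration; after training, the "knowledge" exists only as weight values, not as stored content:

```python
# Minimal connectionist sketch: a perceptron whose learned "knowledge"
# lives entirely in its weights, not in any stored content.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Adjust weights until the unit separates two classes."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Learning is the residual "side-effect": weights shift on each error.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy, linearly separable data: [leaf_spot_density, discoloration]
samples = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7]]
labels = [0, 0, 1, 1]  # 0 = healthy plant, 1 = diseased plant
w, b = train_perceptron(samples, labels)
```

After training, classifying a new plant consults nothing but the tuned weights - the residue of the interaction is the computation.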
This larger definition of "computing", in which not just the data, the metadata, or the context of the data, but the computer itself is being "processed", gives the term "social computing" a whole different meaning. The repeated use of the blogosphere to process daily news has the "side-effect" of building up linkage maps, trusted sources, RSS aggregator feeds, and so on, so that the overall system is, in some sense, learning how to do something better, more rapidly, and more easily.
In control systems theory, it has been shown that closed-loop feedback systems are vastly more robust than open-loop systems. The blogosphere has been criticized for its "echoes", repeatedly cycling certain ideas, but the upside is that it provides simultaneous closed-loop feedback paths across a wide spectrum of distances and time constants. These issues of computability and algorithm order are classic computer-science issues and, in that sense, social computing is again a legitimate "computing" subject, even if it involves flexible collaboration as part of the "hardware". An analogy might be a "computer" built entirely of field-programmable gate arrays, where not only the data and the program (as in LISP, where programs and data are indistinguishable) but also the hardware, logic, and rules of operation can be modified in real time during a "computation".
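The robustness claim can be illustrated with a toy simulation; all gains and dynamics here are invented for illustration. An open-loop controller computes its command from an assumed model of the plant and misses the target when the plant's true gain drifts, while a closed-loop integral controller, which only watches the measured error, still converges:

```python
# Toy comparison of open-loop vs closed-loop control of a first-order plant.
# Gains, step sizes, and the drifted plant gain are illustrative assumptions.

def simulate(gain_actual, target=1.0, steps=400, feedback=True, gain_assumed=2.0):
    """Drive a first-order plant toward target; return the final state."""
    x, u_int = 0.0, 0.0
    for _ in range(steps):
        if feedback:
            # Closed loop: integrate the measured error; no plant model needed.
            u_int += 0.05 * (target - x)
            u = u_int
        else:
            # Open loop: command precomputed from the (stale) assumed gain.
            u = target / gain_assumed
        x += 0.1 * (gain_actual * u - x)  # first-order plant dynamics
    return x

# The plant's gain has drifted from the assumed 2.0 down to 1.5.
open_loop = simulate(1.5, feedback=False)   # settles short of the target
closed_loop = simulate(1.5, feedback=True)  # still reaches the target
```

With the drifted gain, the open-loop run settles near 0.75 instead of 1.0, while the feedback run converges to the target regardless of the modeling error - the essence of closed-loop robustness.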
If the collaboration spans a large distance and many time zones, the system will probably encounter significant differences in context between the participants, resulting in a whole new set of design and support problems and behaviors - especially misunderstanding of what each collaborator takes as implicit or obvious, and therefore never explicitly states. Such differences may be cultural, geographic, hierarchical, etc. For example, when Hurricane Katrina hit the US in the fall of 2005, there were substantial collaboration and communication difficulties between federal, state, and local officials. A significant portion of those difficulties were classic problems that frequently result from attempts to collaborate over a distance, and from people at one level of an organization trying to collaborate with people at an entirely different level, with each group attaching a different meaning to "the problem" being addressed and to which time scale is relevant. Computer-supported collaboration research includes academic work on how to minimize, or at least recognize and take into account, that class of problem. Similar problems may occur when a conversation takes place during the work day at one site while it is already a weekend or holiday at another, and the parties have different levels of stress and focus. These problems are analogous to "flame wars", the abrupt hostility observed when e-mail is used for a conversation and the parties no longer get direct feedback from watching each other's body language.
A final difference between computer-supported collaboration and classic "computing" is that a computer typically remains a closed system, focusing only on what is already "inside the box", and dealing with it only in an abstract fashion. Collaboration that occurs over significant spans of space and time shares properties with "active vision", where the possible actions go beyond analyzing an incoming TV image of an object of interest to include walking over to the object, picking it up, turning it over, smashing it open, etc. The collaborating "units" are typically human beings, who remain partly involved in the collaboration while both sensing and actively changing the world around them. A collaborative discussion of baking cookies could include a period in which participants left the room to actually bake such cookies and try them out. A collaborative discussion of politics could include actually voting and changing the political landscape. This inclusion of both active sensors and active "effectors" is again a difference from "computing" that, at a minimum, makes this field at least as complex as "robotics".
Also, as stated earlier, a computer is generally unaltered by the program it is running or the data it is processing; but a collaboration of people and groups is typically substantially and permanently altered by the nature of the problems it works on, the enjoyment or frustration of the working process, and the outcome of the work. This lasting residual "side-effect" of one "cycle" of the collaboration may then alter the way the collaboration tool is used in the next "cycle", as people learn the new method, so short-term, single-session or single-problem studies of collaboration tools may be very misleading about longer-term outcomes. Imagine that your desktop computer, after a while, decided it didn't really like doing word-processing any more and preferred to work on addition, and that every time you tried to write e-mail the computer stopped mid-course to become obsessed with word counts. Analogous behaviors in CSC, as in game theory, make studying almost any system or design problem frustratingly complicated: either the problems seem to be "toy" problems that are tractable but unrealistically simple, or they become so complex that analysis is impossible.
A "computer" doesn't generally care whether the answer to a problem is "5" or "25", "yes" or "no", but the humans involved in a group decision-making process may care very much about the outcome, and the various answers can have winners and losers with potentially very high stakes. At a minimum, this introduces substantial bias into the analysis of any data, as people tend to selectively see the facts that support the conclusions they personally prefer. In a collaboration within a single hospital between multiple clinicians, mediated by an "electronic medical record", there may actually be a substantial amount of dispute and negotiation going on between, say, a group focused on treating diabetes and another focused on treating congestive heart failure. The "collaboration" may actually be much more of a competition to frame and define the problem in terms that produce favorable outcomes. Again, this makes CSC design work far more complicated than simply getting a group of sensors or computers to share data and work together correctly. In fact, in some cases participants may have a strong vested interest in the status quo and prefer that the "problem" not be solved. A successful CSC system, in their minds, would be one that prevented the solution of the problem supposedly being addressed collaboratively, perhaps while giving a misleading appearance of cooperative effort. This factor complicates research into whether a CSC system is well designed or not.
For example, in some countries national political elections could be viewed, abstractly, as heavily technology-mediated (and "computer supported") processes, including information distribution, discussion, debate, and an outcome-resolution process - yet there may be no unanimous opinion as to whether this process "works" or "is broken". It is difficult to improve or redesign a system if people cannot even agree on whether it works now. The implication is that CSC systems are inextricably embedded in social contexts and have to simultaneously address a specific problem, the issue of whether collaborating in this way increases the ability to address future problems, and the issue of what really defines which problems need to be addressed in the first place and their relative priority.
And not only are there differences and potential competition between parties across organizational, cultural, and geographic dimensions - opinions on all those subjects may differ, and generally will differ, even within a single organization at different hierarchical levels. What works for workers may not work for management. What works for middle management may not work for upper management. What works for management may not work for the stockholders. What works for the company may not work for the country. A CSC system has to handle not just "content processing" but also "context processing", sorting out the different nested and overlapping value-laden reference frames as well as the data and "the explicitly identified problem" within those frames. Part of this is a very abstract technical problem, faced by researchers in distributed artificial intelligence, of getting, say, 20 different surveillance robots to talk to each other and compare notes - which is in itself a hard problem. Add to that the new factor that, say, each robot has a hidden agenda and is not being totally honest about what it shares.
If the preceding discussion gives the impression that CSC problems are extraordinarily hard to solve, that is correct. In fact, they have been described as "wicked" problems, not only because they are immensely complicated when addressed, but because they look so simple from the outside and are generally under-appreciated. For example, building a disaster-response communication system is vastly more complex than just getting different agencies a unified radio frequency to communicate on, because the words, meanings, contexts, values, and agendas all also have to be communicated and resolved - across space, across time, and across a 14-level hierarchy from the national leader to the front-line responder.
The view of a scene from an infection-control specialist's viewpoint and from a military or police viewpoint may suggest exactly opposite actions regarding "rounding up people and concentrating them at the stadium." The ability of a CSC system to facilitate wise decisions and action in that sort of situation might require the type of action described by the Harvard Negotiation Project in the book "Getting to Yes", where "positions" have to be abstracted to "interests", perhaps repeatedly, until a level is reached at which agreement and a common ground can be found between groups that appear, on the surface, to be hopelessly deadlocked. It is an open research question as to what features of a CSC system could simply allow that type of discussion to occur, let alone facilitate it. Very high bandwidth and multiple "back-channel" communication pathways have often proven to be helpful. Apparently very simple things, such as sufficient magnification and resolution on video screens to be able to actually see another person's eyes clearly, can have a dramatic effect on the ability of a system to support trust-building and collaboration at a distance.
Reflecting desired organizational protocols, business processes, and governance norms directly, so that regulated communication (the collaboration proper) can be distinguished from free-form interaction, is important to collaboration research, if only to know where the study of work ends and the study of people begins. The subfield of computer-mediated communication (CMC) deals with human relationships.
Problems of method, communication and comprehension in collaborations between ethnographer and system developer are also of special concern.
CSCW 2004 tutorials listed all of the above as desirable skills.
Plenary addresses on Open Source Society and Hacking Law suggest a bold, civilization-building ambition for this research.
Less ambitiously, specific CSC fields are often studied under their own names with no reference to the more general field of study, focusing instead on the technology with only minimal attention to the collaboration implied, e.g. video games and videoconferencing. Since some specialized devices for games or conferencing do not include all of the usual capabilities of a true "computer", studying these separately may be justified. There is also separate study of e-learning, e-government, e-democracy and telemedicine. The subfield of telework also often stands alone.
However, at that time, collaboration capabilities were limited. Few computers had even local area networks, and processors were slow and expensive, so the idea of using them simply to accelerate and "augment" human communication seemed eccentric in many situations. Computers processed numbers, not text, and collaboration was in general devoted only to better and more accurate handling of numbers.
Video collaboration is not usually studied. Online videoconferencing and webcams have been studied in small-scale use for decades, but since people simply do not have built-in facilities to create video together directly, they are properly a communication concern, not a collaboration concern.
The conference series began when, according to Jonathan Grudin: "Paul Cashman and Irene Greif organized a workshop of people from various disciplines who shared an interest in how people work, with an eye to understanding how technology could support them. They coined the term computer-supported cooperative work to describe this common interest... thousands of researchers and developers have been drawn to it."
According to Grudin, "an earlier approach to group support, Office Automation, had run out of steam. The problems were not primarily technical, although technical challenges certainly existed. The key problem was understanding system requirements. In the mid-1960s, tasks such as filling seats on airplane flights or printing payroll checks had been translated into requirements that resulted (with some trial and error) in successful mainframe systems. In the mid-1970s, minicomputers promised to support groups and organizations in more sophisticated, interactive ways: Office Automation was born. Single-user applications such as word processors and spreadsheets succeeded; office automation tried to integrate and extend these successes to support groups and departments. But what were the precise requirements for such systems?"
Early researchers such as Bill Buxton had focused on non-voice gestures (like humming or whistling) as a way to communicate with the machine without interfering too directly with speech directed at a person. Some researchers believed voice-command interfaces were bad for exactly this reason: they encouraged speaking as if to a "slave". A notable innovation was the emergence of the video prototype - Apple Computer used one to test the likely user acceptance of a voice interface. The results were very mixed, and Apple decided not to pursue such an interface at the time (the late 1980s).
Since the 1960s, researchers had been insisting that links should have types - that, for instance, a link that "contradicts" another should be easy to differentiate from one that "supports" another: all of the early hypertext systems had schemes for doing exactly this.
HTML supports simple link types via the REL and REV attributes. Standards for using these on the WWW were proposed, most notably in 1994, by people very familiar with earlier work in SGML. However, no such scheme has ever been adopted by a large number of web users, and the "semantic web" remains unrealized. Heroic attempts such as crit.org have sometimes collapsed entirely.
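A small sketch shows what typed links look like in this scheme and how easily a machine can read them. The link-type values "supports" and "contradicts" below are invented for illustration (they are not registered HTML link types); only the rel attribute mechanism itself is standard:

```python
# Sketch: extracting typed links (rel attributes) from an HTML fragment.
# The documents and link-type vocabulary here are hypothetical.
from html.parser import HTMLParser

DOC = """
<a href="/evidence" rel="supports">supporting evidence</a>
<a href="/rebuttal" rel="contradicts">a contradicting view</a>
<a href="/home" rel="index">site index</a>
"""

class RelCollector(HTMLParser):
    """Collect (link-type, target) pairs from <a rel=...> elements."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            a = dict(attrs)
            if "rel" in a and "href" in a:
                self.links.append((a["rel"], a["href"]))

parser = RelCollector()
parser.feed(DOC)
```

The machinery is trivial; the unsolved problem the text describes is social, not technical - getting large numbers of authors to agree on a type vocabulary and actually use it.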
Many CSC researchers ask why, and why interest in applying a semantic web standard should continue despite so many failed attempts.
Online identity and privacy concerns, especially identity theft, have grown to dominate the CSCW agenda in more recent years. The separate Computers, Freedom and Privacy conferences deal with the larger social questions, but basic concerns that apply to systems and work-process design are still discussed as part of CSC research.
Team consensus decision-making in software engineering, and the role of revision control, reverts, reputation, and other functions, has always been a major focus of CSC: there is no software without someone writing it, and presumably those who write it must understand something about collaboration in their own team. This design and code, however, is only one form of collaborative content:
By the late 1990s, with the rise of wikis (a simple repository and data dictionary that was easy for the public to use), the way consensus applied to joint editing, meeting agendas, and so on had become a major concern. Different wikis adopted different social and pseudopolitical structures to combat the problems caused by conflicting points of view and differing opinions on content.
Tools and techniques for designing and running effective "collaboratories" among researchers with similar interests and a common language have been developed over the last 20 years and are well documented in the literature. Much of this hard-won knowledge regards process as much as it does technology.
Study of content management, enterprise taxonomy, and the other core instructional capital of the learning organization has become increasingly important due to ISO standards and the use of continuous-improvement methods. Natural language and application commands tend to converge over time, becoming reflexive user interfaces - a concern that overlaps with OOPSLA.
The role of social network analysis and outsourcing services like e-lance, especially when combined in services like LinkedIn, is of particular concern in human capital management - again, especially in the software industry, where it is becoming more and more normal to run 24x7, globally distributed shops.
Despite a widely held belief that more automation means more worker productivity, almost all studies of actual attempts to add more "computer power" to the white-collar workplace demonstrated that productivity did not improve, and in many cases went down. Yet, the "investment" in computers and software continued. This productivity paradox remains unresolved. See separate article.