There is scant formal literature on the issue of security through obscurity. Books on security engineering will cite Kerckhoffs' doctrine from 1883, if they cite anything at all; one example is a discussion of secrecy and openness in Nuclear Command and Control.
In the field of legal academia, Peter Swire has written about the trade-off between the notion that "security through obscurity is an illusion" and the military notion that "loose lips sink ships", as well as about how competition affects the incentives to disclose.
The principle of security through obscurity was more generally accepted in cryptographic work in the days when essentially all well-informed cryptographers were employed by national intelligence agencies such as the NSA. Now that cryptographers often work at universities (where researchers publish many, perhaps even nearly all, of their results and publicly test others' designs) or in private industry (where results are more often controlled by patents and copyrights than by secrecy), the argument has lost some of its former popularity. An early example is PGP, released as source code and generally regarded (when properly used) as a military-grade cryptosystem. The wide availability of high-quality cryptography was disturbing to the US government, which appears to have relied on a security-through-obscurity argument in opposing such work. Indeed, such reasoning is often used by lawyers and administrators to justify policies designed to restrict high-quality cryptography to those authorized to use it.
As mentioned above, in cryptography the argument against security by obscurity dates back at least to Kerckhoffs' principle, put forth in 1883 by Auguste Kerckhoffs. The principle states that the design of a cryptographic system should not require secrecy and should not cause inconvenience if it falls into the hands of the enemy. It has been paraphrased in several ways, most famously in Shannon's maxim: "the enemy knows the system".
If it is true that any secret piece of information constitutes a point of potential compromise, then fewer secrets make for a more secure system. Systems that rely on secret design or operational details, apart from the cryptographic key, are therefore inherently less secure: vulnerabilities resident in any such secret details render the choice of key (e.g., short and simple vs. long and complex) largely irrelevant.
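To make the contrast concrete, here is a minimal Python sketch (the function names are invented for illustration, and it assumes the third-party cryptography package is installed) comparing a scheme whose only protection is its secret design with a Kerckhoffs-style scheme in which the algorithm is public and all secrecy resides in a replaceable key.

```python
# A sketch contrasting "secret design" with "secret key" (Kerckhoffs' principle).
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet


def obscure_encrypt(plaintext: bytes) -> bytes:
    """Security through obscurity: the whole scheme is the secret.

    Once anyone learns that this is "XOR every byte with 0x5A", every
    message ever protected this way is exposed; there is no key to change.
    """
    return bytes(b ^ 0x5A for b in plaintext)


def keyed_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Kerckhoffs-style design: the algorithm (Fernet) is fully public.

    Compromise of a single key is repaired by generating a new key;
    the design itself never needs to be kept secret.
    """
    return Fernet(key).encrypt(plaintext)


if __name__ == "__main__":
    message = b"meet at dawn"

    # The "obscure" scheme is trivially reversible by anyone who reads the code.
    hidden = obscure_encrypt(message)
    print(obscure_encrypt(hidden) == message)     # True: the same function undoes it

    # The keyed scheme publishes everything except the key.
    key = Fernet.generate_key()
    token = keyed_encrypt(message, key)
    print(Fernet(key).decrypt(token) == message)  # True only for the holder of the key
```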
The related philosophy of full disclosure suggests that security flaws should be disclosed as soon as possible, because once a flaw is discovered the protection provided by keeping the cryptographic key secret is already weakened. Effectively there is now more than one key that provides access: the original cryptographic key and a second "key" composed of the newly discovered flaws.
For example, someone who stores a spare key under the doormat, in case they are locked out of the house, is relying on security through obscurity. The theoretical vulnerability is that anybody could break into the house by unlocking the door with that spare key. Furthermore, since burglars often know the likely hiding places, the owner actually runs a greater risk of burglary by hiding the key in this not-so-secure way. In effect, the owner has added another key to the system, and a very easy one to guess: the fact that the entry key is stored under the doormat. The "key" is no longer simply possession of the physical key that opens the door; it now also includes knowledge of that key's location.
In the past, the internal details of several secret algorithms and software systems have become public. Accidental disclosure has happened more than once, for instance in the notable case of confidential GSM cipher documentation being contributed to the University of Bradford. Furthermore, vulnerabilities have been discovered and exploited in software even when the internal details remained secret. Taken together, these examples suggest that keeping the details of systems and algorithms secret is difficult and often ineffective.
Linus's law, that many eyes make all bugs shallow, also suggests improved security for algorithms and protocols whose details are published: more people can review the details of such algorithms, identify flaws, and fix them sooner. Proponents therefore expect, and Linus Torvalds claims is borne out in practice, that security compromises will be both less frequent and less severe for open software than for proprietary or secret software.
Security through obscurity can also be used to create risk for the attacker, and thereby to detect or deter attacks. For example, consider a computer network that appears to exhibit a known vulnerability. Lacking knowledge of the target's security layout, the attacker must decide whether or not to attempt an exploit. If the system is instrumented to detect attempts against this vulnerability, it will recognize that it is under attack and can respond, either by locking itself down until administrators have a chance to react, by monitoring the attack and tracing the assailant, or by disconnecting the attacker. The essence of this principle is that, by raising the time and risk involved, the defender denies the attacker the information needed to make a sound risk-reward decision about whether to attack in the first place.
A variant of the defense in the previous paragraph is to have two layers of detection for the exploit, both kept secret but one deliberately allowed to "leak". The idea is to give the attacker a false sense of confidence that the obscurity has been uncovered and defeated. An example of where this would be used is a honeypot. In neither of these cases is there any actual reliance on obscurity for security; these techniques are perhaps better termed obscurity bait in an active security defense.
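As a concrete sketch of the tripwire idea from the two paragraphs above, the following Python fragment (standard library only; the decoy path, port, and logging choices are hypothetical placeholders, not a production honeypot) exposes an apparent vulnerability that the defender knows is fake, so any request for it identifies the client as a probe before anything real is touched.

```python
# A minimal "decoy vulnerability" tripwire, assuming a plain standard-library
# HTTP server; real deployments would feed the alert into an incident-response
# process (lock-down, tracing, or disconnecting the attacker).
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(level=logging.INFO)

# A path that looks like a known-vulnerable admin console but is never linked
# from the real application; legitimate users have no reason to request it.
DECOY_PATH = "/phpmyadmin/setup.php"


class TripwireHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == DECOY_PATH:
            # Any request here is treated as a probe: record the source and
            # hand the event to whatever response the defender has chosen.
            logging.warning("probe of decoy %s from %s",
                            self.path, self.client_address[0])
            self.send_response(403)
            self.end_headers()
            return
        # Ordinary traffic is answered normally.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TripwireHandler).serve_forever()
```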
However, it can be argued that a sufficiently well-implemented system based on security through obscurity simply becomes another variant on a key-based scheme, with the obscure details of the system acting as the secret key value.
There is a general consensus, even among those who argue in favor of security through obscurity, that security through obscurity should never be used as a primary security measure. It is, at best, a secondary measure; and disclosure of the obscurity should not result in a compromise.
The related concept of security through minority (relying on a product that few others use) is most commonly encountered in explanations of why the number of known vulnerability exploits for products with the largest market share tends to be higher than a linear relationship to market share would indicate, but it is also a factor in product choice for large organisations.
Security through minority is good for organisations that will not be subject to targeted attacks, suggesting the use of a product in the long tail. However, finding a new vulnerability in a market-leading product is harder, because the low-hanging-fruit vulnerabilities are more likely to have already been caught, which suggests such products are better for organisations that expect to receive many targeted attacks. The issue is further confused by the fact that a new vulnerability in a minority product makes all known users of that product targets. With market-leading products, the likelihood of being randomly targeted with a new vulnerability may be lower.
This is closely linked with, and depends upon, the better-documented concept of security through diversity: the wide range of "long tail" minority products is clearly more diverse than a single monolithic market leader, so a random attack is less likely to succeed.
One instance of deliberate security through obscurity on ITS (the Incompatible Timesharing System) has been noted: the command to allow patching the running ITS system (altmode altmode control-R) echoed as ##^D. Typing alt alt ^D set a flag that would prevent patching the system even if the user later got the command right.
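The mechanism can be illustrated with a toy reconstruction in Python (purely illustrative; the real trap lived inside the ITS monitor, and the key handling here is simplified): the command that enables patching echoes a misleading string, and a user who types the echoed string instead of the real sequence trips a lockout flag that blocks patching from then on.

```python
# A toy reconstruction of the ITS trap described above (illustrative only;
# the actual mechanism was part of the ITS monitor, not Python).
REAL_SEQUENCE = "alt alt ^R"    # the actual "allow patching" command
ECHOED_AS     = "alt alt ^D"    # what the terminal echoed back to the user

patching_locked_out = False


def handle_command(keys: str) -> str:
    global patching_locked_out
    if keys == ECHOED_AS:
        # The user merely copied what they saw echoed: spring the trap.
        patching_locked_out = True
        return "nothing happens (lockout flag set)"
    if keys == REAL_SEQUENCE:
        if patching_locked_out:
            # Even the correct sequence is refused once the trap was sprung.
            return "patching refused"
        return "patching enabled"
    return "unknown command"


if __name__ == "__main__":
    print(handle_command("alt alt ^D"))   # copied the echo: sets the flag
    print(handle_command("alt alt ^R"))   # correct sequence, but now refused
```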