Ethics of full disclosure.
One of the ever-returning topics in computer security fora is "full disclosure": Is it ethical to release tools that exploit a security vulnerability? Is it ethical to release information that makes it trivial to produce an exploit? One side of the argument basically says that it's not ethical, because releasing exploits adds nothing for the white-hat consumer of the news but makes attacks easy for script kiddies. The other side of the argument often points to suppliers who don't move swiftly to fix problems unless an exploit is known and publicly available. This side also notes that it's often not possible to describe a fix without making an exploit obvious.
There is another angle to this, though: Where vulnerabilities are due to design issues and workarounds are expensive, the unavailability of public exploits may lead to continued deployment of insecure setups, despite awareness that the security design is flawed. Of course, it's dangerous to assume that just because there is no publicly available exploit, possible attackers aren't able to get access to a private one.
"Hi, you realize that your recently-deployed WLAN+VPN setup can be used to steal user names and passwords, possibly on a massive scale?" -- "Well, yeah, we knew about the vulnerability, but it didn't look like it's easily exploitable, and after all, there are no exploits out there." -- "It's extremely easy to exploit. Look, here's how it goes, and yes, I have the software I need to do this. Want a demo?" -- "Duh. But we'd be interested to learn about secure setups."
I wonder, can it be unethical to keep an exploit to a well-known security weakness private?