I wonder if this will further reduce the incentive for companies to disclose that they've been hacked. If, upon disclosure, they know they'll get a red flag, then why should they? Without more information (e.g. the extent of the hack, how long ago it was, how it was handled, what the company has done to make things better) either integrated into the metric, or provided as separate metrics, this might be counter-productive. But who decides on those metrics and sets the hurdle to display a flag or not? I would suggest a government or UN agency, perhaps akin to a CERT org; in fact first.org (as a confederation of CERTs) might be a starting point.
Also, this sort of thing might produce false negatives where different sites/URLs use the same (compromised) backend data.
Same here. But then again, if you don't disclose it and cover it up properly, who's to say you've been hacked? It's a bit of a chicken-and-egg problem: unless some external party can prove you were hacked, which is not always possible, it's hard to apply such a law to anyone.
To apply it properly, countries would have to scrape and investigate every site in their jurisdiction, which is a problem: automated processes for this can produce false positives, and with fines this high, a wrong accusation from an automated process could be devastating for the accused company. (Which is why I think, or hope, they are careful about implementing such a measure.)
While well-intentioned, this project has the potential to concentrate large amounts of power. Who decides whether a website is "valid" or not? Just as internet companies such as Google and Facebook have, wittingly or not, become the gatekeepers of news, this would create a similar power structure for websites. Competitors? Spend money hacking one of their lower-priority services to get an entry on haveibeenpwned.com and then a flag in the Firefox browser.
While these power structures are concerning, we should be careful not to write the idea off prematurely, as consumers clearly stand to benefit a lot from such a service. If it were coupled with ruthless transparency and digestible explanations, it could be far fairer. A list of "vulnerabilities exploited within the past 2 years", with summaries and references, would be more forthcoming than an oversimplified flag indicating "bad or not".
Did you read the article? Firefox is not going to tell you if a site is good or bad. It is going to tell you:
“Firefox is working on the feature in collaboration with "Have I Been Pwned," the popular site that can check your email and tell you if your credentials have been stolen by hackers.”
There is nothing subjective about this. If your email is in the HIBP list you are possibly screwed. This is very real, based on real data.
> If you visit a site on Have I Been Pwned's naughty list, it will throw a flag stating, "You visited hacked site ashleymadison.com."
This does more than just tell users if their credentials have been compromised. That copy is likely to affect traffic in a manner similar to invalid HTTPS certs.
Wouldn't you agree that some degree of subjectivity lies within the specific partnership with Have I Been Pwned and its submissions? Popular does not mean just.
A lot of this is contingent on how the user interface and data are presented in the program. That "there is nothing subjective about this" may be true for only a facet of the whole issue. No doubt, there are potential benefits for everyone here, but some of the worst outcomes have resulted from the best intentions.
Here are a few reasons that I hope you find somewhat convincing:
- Single point of failure. It's run by one person who can be bribed or manipulated. I've never met Troy Hunt, and from what I can tell he seems like a great, upstanding guy[1], but $5 wrenches will work on my kneecaps too.
- What's the process for reporting and publishing? Appeals? There's no transparency here, aside from a list of 5 one-sentence steps in the FAQ[2]. How do we decide what data are "sensitive"? My guess is that there's not too much review.
- As hinted above, with a publicly available shame list, there would exist new incentives to get competitors and enemies on the list. How does this play with a loosely documented review + publish process? With a user interface that might oversimplify the issue?
Please don't misunderstand these points as hostility. I really really like Have I Been Pwned and think it's amazing what Troy is doing. It's just that if we're building new web infrastructure, it's important that we get it right. I think where we can agree is that web users deserve more control over their data by default and shouldn't have to learn how to install and use several fringe programs to do so.
Troy Hunt works for Microsoft. Who's to say the employer won't influence the list for its own gain? E.g. if Azure and AWS were both hacked, there's a high chance of AWS appearing on the list but not Azure.
I think he is an MVP, which bears no relation to being a member of staff.
He advocates for their products from what I can see, probably because he was a .net dev.
To the extent data breaches are only committed by external bad guys, it's fine. But if hacking a competitor causes a user-visible flag, there is now an incentive to do it.
> While well-intentioned, this project has the potential to concentrate large amounts of power.
Compare this concern with Google's "Safe Browsing" lists. Granted that's more for phishing, but it definitely could be troubling if used incorrectly. All we can ask is users be vigilant about abuses.
Likely a considerable group of users will ignore a "bad" flag, just as they have ignored all sorts of flags in the past. And the less black-and-white (red or green) the flag is, it seems to me, the more people will ignore it.
If users fully understand what it means, fine. But I suspect this will create a "one strike and you're out" dynamic for web developers. Nobody writes perfect, bug-free software (though some of us try harder than others). A "has ever been pwned" boolean is not much more useful than "has ever had a bug".
In most other engineering disciplines, “one strike and you’re out” is the norm, because of the amount of damage that a single uncaught error can make. As software takes on more and more important duties in the world, it starts approaching this point as well.
Ashley Madison, Equifax, just two, before even pulling out the good old search engine. I believe at least a significant percentage of breaches are due to malpractice. If that's a distinction being made, I have no problem shaming the perpetrators out of their business.
> Or more reasonably, nearly every major site or service that has ever existed.
So maybe this could help change practices to be safer. I think this mindset that "90% of companies don't prioritize security, so we should just accept this" is dangerous. As a consumer, data breaches can cause massive headaches - if you've ever dealt with identity theft, you know this. The security with which companies store their data should be factored into your decision of whether or not to be their customer, because it does affect the value their product provides you.
I suppose I was saying that not all breaches are created equal. The Target breach, as I vaguely recall, was initiated through some IoT devices on the far edge of the network (then escalated into critical systems). That doesn't make it right, by any means, but somehow it seems a more complex problem to control than simply encrypting core data, specifically if said data are your crown jewels to begin with (credit card numbers: Equifax's core business). I didn't study the details of the particular Target breach, so I may have things wrong, but I'm merely trying to illustrate that I could have more sympathy for some breaches than others.
The current situation is "∞ strikes and you're out", which is pretty terrible. I think it's worth exploring ways to change that situation, even if there's a chance it'll be used too strictly.
I'm not sure why everyone is poo-poo-ing this. I think this is going to be a good first attempt at this kind of feature. With a few changes and refinement it could help users steer clear of bad service providers.
I think that everyone is poo-poo-ing this because it means there would be lasting consequences for prioritizing permissionless innovation over solid engineering practices coupled with defense in depth. Given the audience for this site, it's not surprising that there would be consternation over this kind of feature.
>because it means there would be lasting consequences for prioritizing permissionless innovation over solid engineering practices coupled with defense in depth.
Awful, just awful! How could anyone make money with software if I had to engineer it properly? Why do such cruel people exist who force me to do this?
Truly only a villain would.
To be more serious, I hope that developments like this will finally hammer the point home for the "quick'n'dirty" developer crowd; the node.js ecosystem especially seems prone to this.
No thanks. This is not the way to correct for the problems we now face.
Now Firefox will ping a third party with the user's identity, to validate the user's current level of safety based on what other people know about that user ID. Dialing out to this third party will produce externally observable traffic indicating authentication whenever threat-validation traffic occurs.
Maybe I don't share the same idea about what represents safety.
An investigator found a photocopy of my driver's license in a file cabinet at a supervillain's hideout. So now I legally change my name and apply for a new license under that name, yes? Well, great, this is the new normal.
“This is only meant as a vehicle for quick testing as we iterate on the design, and to help visualize different ideas. In its current state it is in no way meant to represent actual production code, or how the feature will work or look like when it ships. Pull requests are welcome but please keep in mind the highly experimental/volatile nature of this repository.”
And:
“The third goal brings up some privacy concerns, since users would need to supply an email address to receive notifications. Who is the custodian of this data? Can we avoid sending user data to haveibeenpwned.com? Can we still offer useful functionality to users who opt out of subscribing their email address? While the project is still in infancy, the idea is to offer as much utility as possible while respecting the user's privacy.”
It is easy to jump ahead and come to some conclusions but this is an experiment currently. If you think Mozilla is going to send your email to a third party without your consent then you are wrong.
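On the "Can we avoid sending user data to haveibeenpwned.com?" question, a partial answer already exists for passwords: HIBP's Pwned Passwords endpoint supports a k-anonymity range query, where the client sends only the first 5 hex characters of the password's SHA-1 hash and matches the remaining suffix locally, so the full credential never leaves the machine. Here is a rough sketch of the client side in Python; the HTTP call to `https://api.pwnedpasswords.com/range/{prefix}` is deliberately omitted, and the sample response body below is illustrative, not real API output:

```python
import hashlib

def split_for_range_query(password: str) -> tuple[str, str]:
    """Hash a password with SHA-1 and split the hex digest into the
    5-char prefix sent to the server and the suffix kept locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def suffix_in_response(suffix: str, response_body: str) -> int:
    """Scan a range-API response ("SUFFIX:COUNT" per line) for our
    suffix. Returns the breach count, or 0 if the suffix is absent."""
    for line in response_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count.strip())
    return 0

# The server only ever sees the 5-char prefix; the comparison against
# the returned suffix list happens entirely on the client.
prefix, suffix = split_for_range_query("password")
```

Whether an equivalent scheme could work for email-address lookups is exactly the open design question the quoted text raises; HIBP's email search, unlike its password search, takes the full address.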
They have a prototype, I'm not sure why calling that the current implementation is problematic (I think it implies that change is quite possible...).
If you read my comment, you'll see that I talk about the user needing to provide credentials for them to be disclosed to a third party, so I think you misunderstood my meaning somewhere there.
>Hacked sites, on the other hand, might not be too thrilled about a feature that will shame them about their previous lax security.
A simple flag doesn't reflect the quality of a site's security. You might have no flag despite terrible security, simply because no one cared to attack you until now, while another website might have invested tremendous effort in security and still met its doom because the hackers were numerous, obstinate and smarter. Security is a difficult matter, and a breach doesn't by itself make you "lax". I hope the plugin will be descriptive, because I don't see that in the readme.
I would be equally or more interested in a plugin shaming websites that store passwords in plain text, cap passwords at 20 characters, forbid anything other than letters and digits, etc. You could use the pwned DB to gather intel on the actual level of security of a website and flag sites that use outdated hash algorithms or other bad password-storage practices. That would be more objective and would force websites to fix their crap.
A badge that just shows "this site has been hacked" wouldn't be very helpful to me.
I'd like the badge to reflect how recent the hack was (and maybe whether it was covered up or publicly disclosed). I'd also want to see a log of all "hacks" against the domain and links to the details of each hack.
And as far as I know, they will only notify you when you use the specific credentials that were breached, in which case you should care regardless of how long ago the breach was.
Wouldn't it make more sense to tell you whether a site you had visited was breached, and when? What if you never go back to a site you used your common credentials on? It seems wrong to just say "this site has been hacked" when you go there (what major site hasn't been at some point?). With Have I Been Pwned data, FF should be telling you if any username/password combos it has remembered have shown up on any lists and which sites you've used them on, then tell you to change them and remind you not to reuse the same combos across sites.
That we know about. The vast majority of hacks will never be exposed, so when the flag doesn't trigger, it will make people feel more secure on the web than is probably warranted. Note also that between the moment a hack actually happens and the moment it is published (if it ever is), a substantial amount of time can pass.
I like this in concept. I do fear, however, that larger or deep-pocketed companies that end up on “the list” will be able to litigate and lawsuit their way off while the smaller guys will be banished forever.
Firefox is really going overboard with this crap. Labelling sites as good or bad is not okay with me. Even the https warnings are excessive in my opinion but this is another level of arrogance.
I wonder where they will get this information, and whether, in the event of false positives, people will perceive it as internet censorship.
With great power comes great responsibility. Yeah, I know it's a cliché, but it's true.
While I would love to see services like HIBP used to improve the overall security of the Internet, I have my doubts that it will be useful, and I suspect that in some instances it may actually be harmful. Naming and shaming companies that were breached at some point in time is only good for a while, not forever. Just like the prison system: carrying the extra baggage of having once been labelled a criminal is not constructive, and in fact it has been shown countless times to be anything but.
If the service delivers a one-time notification when the breach occurs (a sort of broadcast), that will be very useful: it will help users stay secure while keeping companies incentivised to ensure their shit is tight. But if the service keeps warning about breaches that occurred 2 years ago, then I suspect it is only going to make companies pay out ransoms in the form of "bug bounties" when they get breached. With the introduction of GDPR, this will only become a more lucrative way to make legitimate money through criminal activity.
Last but not least, I know there are quite a few people on this "forum" who actually do security for a living, so they may object, but I've also been doing this all my professional life, and what I've learned is that no one is protected. Banks are just as easy to hack as anything else, and people tend to think that all that matters is customer data. What about employee data? What about your government records? It is not so black and white in the end, and while it is easy to point out the problems, it is much harder to fix them.

I know this sounds like an excuse, but most pentesters, security researchers, etc. do not run extensive infrastructure themselves, and when they write code, while sometimes technically challenging, it is rarely more than a few-thousand-line monolith with fairly straightforward logic. Most of this software is poorly written (yes, there are a few exceptions and we all know them). That does not make them/us great coders, just great at discovering and exploiting vulnerabilities.

The reason I mention this is that with my security hat on I can easily point out a problem, laugh at it and wonder why it was not fixed, but I have also seen the other side. Building a company from scratch is anything but simple, and if you have been doing it for more than 5 years you are likely to have legacy systems already, which is crazy, but that is how fast technology moves. So yes, things will get hacked. We need to change our entire mindset about how to run companies in order to prevent this, and as far as I know, no one has figured that out yet.