
Distrusting the web server #538

Open
llebout opened this issue Nov 18, 2018 · 33 comments

@llebout

llebout commented Nov 18, 2018

I would like to discuss standardizing a mechanism by which user agents can choose, through the user interface, to trust or distrust a specific version of a web page or a set of web resources forming a web app. Example use cases are the Tutanota and Protonmail email services: their users could trust a version of the served web app and know it is not being modified on the fly, potentially with added backdoors. An update to that web app would have to be approved again by the user.

@Malvoz

Malvoz commented Dec 4, 2018

I have only briefly skimmed the explainer and draft for Web Packaging, but the (browser?) functionality you are looking for could potentially make use of it.

There is also this discussion which is somewhat related: https://discourse.wicg.io/t/alternative-to-webpackages-to-gurantee-integrity-document-hash/3026

@llebout

llebout commented Feb 27, 2019

@Malvoz This looks very good.
If I understand it correctly, the browser could trust a set of origin-signed HTTP exchanges and bundle them into a web application package, trusting a whole website/app.
However, I am confused as to why we would add another layer of cryptography when TLS already provides authenticity?

@mischmerz

Some time ago, I wrote a browser extension that would generate hashes of all loaded sources, create a hash of those hashes, and check it against the hash from previous loads; in case they didn't match, it would check the signature of the final hash against a public key downloaded and stored on the initial visit to the website. This guaranteed that a) the user could decline to continue using a website if the source had changed, and b) the user could verify whether the changes to the website were authentic. This worked very well and protected the user even if the website itself had been compromised. I didn't pursue this development when Mozilla started requiring hoop-jumping to get extensions going, but I think it is (and was) a very elegant solution for protecting the integrity of any website.

I still don't quite understand why we don't have any standard that accomplishes something like that.

@llebout

llebout commented Mar 6, 2019

Indeed! TLS should be providing authenticity. What I am concerned about is individual version upgrades of web applications: I don't like that a website administrator can give me different code without my being explicitly notified of the update. This could be implemented by monitoring the agent cache on page load (before any script executes): if the web server overrides a file that was cached, the user is notified and must accept the upgrade to continue with site data; otherwise, site data can be cleared and the newer version executes. I realize JavaScript can load code on the fly with eval, which would allow someone to bypass these restrictions, so that is something to look for when the application upgrades. If the web application is multi-origin (external resources included), strict SRI checks must be enforced.

I'm trying to think of how it could best fit into the web ecosystem. First, the web server must be willing to offer such an ability to users; without the administrator's cooperation, such a thing cannot work (dynamically generated content, etc.).

The web server would be sending headers to explicitly encourage supporting agents to perform integrity checks.

ApplicationIntegrityFileHash: sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b
ApplicationIntegrityFile: relative/path/to/integrity.map

The agent fetches the integrity.map file and validates it against the provided hash:

integrity.map

relative/path/to/file: sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b
relative/path/to/file2: sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b

The agent then checks individual files against their hashes specified in the integrity map.
The agent should not allow sub-resources without strict SRI checks enforced; this recursively includes all sub-resources (first level, second, third, etc.).

However: the agent cannot practically prevent JavaScript from subverting these restrictions. If a web user sees JavaScript code in an upgrade of the web application that could subvert the restrictions, they should immediately notify the wider community of the issue.

If any of the hashes do not match the linked-to resources, or some sub-resources are missing SRI checks, the agent should warn the user that the web server is failing integrity checks and that it is not possible to continue because it would be dangerous. It should also mention that the web server could be misconfigured.

If the ApplicationIntegrityFileHash changes, the web application has been updated; the agent should ask the user whether they wish to continue with current site data, clear site data and then continue, or not continue at all. There should be no option to run the older version of the application; that would be too hard to manage for a web server that has no intention of providing backwards compatibility.

The agent should remember these ApplicationIntegrity headers so that they are checked on every site visit.

If the web server stops sending the ApplicationIntegrity headers, the agent should ask the user whether they wish to clear the application integrity information and site data and then continue, or not continue at all.

Please comment if you want to suggest any changes to the overall idea of this proposal.

@mischmerz

@leo-lb Thumbs up. That is exactly what we need. Additionally, it could (and should) include the CSP headers and of course would have to disallow eval. However, I would like to suggest an addendum: should the script code change, users should not have the option to continue with the old/cached code, because that may simply break things, especially if the new code is a bug fix and/or changes some data structures. It therefore must be "take it or leave it". Furthermore, we should investigate the option of signing the hashes (or the hashes of hashes) so that authorized changes to the script can be distinguished from much more dangerous unauthorized changes.
Just my two cents.

@llebout

llebout commented Mar 6, 2019

@mischmerz It should indeed include the CSP headers; however, I do not know exactly how to achieve that, or whether there are more headers that should be included. I don't think blocking eval is good: eval can be used on script generated by or included in the web application; where it is dangerous is on remote content. And there's not only eval: someone could implement a Turing-complete script interpreter in JavaScript without using eval and achieve the same result. It is an endless race against potentially unsafe JavaScript features, and I do not think it is appropriate to chase them. I think web users can very well audit served code and notify the community of such concerns, nullifying the website's claims of being more secure due to the Application Integrity feature.

I assumed there was no option to run the currently cached version when facing an update; I agree it should be explicitly mentioned.

However, should we add another layer of authenticity when TLS already provides it? Agreed, TLS, which requires private keys to reside on the web server, is not ideal, especially when third parties such as Cloudflare provide reverse proxies. I am worried that implementing separate authenticity checks increases complexity by a lot (key revocation, expiration, warning about/blocking weak cryptography, etc.).

@llebout

llebout commented Mar 6, 2019

I think CSP pinning is what we want for including CSP headers in the Application Integrity verification.

https://w3c.github.io/webappsec-csp/pinning/

@mischmerz

@leo-lb You are right, of course: there are many more ways to modify script, not just eval. I personally would never use it, as I think it's just too dangerous, but other people's mileage may vary. The only reason I was thinking about signed fingerprints is to make day-to-day software updates less intrusive. I often refactor code, fix bugs, or change certain elements; it would scare the living you-know-what out of me knowing that each change triggers some scary warning and lights up the help desk with concerned users. Now, the user agent should of course inform users about the change (I do that anyway because I want them to reload), but if the changes were signed, we could do it in a less scary way, maybe even with a developer note explaining what happened. We could tie the public key to the domain's name server (like SPF), giving domain owners full authority.

@llebout

llebout commented Mar 6, 2019

@mischmerz A software update should not be scary; only mismatches or clearings of application integrity information should be. Software updates just need to happen in a more transparent relationship with the user. I am against including a developer note about what happened: it could too easily be abused in phishing scenarios. If it is done, I would prefer it hidden behind a second button, "Show notes", for example. And I don't think a DNS record is the right way to transfer the information: DNS responses are relayed by intermediaries, and each of these could impersonate your signing key. I think TLS with HSTS, HPKP, and the Expect-CT header can address these issues without the need for another authenticity mechanism. You are trusting the providers that allow you to exist on the web not to cause such an "unauthorized change"; you inevitably need to.

@llebout

llebout commented Mar 6, 2019

To add to that: you should definitely separate the web servers serving static content from those exposing API or WebSocket endpoints.
The servers serving static content are less likely to be compromised, and if the API or WebSocket endpoints do get compromised, they cannot interfere with the static content being served or manipulate code that executes on users' computers.

@mischmerz

@leo-lb Now I am confused. :) If any (or all) scripts are changed, the user gets a notification with the option to continue using the web service or to cancel. This by itself is very scary. Imagine something like "The website has changed. Do you wish to continue?" on your bank's website, happening every time a bug fix is applied, because the new hashes (or hashes of hashes) differ from the ones the user agent stored on previous visits. If we simply use an integrity map and compare hashes, changes made by malicious actors, or made under duress, would not be discovered by the user, because the actors would most likely update the integrity map as well. What I suggested was a way to somehow distinguish between authorized and unauthorized changes.

@llebout

llebout commented Mar 6, 2019

You definitely should not make frequent changes when using such a mechanism; they should be batched into an update that happens less often. I think it is possible to use different icons or colors to signal the gravity of things to the user. You are the one who should be scared if your server is compromised; as I said in a previous message, to reduce the probability of your static content being updated by a malicious actor, you can separate the web servers that handle static content from those that expose APIs. If a malicious actor impersonates what should be identified as you, I don't think we will ever have a way to distinguish between "unauthorized" and authorized changes. Access to the server should be managed with a strict policy: run the web server software and any other network-enabled service with read-only access to files.
A DNS record seems inappropriate for storing signing keys.

@llebout

llebout commented Mar 7, 2019

We could have a TOFU (Trust On First Use) signing key transferred in a header and saved long term, with predefined renewal periods. That would happen only on the first request; subsequent requests would ignore the header, and after the renewal period (say, 3 months) the agent could accept the same or a new key by blindly trusting it as well. The advantage is that you can sign updates from a machine separate from the web server (with TLS, private keys reside on the web server, so signing happens there), and that it reduces the ability of a malicious actor to mount a successful attack.

@llebout

llebout commented Mar 7, 2019

Or we could allow the ApplicationIntegrityFile to be stored on another origin, as a malicious actor is less likely to control both. However, I am thinking this could be abused to take a website offline simply by taking the ApplicationIntegrityFile offline, and it is not very efficient, because two TCP connections would need to be initiated (HTTP/2 tried to solve that bottleneck; it would be sad to add another one).

@mischmerz

YES! TOFU +1. I was about to suggest the very same thing. Though in my version the web devs would be free to change the key at any time (keys do get lost), triggering a warning to the users, and the key's expiration time would be much longer. So the workflow would be: stop the web service -> update scripts -> sign scripts/hashes -> restart the web service. I like that very much. This would address malicious changes even if the web server were fully compromised. It would also allow devs to simply delete the key if they fear being pressured by hostile entities; in that case, they wouldn't be able to modify content even under duress.

@llebout

llebout commented Mar 7, 2019

Sounds good to me!
Summary:

The web server would be sending headers to explicitly encourage supporting agents to perform integrity checks.

ApplicationIntegrityPublicSigningKey: Zmpkc2s7Zmpka3NsO2dqZGZrcztnamRma2w7c2pna2xkZjtqc2tsZ2ZqZHNnajtkZmxz
ApplicationIntegrityKeyExpiration: 2020-01-01
ApplicationIntegrityKeyRenewalWindow: 3
ApplicationIntegrityFile: relative/path/to/integrity.map

The agent fetches the integrity.map file and validates its attached signature against the ApplicationIntegrityPublicSigningKey:
integrity.map

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

relative/path/to/file: sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b
relative/path/to/file2: sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b

In this update, we added two features and fixed five bugs.
-----BEGIN PGP SIGNATURE-----
iEYEARECAAYFAjdYCQoACgkQJ9S6ULt1dqz6IwCfQ7wP6i/i8HhbcOSKF4ELyQB1
oCoAoOuqpRqEzr4kOkQqHRLE/b8/Rw2k
=y6kj
-----END PGP SIGNATURE-----

The agent then checks individual files against their hashes specified in the integrity map.
The agent should not allow sub-resources without strict SRI checks enforced; this recursively includes all sub-resources (first level, second, third, etc.).

However: the agent cannot practically prevent JavaScript from subverting these restrictions. If a web user sees JavaScript code in an upgrade of the web application that could subvert the restrictions, they should immediately notify the wider community of the issue.

If any of the verifications fail, or some sub-resources are missing SRI checks, the agent should warn the user that the web server is failing integrity checks and that it is not possible to continue because it would be dangerous. It should also mention that the web server could be misconfigured.

If the ApplicationIntegrityFile changes, the web application has been updated; the agent should ask the user whether they wish to continue with site data or not continue at all. The dialog should also include a button letting the user read about the changes in the update; these details are contained in the ApplicationIntegrityFile, before the signature and one empty line after the hashes. There should be no option to run the older version of the application; that would be too hard to manage for a web server that has no intention of providing backwards compatibility. All verifications should be performed again: signature, hashes, and SRI checks.

If the ApplicationIntegrityPublicSigningKey changes ApplicationIntegrityKeyRenewalWindow or more days before the cached ApplicationIntegrityKeyExpiration date, the agent warns the user about continuing; it should display the fingerprint of the new signing key so that the user can check it against third-party sources before continuing. The user should be given the choice of clearing site data and then continuing, or not continuing at all.

If the ApplicationIntegrityPublicSigningKey changes within ApplicationIntegrityKeyRenewalWindow days before the cached ApplicationIntegrityKeyExpiration date, or after the ApplicationIntegrityKeyExpiration date, then the new key is blindly trusted in a TOFU (Trust On First Use) model.

The agent should remember these ApplicationIntegrity headers so that they are checked on every site visit.

The cached ApplicationIntegrityKeyExpiration and ApplicationIntegrityKeyRenewalWindow are not updated unless the ApplicationIntegrityPublicSigningKey changes.

If the web server stops sending the ApplicationIntegrity headers, the agent should ask the user whether they wish to clear the application integrity information and site data and then continue, or not continue at all.
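The key-rotation rules above could be sketched as a small decision function. This is an illustration only; the function and field names are assumptions, dates are ISO strings as in the header examples, and the renewal window is in days:

```javascript
// Sketch of the proposed key-rotation decision.
const DAY_MS = 24 * 60 * 60 * 1000;

// cached = { key, expiration, renewalWindowDays } from the agent's store.
// Returns "keep" (key unchanged), "accept" (TOFU the new key inside the
// renewal window or after expiration), or "warn" (early, unexpected
// change: show the new key's fingerprint and ask the user).
function keyRotationDecision(cached, presentedKey, nowMs) {
  if (presentedKey === cached.key) return "keep";
  const expiration = Date.parse(cached.expiration);
  const windowStart = expiration - cached.renewalWindowDays * DAY_MS;
  return nowMs >= windowStart ? "accept" : "warn";
}
```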

@llebout

llebout commented Mar 7, 2019

I think I have finished editing my previous message into a mature version; tell me what you think. Also, what about developing a browser extension, plus the development tools needed to sign web application files and make the web server send the new headers, to test such a feature in practice? This would further confirm that such a thing works, motivate changes to our proposal so it is ready for production, and increase the legitimacy of the feature for standardization.

@mischmerz

@leo-lb I think you nailed it. Great work. I suppose all key generation and signing can be done with OpenSSL. The new headers can simply be generated by a server-side script. As to the extension development: let me check my schedule. I am currently a bit busy with a project, but maybe I'll find some time or can ask around for some development help. I'll let you know.

@llebout

llebout commented Mar 7, 2019

I made a decision tree; please tell me if I got it wrong. I've been staring at it for some time, and I think it's better if an outside mind re-checks it.

[decision tree image: webapp-integrity]

@llebout

llebout commented Mar 7, 2019

@mischmerz With Apache httpd, for example, we can use a .htaccess file to add the headers, and a few bash scripts can sign all the files in the web root directory and update the .htaccess. I can create that. However, I have next to zero experience with extension development, even though I can code in JavaScript.

@hlandau

hlandau commented Mar 7, 2019

I previously suggested something along these lines here: w3c/webappsec-subresource-integrity#66

The idea is simply to add subresource integrity support to service workers. Since service workers are JavaScript workers which can proxy all requests to a given origin, they would allow site operators to implement arbitrary verification policy in a service worker. This would establish a TOFU-like relationship where the service worker and integrity data is cached on the first visit to the site.

Since this would use the subresource integrity mechanism, the service worker JS file would be referenced by a cryptographic hash, so it couldn't easily be changed. But that worker could implement logic to verify a chainloaded JS file via other, arbitrary means, implemented in JS (e.g. cryptographic signature). Such a worker could implement upgrades, replacing itself with a new service worker and integrity hash, in the same way.

This would be a lot simpler and more flexible, and would be a fairly minor change.

(It would also create scope for browser extensions which maintain a list of known service workers for added security, akin to HTTPS Everywhere, to close the TOFU loophole.)

@llebout

llebout commented Mar 7, 2019

@hlandau I don't know exactly how service workers work. How can a service worker enforce a policy from the first visit? It needs to be loaded from the website first, no? What prevents the website from no longer loading the service worker on a given origin? How can the user be alerted to that? Is the service worker permanently registered to proxy requests to a given origin from the first visit?
I do agree that the solution we are proposing clearly lacks flexibility, and delegating that task to custom code should be better, though verifying service workers would have to be written with great care. The issue is that agents would now need a JavaScript engine to perform the verification, wouldn't they? What we are proposing could be implemented in curl, for example, without spinning up V8.

@hlandau

hlandau commented Mar 7, 2019

Is the service worker permanently registered to proxy requests to a given origin from first visit?

Yes.

The issue is that now agents need a JavaScript engine to perform the verification, or does it?

My basis for proposing this is that the use case for this kind of "distrust the server" approach is when the server is transmitting JavaScript to the client. It would provide a basis for better securing browser-based crypto, for example. What exactly do you want to distrust if not JavaScript...?

@llebout

llebout commented Mar 7, 2019

What exactly do you want to distrust if not JavaScript...?

One may want to audit the newer code without running any JavaScript, or one may want to serve ISO files on a web server and use such a mechanism to automate the integrity verification that some currently do with PGP, and that Tails from The Tor Project even automated with a browser extension. ( https://tails.boum.org/contribute/design/verification_extension/ )

For Tails or ISOs it's not the perfect example, because they are often downloaded only once, and the installed systems then auto-update and perform key verification themselves.

I am simply uneasy about requiring JavaScript for such a feature. Also, the performance of cryptography implemented in JavaScript is rather bad, especially when your browser has JIT disabled. It would therefore increase page load times by a lot if the service worker had to validate an application that does not itself use cryptography (and thus does not already suffer from such latency).

@mischmerz

@hlandau Here's the problem: we don't really have immutable ways to store TOFU data when using service workers. All databases available to script may expire, may be deleted through hard reloads (or clearing the cache), or may even be deleted by script itself. Yes, your approach looks elegant, but I think it's the user agent's overall job to provide security independent of the code/script itself.

@hlandau

hlandau commented Mar 8, 2019

@mischmerz The service worker wouldn't need to store any data; the cryptographic identities it trusts would be embedded in the script itself. The only thing that would need to be permanently stored is the (origin, service worker JS URL, integrity hash) tuple.

@mischmerz

So what happens if I instruct script to update the service worker (after I've changed it to an evil version with evil keys)? The service worker is not immutable either.

@hlandau

hlandau commented Mar 10, 2019

@mischmerz You can't run scripts on the origin unless they were loaded by the service worker, because the service worker intermediates all HTTP requests.

@llebout

llebout commented Jul 17, 2019

Found this: https://github.com/tasn/webext-signed-pages

@mischmerz Seems like someone has made that extension already.

@Malvoz

Malvoz commented Jul 17, 2019

FWIW, I just remembered this related discussion on SRI for <iframe>:
w3c/webappsec-subresource-integrity#21

@mischmerz

mischmerz commented Jul 17, 2019 via email

@ghost

ghost commented Mar 31, 2021

Related: Web Applications should not have internet access by default (#578)

@ghost

ghost commented Apr 6, 2021

There is also this discussion which is somewhat related: https://discourse.wicg.io/t/alternative-to-webpackages-to-gurantee-integrity-document-hash/3026

However, a document hash could also be used to assess the integrity of a Web Bundle itself, since a Web Bundle is a single binary file, contrary to a PWA. A Web Bundle hash could thus be used to check that the bundle has not been tampered with, by comparing it with the hash present in the manifest bundled with it, and also with a hash hosted on a trusted website.
