Discussion for guidelines on removing HSTS preload rulesets #7126
Comments
@J0WI You can't get added to the HSTS lists without having `preload` in the header.
Also, we should clearly document any requirements we have for the
I was not able to reproduce the issue in #7081 (comment), but since this already is a subdomain and the main host (
I think we have to check the Alexa
Just a few more thoughts:
For the https://wiki.openssl.org issue: on review I agree that HSTS works at the subdomain level, so if the header is good for https://openssl.org and includes subdomains, it is also good for https://wiki.openssl.org/index.php/Main_Page, which is what you said. So we agree. Also, the submission requirements say to redirect from HTTP to HTTPS on the same host, which is the behavior I'm seeing for the redirect I described, which is fine. The header requirements should be the same for any ruleset, not just the Alexa ones, but I agree that, as usual, the higher-ranked Alexa domains should get more attention. After more thought I've changed my mind and agree with you @J0WI about checking that the submission requirements are met before we delete a ruleset, to make sure that the site admin knows what they're doing. In practice this means:
Also, if we reject a deletion pull request because the requirements aren't met, we should just close the pull request with a comment, because that's not something the contributor can fix. I've reverted the deletions you noticed earlier, thanks for checking them.
@J0WI When we check the header, we should check the base domain and not worry about
I merged the revert commits and can confirm that v2ex.com sends a valid header. As soon as we have a guideline about this, we can also add checks for it to our ruleset tests.
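A minimal sketch of such a check with curl, using v2ex.com from above (exactly which requirements to test is what this thread is settling):

```
# The base domain should send a Strict-Transport-Security header that
# includes max-age, includeSubDomains and preload:
curl -sI https://v2ex.com | grep -i '^strict-transport-security:'

# Plain HTTP should redirect to HTTPS on the same host:
curl -sI http://v2ex.com | grep -i -e '^http/' -e '^location:'
```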
@J0WI Do you think we should stop reviewing and merging other HSTS deletion pull requests until we hear from the others? I think you and I agree on the rules.
What if the site does not support HTTP anymore after the preload? Such as calomel.org:

```
$ curl -I https://calomel.org | grep preload
strict-transport-security: max-age=31556926; includesubdomains; preload

$ curl -I http://calomel.org --connect-timeout 30
curl: (28) Connection timed out after 30000 milliseconds
```
I have created about 128 PRs, if my records are correct. Some statistics:
- HTTP Timeout: 1
- HSTS Header: 121
That's a requirement on which I don't really agree with Google, but I think we should strictly follow their guideline.
For redirecting: once a site is HSTS preloaded, the HTTP version of the site becomes inaccessible, which I guess is why Google checks that HTTP redirects to HTTPS.
Something else to watch out for: in pull request #7077, http://raymii.org returns an HTTP 200 but uses a "meta refresh" to redirect to https://raymii.org/s/. So unless this configuration was changed after it was accepted into the preload list, it means that:
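A header-only check misses this kind of redirect, since a "meta refresh" lives in the HTML body rather than in a Location header; a rough way to spot it (the exact tag contents are my assumption):

```
# The redirect is a tag in the page body, something like:
#   <meta http-equiv="refresh" content="0; url=https://raymii.org/s/">
curl -s http://raymii.org | grep -i 'http-equiv="refresh"'
```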
@J0WI You didn't answer the question in #7126 (comment), but I'm assuming deletion pull requests are open for review now.
Another thing to look for: in pull request #7060, http://whatwg.org redirects to http://www.whatwg.org, which redirects to https://whatwg.org. So unless the configuration was changed after it was accepted into the preload list, this means that:
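One quick way to see such a chain is to let curl follow the redirects and print every Location header it passes through:

```
# Each hop of the redirect chain shows up as one Location line:
curl -sIL http://whatwg.org | grep -i '^location:'
```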
Thanks @mnordhoff. My interest is in knowing which tag/HSTS file my current browser is using. It looks like that is the
@mnordhoff the most reliable way to determine the current release of ESR and stable that I've found is:

```
STABLE_VERSION=$(curl -D /dev/stdout "https://download.mozilla.org/?product=firefox-latest&os=linux64&lang=en_US" 2>&1 | sed -n '/Location: /{s|.*/firefox-\(.*\)\.tar.*|\1|p;q;}') && \
DEV_VERSION=$(curl -D /dev/stdout "https://download.mozilla.org/?product=firefox-aurora-latest-ssl&os=linux64&lang=en-US" 2>&1 | sed -n '/Location: /{s|.*/firefox-\(.*\)\.tar.*|\1|p;q;}') && \
ESR_VERSION=$(curl -D /dev/stdout "https://download.mozilla.org/?product=firefox-esr-latest&os=linux64&lang=en_US" 2>&1 | sed -n '/Location: /{s|.*/firefox-\(.*\)\.tar.*|\1|p;q;}')
```

This is the method that we use in the https://github.com/EFForg/https-everywhere-docker-base/ repo. Once we get that, I think it makes sense to check out the tag corresponding to whichever release you're interested in, and then look at the preload list included there.
Sounds legit. Or sort of legit. With a little more
A little guidance here: We should only remove target domains which are HSTS preloaded if the preload is active for all currently supported versions of Firefox (ESR, Stable, and Dev) as well as the current version of Chromium. I'll add this to the
#8127 is a node utility which fetches the HSTS rulesets from Firefox {Dev, Stable, ESR} and Chromium (head), and deletes targets contained in all of them. It also deletes rulesets completely if all of their targets would otherwise have been deleted. Once this is merged, we can incorporate it into the release process.
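The Chromium side of that fetch can be sketched in shell; this is an assumption-laden illustration (the gitiles URL, the ?format=TEXT base64 encoding, and the JSON layout with //-comments are as I recall them), not the utility itself:

```
# Fetch Chromium's static preload list; gitiles serves it base64-encoded,
# and the JSON contains //-comment lines that must be stripped before parsing.
curl -s 'https://chromium.googlesource.com/chromium/src/+/main/net/http/transport_security_state_static.json?format=TEXT' \
  | base64 -d \
  | grep -v '^\s*//' \
  | jq -r '.entries[] | select(.mode == "force-https") | .name'
```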
#8128 is the result of running the above-mentioned utility.
freedomofpress/securedrop#1517 (comment) provides additional context for the process of HSTS removal, which has not been formalized just yet. I suspect once this process is formalized, we will have to revisit the HSTS pruning utility in #8127 |
To add more context, the first step will likely be to start removing "new" (since Feb. 29, 2016) entries that respond over HTTPS, but do not send exactly one header, or that do not have one of:
Any other removal conditions will be tricky, but we will need to look at them carefully. |
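The "exactly one header" condition above is easy to spot-check with curl; a minimal sketch, with example.com standing in for a real entry:

```
# A compliant entry sends the Strict-Transport-Security header exactly once,
# so this should print 1:
curl -sI https://example.com | grep -ci '^strict-transport-security:'
```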
Thanks, @lgarron - I'll add these additional conditions to our internal hsts pruning utility. |
@Hainish lmk when this is ready for a final review |
@lgarron for our hsts pruning utility, we attempt to exempt HTTPS Everywhere target domains from removal if they are likely to be removed from the preload list, to ensure that once you remove these domains they will not be left unprotected by HTTPS Everywhere as well. This means our exemption logic should match up with your removal logic. To this end, I have a few questions:
No, since the spec is not case-sensitive.
The scans will be checking for it, but I don't know if it's safe to include in the policy yet. I recommend the more conservative approach, but talk to me if the numbers look burdensome?
@lgarron in our case, the most conservative approach is to use the most stringent conditions in order to limit the amount of rules removed. We'd like to avoid as much as possible removing target domains that may be removed from the preload list in the future. For this reason we'll implement checking for HTTP redirects as well, in case you implement that in the future.
Also, if you want to test against the actual requirements checker, https://github.com/chromium/hstspreload should be easy to run if you have
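A minimal sketch of driving that checker, assuming a Go toolchain and that the CLI and its preloadable/removable subcommands are laid out as I remember (treat the paths as assumptions):

```
# Install the checker CLI (command path assumed):
go get github.com/chromium/hstspreload/cmd/hstspreload

# Would this domain satisfy the submission requirements?
hstspreload preloadable example.com

# Does this domain currently qualify for removal?
hstspreload removable example.com
```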
To wrap up this discussion, the guidelines on removing HSTS preload targets are formalized in the
I'll include this in the documentation for
@gloomy-ghost has submitted several pull requests deleting rulesets that are in the HSTS preload lists: Chromium, Firefox, Tor. (@gloomy-ghost has helpfully included links to these rulesets in their pull requests.) @J0WI and I have separately handled these, and it turns out we disagree on when a ruleset should be deleted. We have mainly discussed this in pull request #7081, but I'll repeat the discussion below.

@J0WI rejects rulesets for deletion if the `Strict-Transport-Security` header does not contain the `preload` directive that is required by the HSTS preload submission requirements, even if the domain is in the preload lists. I assume that the `preload` directive was in the header when the domain was submitted to the lists and was later removed for whatever reason. This turns out to be a common situation. My own opinion is that if the domain is in the preload lists and it has a `Strict-Transport-Security` header, then we can delete the ruleset even if `preload` is missing. @J0WI correctly points out that, according to the submission requirements, a domain that stops meeting them may be removed from the lists. I think it is too restrictive to not delete a ruleset that is in the preload lists just because it might be removed from the lists later.

Another problem that I found in pull request #7053: https://wiki.openssl.org has a complete `Strict-Transport-Security: max-age=31536000; includeSubDomains; preload` header, but its redirect target https://wiki.openssl.org/index.php/Main_Page doesn't even have a `Strict-Transport-Security` header. Should we delete a ruleset if some of the responses have the header but some don't? If so, how many URLs do we have to check? What if only `preload` is missing?

Pinging @fuglede and @Hainish for their input.