
Re^5: SSL on PerlMonks

by perl-diddler (Chaplain)
on Sep 26, 2017 at 23:48 UTC ( #1200164=note )

in reply to Re^4: SSL on PerlMonks
in thread SSL on PerlMonks

Because there's an uptick in the use of proxies that play man-in-the-middle (MitM) in order to provide caching.

Before, when it was only sensitive sites using https, caches simply ignored them; but with "everybody" doing it, it becomes imperative to add decryption at the proxy to keep its caching efficiencies.

For my own proxy, I can put in exceptions for my bank or credit card and not lose much in security (only for sensitive sites I forget to exclude). But for larger proxies at companies and institutions, it's unlikely they'll bother to custom-add sensitive sites for all of their employees/users. They'll likely just rely on access control to the proxy machine -- which will be fine for most sites, but is less secure than if ssl traffic had remained "reserved" for sensitive sites.
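For the curious, the kind of exception list described above can be sketched in Squid's peek-and-splice configuration (Squid 3.5+). The paths and domain names here are hypothetical placeholders:

```
# squid.conf excerpt (sketch, not a complete config)
http_port 3128 ssl-bump cert=/etc/squid/proxyCA.pem generate-host-certificates=on

acl step1 at_step SslBump1
ssl_bump peek step1

# Sites to leave alone -- passed through without decryption
acl nobump ssl::server_name .mybank.example .mycard.example
ssl_bump splice nobump

# Everything else is decrypted and cached
ssl_bump bump all
```

`splice` passes the named sites through untouched, while `bump` terminates the TLS session and re-encrypts everything else with certificates minted from the proxy's CA.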

It's a classic example of "unintended consequences".

Replies are listed 'Best First'.
Re^6: SSL on PerlMonks
by Your Mother (Bishop) on Sep 27, 2017 at 00:47 UTC

    This still strikes me as a highly subjective take. References? Citations? Statistics? Measurements?

      Are you asking for proof that https makes web caching more difficult? I guess I thought that was self-evident, but I forget that maybe not everyone notices or measures such things. I've measured my own cache hit rates off and on for over 15 years, running my own proxy at home. To anyone paying attention to cache hit rates across their machines, it should be pretty evident that when you hit an ssl site, you see the start of an ssl session at the beginning of contact and an end-of-session at the end. You no longer see any individual items to cache.
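The difference is easy to see in miniature: with plain http a proxy sees a full request line per object and can cache each one, while an https client sends the proxy only a CONNECT for the tunnel, after which the stream is opaque. A toy illustration (Python rather than Perl, purely for brevity; not a real proxy, just the request-line logic):

```python
# Sketch: what a caching proxy can key its cache on, per request line.

def cacheable_key(request_line):
    """Return a cache key for the request, or None if the content is opaque."""
    method, target, _version = request_line.split()
    if method == "CONNECT":
        return None  # encrypted tunnel: no per-object caching possible
    return target    # e.g. the full URL of a fetchable object

print(cacheable_key("GET http://example.com/logo.png HTTP/1.1"))  # http://example.com/logo.png
print(cacheable_key("CONNECT example.com:443 HTTP/1.1"))          # None
```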

      You have to break apart the ssl stream in order to cache the objects. If you set up your own squid proxy you'll see this -- nothing subjective about it. That's just for content caching and speedup. Many other private-network providers (companies & institutions) want to see what is accessed in order to control their networks. Blocking the net entirely isn't an option for most such operations, so they go the route of opening the streams.
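Measuring the hit rate being discussed is straightforward from a Squid-style access.log, where the fourth field of each line is the result code (e.g. TCP_HIT/200 or TCP_MISS/200). A minimal sketch (Python for brevity; the sample lines are fabricated, and real logs may use a customized format):

```python
# Sketch: cache hit ratio from Squid native access.log lines.

def hit_ratio(lines):
    total = hits = 0
    for line in lines:
        fields = line.split()
        if len(fields) < 4:
            continue  # skip blank or malformed lines
        total += 1
        if "HIT" in fields[3]:  # e.g. TCP_HIT/200, TCP_MEM_HIT/200
            hits += 1
    return hits / total if total else 0.0

sample = [
    "1506470000.123 45 10.0.0.2 TCP_HIT/200 1024 GET http://example.com/a.png - NONE/- image/png",
    "1506470001.456 80 10.0.0.2 TCP_MISS/200 2048 GET http://example.com/b.js - DIRECT/203.0.113.5 text/javascript",
]
print(hit_ratio(sample))  # 0.5
```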

      If they allow people to bring their own devices in, they can easily tell people to install a new root-cert to allow auditing of content, to comply with whoever sets regulations & laws for them. Users of the proxy need to be aware that while no one is specifically reading the content of their traffic, their destinations and contact points are logged, and anyone accessing sensitive sites might not want to do it on premises.

      I've been on the squid mailing list for about 15 years, and it's easy to see an uptick in conversations related to SSL bumping and solving related problems. The information in the squid wiki has become very detailed about how to set things up, owing to the number of such conversations. I don't think anyone is claiming that this is done "covertly", but it is becoming more "routine".

      I can't see who would want to fund a study showing that increased https usage is associated with increased SSL bumping, so if you are looking for such stats, you might have to do your own research.

      While many people may have been able to use semi-public computers at libraries and such for https sites 10-20 years ago, now I wouldn't be so sure about privacy. But feel free to regard it as subjective. Everyone has their own level of comfort.

      I've seen more than one case of companies and ISPs having root-certs when they, officially, weren't supposed to. In most of those cases, the problematic access was said to have been closed and the problem brushed under the carpet... er, problem solved. Right. And with the US policing agencies having been caught multiple times with their hands in our traffic, do you think they'll have trouble copping a root cert these days, with every social site using https? Before, when https users were 99+% banks and the like, spying on https traffic might have raised a few more eyebrows, but today?

        Proof that using more secure technology makes the web less secure. It's completely counterintuitive to me, like saying pouring more water on something makes it drier. So I would like to see some external, objective validation of the assertion instead of anecdote and conjecture.

Re^6: SSL on PerlMonks
by Anonymous Monk on Sep 27, 2017 at 03:44 UTC
    Because there's an uptick in usage of proxies that play MitM in order to provide caching.
    Maybe I'm misunderstanding something, but https is specifically designed to make this impossible, unless you diddle with the security settings in your browser. Is that what you are saying? That "companies and institutions" are installing diddled browsers on their employees' machines? Don't do personal web-browsing at work. Problem solved.
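The anonymous poster is right that a stock client refuses a bumped connection: verification is anchored in the local trust store, which is exactly why such deployments must install their CA on each machine. A small illustration with Python's standard ssl module (Python used only for brevity; the CA file name is hypothetical):

```python
import ssl

# A default client context verifies the server certificate chain against
# the local trust store and checks the hostname. A bumping proxy presents
# a certificate signed by its own CA, so the handshake fails unless that
# CA has been added to the store -- the "new root-cert" step from above.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: unknown CAs are rejected
print(ctx.check_hostname)                    # True: name must match the cert

# Trusting the proxy's CA amounts to something like:
# ctx.load_verify_locations(cafile="proxyCA.pem")  # hypothetical file
```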
