Making The Case For SSL Inspecting Corporate Traffic

Almost every stakeholder I speak with these days, from Enterprise Security Architect to CISO, wants to be able to inspect their organization’s encrypted traffic and data flowing between the internet and the corporate devices and end users they are chartered to safeguard.

When asked about their primary drivers for wanting to enable SSL/TLS inspection, the top-of-mind concerns are as follows:

  • Lack of visibility – Upwards of 75-80% of our traffic headed to the internet and SaaS is SSL/TLS encrypted
  • We know that bad actors are leveraging SSL/TLS to mimic legitimate sites to carry out phishing attacks as well as hide malware downloads and Command and Control (C&C) activities
  • I need to know where our data resides – We know bad actors are using SSL/TLS encrypted channels to attempt to circumvent Data Loss Prevention (DLP) controls and exfiltrate sensitive data. Our own employees may intentionally or unintentionally post sensitive data externally

With a pretty clear understanding of the risks of not inspecting SSL/TLS encrypted traffic, one would assume that every enterprise has already taken steps to enable this, right? Well…not necessarily. There are two main hurdles to overcome in order to implement this initiative: one technical, the other political.

The technical hurdle is essentially ensuring that your enterprise network and security architecture supports a traffic-forwarding flow, for both your on-prem and off-net roaming users, that traverses an active inline SSL/TLS inspection device capable of scaling to the processing load imposed when 75-80% of your internet- and SaaS-bound traffic is encrypted. In an enterprise architecture where all end user traffic, even from remote users, flows through one or more egress security gateway stack choke points built from traditional hardware appliances, the processing load imposed by SSL/TLS interception dramatically reduces the forwarding and processing capacity of those appliances, as evidenced in recent testing by NSS Labs.
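
To make the mechanics concrete, here is a minimal sketch, using Python's cryptography package, of the core operation an inline inspection proxy performs: minting a short-lived leaf certificate for the requested hostname, signed by an enterprise CA that managed devices are configured to trust. The CA file names and hostname are hypothetical placeholders, not any particular product's implementation.

```python
# Minimal sketch: how an inline SSL/TLS inspection proxy mints a
# per-host leaf certificate signed by an enterprise CA. The hostname
# and key/cert file names are hypothetical placeholders.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def mint_leaf_cert(hostname, ca_cert, ca_key):
    """Return a (key, cert) pair for hostname, signed by the enterprise CA."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(ca_cert.subject)  # chains up to the enterprise CA
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))  # short-lived
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName(hostname)]),
            critical=False,
        )
        .sign(ca_key, hashes.SHA256())
    )
    return key, cert


# Usage sketch: load the enterprise CA, then mint a cert for the host
# the client asked for (taken from the TLS SNI in practice).
with open("enterprise-ca.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())
with open("enterprise-ca-key.pem", "rb") as f:
    ca_key = serialization.load_pem_private_key(f.read(), password=None)

leaf_key, leaf_cert = mint_leaf_cert("example.com", ca_cert, ca_key)
```

Managed endpoints trust the enterprise CA, so the browser accepts the minted certificate and the proxy can decrypt, inspect, and re-encrypt the session toward the real server; the expensive part, at the traffic volumes described above, is doing this for every session at line rate.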

This is critical in that most enterprises would need to augment their existing security appliance processing and throughput capacity by at least 3x to enable comprehensive SSL/TLS inspection. That constitutes a significant re-investment in legacy security appliance technology that doesn’t align with the more modern direct-to-cloud shift in enterprise network and security architecture design.

The second concern, and the primary topic of a recent whitepaper issued by Zscaler, is balancing the user privacy concerns of SSL/TLS inspection against the threat risks of not inspecting an enterprise’s corporate device internet traffic.

Some of the key things to consider in the privacy vs risk assessment and subsequent move to proceed with an SSL/TLS inspection policy are as follows:

  • An organization cannot effectively protect the end user and the corporate device from advanced threats without SSL/TLS interception in place
  • An organization will also struggle to prevent sensitive data exfiltration without SSL/TLS interception
  • Organizations should take the time to educate their end users that instituting an SSL/TLS inspection policy is a security safeguard and not a ‘big brother’ control
  • Organizations should inform employees as to the extent of what will and will not be inspected. This should be defined as part of an acceptable usage policy for internet use on corporate issued assets and this policy should be incorporated into their terms of employment agreements
  • Organizations should review this policy with in-house legal counsel, external experts, and any associated works councils or unions, and pay careful attention to regional data protection compliance frameworks like GDPR
  • Organizations should take the necessary steps to ensure appropriate safeguards are in place for the processing and storing of logs associated with decrypted transactions, such as obfuscating usernames (a minimal sketch of one approach follows this list)
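
As one illustration of that last point, here is a minimal sketch, in Python, of pseudonymizing usernames in decrypted-transaction logs with a keyed HMAC, so events remain correlatable per user without exposing identities. The key handling and log fields shown are hypothetical; a real deployment would keep the key in a secrets manager and define its own log schema.

```python
# Minimal sketch: pseudonymize usernames in decrypted-transaction logs
# with a keyed HMAC so events stay correlatable per user without
# exposing the identity. Key management is out of scope here; the key
# value and log fields are hypothetical.
import hashlib
import hmac

PSEUDONYM_KEY = b"rotate-me-and-store-me-in-a-secrets-manager"


def pseudonymize(username: str) -> str:
    """Return a stable, non-reversible token for the given username."""
    digest = hmac.new(PSEUDONYM_KEY, username.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]


log_entry = {
    "user": pseudonymize("jane.doe@example.com"),  # e.g. '3f1c9a...'
    "url": "https://example.com/upload",
    "action": "allowed",
}
```

A keyed HMAC (rather than a plain hash) means an outsider who obtains the logs cannot simply hash a list of known usernames to reverse the tokens, while authorized investigators holding the key can still resolve a token back to a user when an incident warrants it.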

For a more comprehensive review of how to navigate the security vs privacy concerns and implement a successful SSL/TLS inspection campaign take a look at the recent whitepaper that Zscaler has authored – https://www.zscaler.com/resources/white-papers/encryption-privacy-data-protection.pdf

Disclaimer: The views expressed here are my own and do not necessarily reflect the views of my employer Zscaler, Inc.

Adapting to evolving Ransomware extortion tactics

Effective ransomware controls now have to go beyond well-maintained backup programs and SSL/TLS-inspection-backed zero-day threat detection to include comprehensive Data Loss Prevention programs.

In the beginning, the cybercriminals launching ransomware campaigns simply demanded that infected organizations pay a ransom in cryptocurrency in order to get their encrypted files back.

As part of a defense strategy against a potential ransomware outbreak, organizations began backing up critical assets so that they could more quickly mitigate the impact and resume business-critical operations if compromised by such an attack. In addition to the obvious benefit of protecting business continuity, this effectively removes the need to pay the campaign’s ransom.

This tightening of business continuity/disaster recovery plans to lessen the impact of ransomware infections has in turn prompted ransomware campaign originators to counter by adapting their extortion schemes to include new impact elements.

The first shift was noted in mid-December of 2019 via a ‘naming and shaming’ campaign whereby the authors of the Maze ransomware strain began posting a list of the companies who fell victim to their ransomware, yet refused to pay the actual ransom.

Publicly shaming victims was apparently just the beginning. Within less than a month, the Maze ransomware campaign began threatening to publicly expose the victim organization’s data, which the attackers had successfully exfiltrated before encrypting it. The most recent example is US cable and wire manufacturer Southwire, which was threatened with public release of its data if it did not pay a $6 million ransom.

In some cases, this exfiltration of potentially sensitive corporate data may be more costly and have longer-lasting effects than the short-term interruption to critical business functions posed by the temporary loss of access to the ransomware-encrypted data itself.

To combat and help mitigate this latest round of extortion tactics from ransomware campaigns, an enterprise should consider the following:

  • This should go without saying, but as with any cyber security initiative, end user education around not clicking on suspicious links and exercising more caution with email attachments is critical
  • Well-maintained backup programs for business-critical systems and data
  • SSL/TLS decryption to aid zero day threat detection controls like active inline Sandbox solutions applied to both on-prem and roaming user device traffic
  • Implementing caution or coaching pages within your web proxy service that inform an end user that they are about to download a certain file type from a site in a category their organization deems risky (see the sketch after this list)
  • Consider replacing legacy VPN technology with a more secure zero trust approach (https://www.zscaler.com/blogs/research/remote-access-vpns-have-ransomware-their-hands?utm_source=linkedin&utm_medium=social&utm_campaign=linkedin-remote-access-vpns-have-ransomware-their-hands-blog-2019)
  • A comprehensive Data Loss Prevention program that covers both on-net and off-net users while inspecting SSL/TLS encrypted outbound data 
  • Since no set of security controls is ever infallible, an appropriate amount of cyber security insurance coverage may prove to be a helpful additional compensating control
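
As a companion to the coaching-page item above, here is a minimal sketch, in Python, of the policy decision a web proxy might make before presenting a caution page. The category names and file extensions are hypothetical; a real proxy would pull these from its URL-categorization and policy engines.

```python
# Minimal sketch: the policy decision behind a proxy "coaching" page.
# Category names and file extensions are hypothetical placeholders; a
# real proxy would source them from its categorization/policy engines.
RISKY_CATEGORIES = {"newly-registered-domains", "file-sharing", "uncategorized"}
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".zip", ".docm"}


def verdict(url_category: str, filename: str) -> str:
    """Return 'allow' or 'coach' (warn the user before continuing)."""
    risky_file = any(filename.lower().endswith(ext) for ext in RISKY_EXTENSIONS)
    if url_category in RISKY_CATEGORIES and risky_file:
        return "coach"  # show the caution page before the download proceeds
    return "allow"


print(verdict("file-sharing", "invoice.docm"))  # -> 'coach'
```

The coaching verdict is deliberately softer than a block: the user can still proceed after acknowledging the warning, which both reduces helpdesk friction and creates a teachable moment.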

Disclaimer: The views expressed here are my own and do not necessarily reflect the views of my employer Zscaler, Inc.

Visualizing A Zero Trust Architecture

It’s more than just re-branding VPNs and NGFWs


Enterprise Network and Security Architects are faced with sifting through a myriad of cyber security vendors all espousing their ‘Zero Trust’ offerings. Before we get into how to break down each vendor’s offering, let’s first identify some of the key principles and benefits of a Zero Trust architecture.

  • Establish user identity and authorization prior to access
  • Access to private applications, not access to the network – (no need for VPN)
  • Since no network access is granted, the focus can shift to application level segmentation as opposed to network level segmentation
  • No inbound listeners means applications are invisible to unauthorized users; you can’t attempt to hack or brute force what you cannot even see (a sketch of this outbound-only connection pattern follows this list)
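
To illustrate that last principle, here is a minimal sketch, in Python, of the outbound-only connection pattern: a connector sitting next to a private application dials out to a broker and serves requests over that connection, so nothing on the application side listens for inbound traffic. The broker address and framing are hypothetical, and a real implementation would use mutually authenticated TLS rather than bare sockets.

```python
# Minimal sketch of the "no inbound listeners" pattern: the connector
# next to a private app dials OUT to a broker and serves requests over
# that outbound connection. Nothing on the app side accepts inbound
# connections, so there is nothing exposed to scan or brute-force.
# Broker address, app address, and framing are hypothetical.
import socket

BROKER = ("broker.example.com", 443)  # hypothetical broker endpoint
APP = ("127.0.0.1", 8080)             # the private app, loopback only


def run_connector():
    # Outbound connection to the broker (in practice, mutually
    # authenticated TLS rather than a bare TCP socket).
    broker = socket.create_connection(BROKER)
    while True:
        request = broker.recv(65536)  # a request relayed by the broker
        if not request:
            break
        # Relay to the private app over loopback and return the reply.
        with socket.create_connection(APP) as app:
            app.sendall(request)
            broker.sendall(app.recv(65536))


if __name__ == "__main__":
    run_connector()
```

Because both the user and the connector initiate outbound connections, the broker can stitch an authorized user to exactly one application without ever granting that user a routable path onto the network.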

So how should one go about visualizing what a security vendor offering actually looks like in order to see if a vendor solution really walks the zero trust walk? I’m going to introduce two scenarios that should help draw the distinction between a re-branded VPN solution and a real zero trust offering.

Traditional VPN

Let’s picture a scenario where your Security Vendor Sales Rep comes to visit you. He or she checks in at the front reception desk, is given a badge, and is then escorted to a conference room. On the way to the conference room they can easily survey how many floors are in the building, where there are individual offices, media/printing rooms, open floor plan seating areas, telecom equipment closets, and maybe even where the corporate Data Center server room is. If your vendor rep leaves the conference room they could hypothetically walk up and down the hall, jiggle the door handles of any office door they see, scan the visible content on whiteboards or on top of desks in the open floor plan seating areas for sensitive information, and strike up casual conversations with anyone in any area they manage to roam through. This is akin to the level of trust provided when giving network-level access to a user via a traditional VPN. Instead of the fictitious Sales Rep, imagine that this was a malware-infected endpoint brought onto the network by one of your remote employees, a contractor, or another 3rd party.

Zero Trust

In this model the same Security Vendor Sales Rep visits and checks in at the front desk to get their badge. This time the Rep sees only one door: the door to the conference room. There are no floors, no visible office doors, media/printing rooms, open seating areas, or telecom equipment closet doors. Only the door to the conference room appears, as this is the only thing the Rep is authorized to see or access. There is no hallway to walk down, no office doors to attempt to pry open, and no visibility of the internal environment whatsoever. This is more like what access via a zero trust solution should look like.

To take this a bit further, a security vendor might still say that they can support the objectives of the Zero Trust scenario described above. What are some key red flags to look out for to ensure that this isn’t just a rebranded VPN or NGFW solution?

If a prospective security vendor says they meet the objectives of a Zero Trust implementation, but uses language like ‘perimeter’, ‘micro-perimeter’, ‘use your existing NGFW as a network segmentation gateway’, ‘verify and never trust anything ON your network’, or ‘there is no need to rip and replace your existing network appliances’ be very wary that this is likely just a perpetuation of a previous remote access model and not truly architecting for Zero Trust.

Disclaimer: The views expressed here are my own and do not necessarily reflect the views of my employer Zscaler, Inc.

TLS 1.3 – The end of passive-mode packet capture?

After 4 years and 28 draft versions, TLS 1.3 is here, and it will force a change to the way we do forensic investigations in Cyber Security. In order to fully understand the impact of TLS 1.3 on Security Incident Response, we should first look at the role that packet capture plays in providing forensic data and how it has historically been implemented.

How is packet capture implemented?

Packet capture can be thought of as a virtual security camera constantly watching what enters and exits the network. By that definition it is necessary to establish a choke point and funnel all of the enterprise’s ingress and egress traffic through such a system. This tends to enforce a network architecture model known as hub and spoke, where traffic from remote locations (spokes) is backhauled to a centralized location (hub) where the packet capture function is employed.

In these Hub locations packet capture is commonly facilitated by smart inline taps, commonly referred to as packet brokers, which send copies of the intercepted traffic to the various monitoring devices that need to analyze the data. These packet brokers provide several functions, like load balancing the traffic across destination monitoring systems, removing VLAN or MPLS headers, and pre-stage filtering to parse out specific protocol traffic to send to a particular device. They can also add metadata like timestamps and geolocation info to the captured packets. Most importantly, in the context of this particular blog post, the packet broker system can be preloaded with private encryption keys to decrypt SSL encrypted traffic before sending the data to monitoring devices. In this model, SSL decryption is performed passively, out of band and after the fact, rather than by having the packet broker system act as a man-in-the-middle proxy.
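
For a flavor of the pre-stage filtering a packet broker performs, here is a minimal sketch using the Python scapy library: it strips 802.1Q VLAN tags and keeps only TCP/443 traffic for a TLS-monitoring tool. The capture file names are hypothetical, and a hardware broker does this at line rate rather than in post-processing as shown here.

```python
# Minimal sketch of packet-broker-style pre-stage filtering with scapy:
# strip 802.1Q VLAN tags and keep only TCP/443 traffic for the
# TLS-monitoring tool. File names are hypothetical placeholders.
from scapy.all import Dot1Q, Ether, TCP, rdpcap, wrpcap

packets = rdpcap("tap-capture.pcap")

filtered = []
for pkt in packets:
    if Dot1Q in pkt:
        # Strip the VLAN tag: rebuild the Ethernet frame around the
        # tagged payload, restoring the inner EtherType.
        tag = pkt[Dot1Q]
        pkt = Ether(src=pkt.src, dst=pkt.dst, type=tag.type) / tag.payload
    if TCP in pkt and 443 in (pkt[TCP].sport, pkt[TCP].dport):
        filtered.append(pkt)

wrpcap("tls-only.pcap", filtered)
```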

The role of packet capture in incident response

Packet capture provides the Security Incident Response Team the ability to go back in time and look into a potential security incursion that has occurred. This includes examining the behavior of a specific piece of malware, such as determining what propagation techniques it uses, what additional files it attempts to download, and the C&C domains and IPs in use that would need to be blocked by inline security controls to prevent additional attempts to download more malware or exfiltrate sensitive data from the impacted Enterprise. All of this data can be used to write detection signatures to prevent future occurrences. Another benefit to incident response is the ability to replay the originally captured traffic through those newly written signatures to test proper detection of a specific threat. It is also very valuable in helping to determine the impact of a breach, such as what accounts and systems were accessed and what sensitive data was actually exfiltrated.

However, with more and more enterprise traffic destined for the open internet and SaaS applications, this centralized packet capture model comes at a cost. There is considerable impact to remote branch office and remote user performance due to the latency incurred by backhauling traffic to a centralized “Hub” location. Extending packet capture to remote users who are off the corporate network requires enforcing a full-tunnel VPN solution to bring them back onto the network, where packet capture can happen at the network ingress/egress boundary. While it can be debated that packet capture is primarily used forensically, in a way that is analogous to solving crimes rather than preventing them, it has nevertheless been an important tool in the Enterprise Security Incident Response Team’s toolbox for quite some time.

So what is changing with TLS 1.3? 

TLS 1.3 (RFC 8446) brings about some important security and performance improvements over version 1.2. Some of the more salient changes are listed here, with a short sketch of negotiating TLS 1.3 after the list:

  • Faster session setup, as the handshake completes in 1 round-trip versus the 2 round-trips required in TLS 1.2
  • Zero Round Trip Time Resumption (0-RTT), which essentially lets you immediately send data to a server that you’ve previously communicated with, also increases performance over TLS 1.2
  • Removal of previously used insecure protocols, ciphers and algorithms
  • Mandatory Perfect Forward Secrecy (PFS), which uses ephemeral Diffie-Hellman key exchange to generate dynamic one-time per-session keys rather than relying on a single static private key for every session
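
Here is the short sketch promised above: requiring TLS 1.3 for an outbound connection and reporting what was negotiated, using Python's standard ssl module (3.7 or later). The target host is just an example of a large provider known to support TLS 1.3.

```python
# Minimal sketch: require TLS 1.3 for an outbound connection and
# report what was negotiated, using the standard ssl module (3.7+).
# The target host is just an example.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older

with socket.create_connection(("www.google.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.google.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)
```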

Why is TLS 1.3 going to impact our ability to effectively do packet capture?

This mandate of perfect forward secrecy (PFS) is what is going to force a change to the way we implement packet capture. PFS prevents us from going back and doing passive, after-the-fact decryption of traffic, as there is no longer a single private key that can be used to decrypt prior sessions. Prior to the actual packet capture, TLS 1.3 decryption is going to require an active man-in-the-middle (MITM) proxy that terminates each unique TLS 1.3 session from the client and opens a new TLS 1.3 session onward toward the origin content server the client seeks to communicate with.
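
To see why passive decryption breaks down, consider this minimal sketch using Python's standard ssl module (3.8 or later): with PFS there is no static private key to preload into a packet broker, so decrypting a capture requires the per-session secrets, which an endpoint can export in the NSS key-log format that tools like Wireshark consume. The log path and target host are hypothetical examples.

```python
# Minimal sketch: with PFS there is no static private key to preload
# into a packet broker; decrypting a capture requires the per-session
# secrets, which endpoints can export in NSS key-log format (the same
# format Wireshark reads via SSLKEYLOGFILE). Requires Python 3.8+.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.keylog_filename = "tls-session-secrets.log"  # hypothetical path

with socket.create_connection(("www.google.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.google.com") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: www.google.com\r\n\r\n")
        tls.recv(4096)

# tls-session-secrets.log now holds this session's ephemeral secrets
# (e.g. CLIENT_TRAFFIC_SECRET_0 lines); without them, a packet capture
# of this session cannot be decrypted after the fact.
```

This is exactly the shift in the operating model: the secrets exist only at the endpoints for the lifetime of each session, so anything that wants plaintext must either collect per-session secrets from every endpoint or sit inline as a MITM proxy.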

This will have both a resource and a financial impact on the enterprise, in that it will require either the purchase and deployment of dedicated MITM SSL decryption devices or enabling MITM SSL decryption on web proxy appliances already in use. Hopefully those previously purchased proxy appliances can actually support TLS 1.3 interception without requiring a hardware upgrade of the crypto chipset used under the hood. Even if a hardware upgrade/refresh isn’t required to support TLS 1.3, existing proxies will likely struggle to keep up with the performance impact of SSL inspecting all of this traffic and will require additional capacity to be purchased. The net effect is that continuing to do packet capture with TLS 1.3 in play will require a significant re-investment in the current “hub-and-spoke” centralized outbound security gateway stack model.

Of course, full proliferation of TLS 1.3 on both the client and server side is going to take some time. Major browsers like Chrome and Firefox already support it as of October 2018, and on the server side some large providers like Facebook, Google, Twitter, Microsoft, and Cloudflare’s CDN have already started to run TLS 1.3 as well. Despite this early adoption, as of August 2018 Sandvine reports that only half a percent of all the encrypted traffic it sees is TLS 1.3.

Since it will take a while before TLS 1.3 is mainstream, perhaps now is an opportune time to rethink our longer-term network and security architecture strategy and consider whether continued re-investment in centralized backhaul and security appliance refreshes is the best approach in an era of increasing cloud application usage and end user mobility. With applications and users leaving the traditional enterprise network perimeter, does it really make sense to force users back onto the corporate network and continue to spend time and resources on a legacy hardware-appliance-based approach?

At Zscaler, we certainly believe that there is a better way to deliver a modern network and security architecture with full visibility into all of your enterprise’s encrypted traffic without compromises.

Disclaimer: The views expressed here are my own and do not necessarily reflect the views of my employer Zscaler, Inc.

References:

TLS 1.3 is moving forward: what you need to know today to get ready

TLS 1.3 – Impact on Network Based Security – draft-camwinget-tls-use-cases-00

An Overview of TLS 1.3 and Q&A

Why TLS 1.3 isn’t in all browsers yet

How will security look when the entire web is encrypted?


Will the entire internet be SSL/TLS encrypted soon?

There were some pretty simple drivers for securing the web that led to the adoption of SSL/TLS, or HTTPS as it’s commonly referred to. Principally, confidentiality and integrity were desirable before we would ever trust transmitting a credit card number to purchase something in our online shopping cart. We wanted to ensure that we were sending our sensitive data to the entity we actually intended to, and that this sensitive data was not being transmitted in the clear where it was at risk of interception.

SSL/TLS encryption has found its way into the mainstream of almost every popular website, cloud application, and mobile app these days. In fact, as of the time of this writing, 81 of the top 100 web sites default to HTTPS.
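
As a quick illustration, here is a minimal sketch, in Python, of checking whether a site “defaults to HTTPS” by starting from plain http:// and seeing where its redirects land. The host is just an example.

```python
# Minimal sketch: start from plain http:// and see whether the site's
# redirects land on https://, i.e. whether it "defaults to HTTPS".
# The host is just an example.
import urllib.request

resp = urllib.request.urlopen("http://example.com/", timeout=10)
print(resp.url)                          # final URL after any redirects
print(resp.url.startswith("https://"))  # True if the site upgraded us
```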

The proliferation of free SSL certificates via entities like LetsEncrypt has certainly made securing sites via SSL even easier.

So what’s next? Google, which has led the charge in helping push for a more secure web, has just announced that in July of 2018 its Chrome browser will start to actively warn end users when they are accessing a site that is not HTTPS encrypted. This will no doubt cause a scramble by site owners to ensure that their web sites are encrypted, greatly increasing the number of sites on the web encrypted via HTTPS.

So what does this mean for the traditional enterprise internet security architecture model?

First and foremost, a further increase in HTTPS traffic is going to further reduce the effectiveness of security stacks that are attempting to do web content filtering, cloud application visibility and control, advanced threat prevention, sandboxing of zero-day threats, and Data Loss Prevention (DLP). This is because malicious actors are, ironically, using the very protocol meant to keep us safe on the web as a way of obscuring activities like phishing and the distribution of malware such as ransomware.

The typical enterprise already sees HTTPS encryption on somewhere between 50-70% of the traffic that passes through its security gateway stack of appliances. If it is not currently doing SSL inspection, that means its existing security controls are scanning only the remaining 30-50% of its traffic. What does that effectiveness rate look like once more and more of the web becomes encrypted as a result of Google’s upcoming “not secure” notification intentions?

It’s time for enterprises to enable SSL inspection in their security controls, or else those tools are going to be blind to the overwhelming majority of the traffic traversing the web and cloud applications. This will need to be done in a highly scalable and cost-effective way which, as I’ve written about before, isn’t attainable via conventional enterprise security stack deployment models. The cloud is going to have to be the delivery model for implementing this in a way that is always on regardless of end user location and that flexibly scales to meet the enterprise’s demands at an affordable cost.

For more information on the current threat landscape that is leveraging and hiding inside of SSL/TLS, and how Zscaler can help, check out this Zscaler Threatlabz webcast, “The Latest In SSL Security Attacks”.

Disclaimer: The views expressed here are my own and do not necessarily reflect the views of my employer Zscaler, Inc.

Is the future of security cloud-based delivery?

It seems every single news article these days contains numerous vendors espousing how they could have prevented the latest malware threat. Every software and hardware vendor seems to have a solution that could have stopped the WannaCry ransomware outbreak, will protect from the new Petya variant, and then the next variant, and so on.

Naturally this piqued my curiosity and raised the question: if every vendor has a preventative solution, then why do exploits like this continue to happen at such alarming rates and with such devastating financial impact?

I recently spent some time talking with folks who are at the forefront of this, who helped me understand that while preventative measures do exist, they are traditionally very complicated to deploy at scale and are only as good as the coverage applied. First off, end system anti-virus software is utterly ineffective at keeping up with adaptive persistent threats in today’s landscape. You are just chasing your tail trying to keep up with bad actors who actually test their exploits against your commercial anti-virus software. I’m not saying don’t use it or don’t bother keeping it updated, just pointing out that it is not going to save us. Next, and really most important, is the reality that not every enterprise has the same security posture in every location from which its users access the internet and cloud-based applications. For various reasons it’s very difficult to have the same level of advanced security applied in all locations.

In order to really grasp this you first have to look at the history of traditional enterprise WAN design and where the security perimeter got applied. The legacy enterprise WAN was a Hub-and-Spoke topology designed to provide connectivity between branch offices (spokes) and the Corporate DC (Hub), because that’s where all the applications you needed to access were running. With the advent of a mobile workforce, VPN concentrators also got added to allow connections to these Corporate DC Hub hosted applications from anywhere. Internet access breakout was typically implemented at the Corporate DC Hub. With this Hub-and-Spoke model all end user traffic was coming into the Corporate DC, so this was effectively the ‘chokepoint’ where all of the security measures were implemented.

So what do the security measures actually look like in one of these Corporate DCs? Well, it’s pretty complex, as no single security appliance can handle all of the functions required. Attempting to deliver comprehensive security at scale requires multiple disparate components from multiple vendors. This means forcing your end user traffic through separate appliances for URL filtering, IDS/IPS, anti-malware, Data Loss Prevention (DLP), next-gen firewall, sandboxing, and SSL inspection. This complicated and expensive array of appliances all needs to be managed, updated, and capacity planned independently, as each scales differently depending on the type of heavy lifting it is doing. Then there is the need to interpret logs and threat data coming from all these devices in different formats in order to see what’s happening and how effective these security measures really are.

The reality is that not every enterprise has or can deploy all of the above security measures at scale and make them available to every single end user. Some don’t have an expensive WAN circuit from every one of their remote branch offices to the Corporate DC and instead have deployed at the branch a local subset of the security measures normally found in the Corporate DC. Others may not be able to inspect all SSL encrypted traffic at scale, creating a huge blind spot when looking for threats.

Enter WAN transformation…if you read the same tech trade rags that I do, you may have heard about this thing called SD-WAN about a hundred times a day. With ever increasing enterprise adoption of cloud-based SaaS applications, the destination of most user traffic is the cloud and not the Corporate DC Hub where the security perimeter was built. Maintaining this Hub-and-Spoke model is costly from a WAN circuits perspective and highly inefficient, leading to poor cloud-based application performance. This is leading enterprises to implement local internet access breakouts at each branch to allow lower cost yet higher performance access to critically important cloud-based applications like Office 365.

So if most of the applications my end users access are in the cloud, and I want to provide direct internet access to those applications for high performance, how do I secure my traffic headed directly to the internet? As mentioned above, stamping out a copy of the patchwork of security appliances typically deployed in the Corporate DC security perimeter is cost prohibitive and an administrative nightmare. Shortcuts will be taken, coverage won’t be comprehensive, and, as expected, the security posture of the entire enterprise will be only as good as its weakest link.

What would be really useful is the ability to point all of my end user locations, whether branch offices or my mobile workforce, to a cloud-based security on-ramp. Hmm…isn’t this just another version of the hub-and-spoke design? If done poorly, then yeah, it would be. To do this right you would need a cloud-based security platform with a global footprint of DCs colocated at IXPs (Internet Exchange Points) where all the major cloud providers interconnect as well. This provides high availability as well as high performance, in that each end user location is serviced by the closest cloud DC. The security platform itself should efficiently scan ALL of my end user traffic (including encrypted traffic) through a comprehensive and optimized pipeline of security functions. What this would essentially provide is an elastically scalable, high-performance, low-latency, advanced security platform that is always on, with single pane of glass management and reporting and, of course, utility-based pricing. Basically, all of the promises of the cloud, now applied to advanced network security. Adding new branch sites or mobile workforce users in this model and calculating future costs becomes incredibly simple. You would no longer need to worry about procuring appliances, capacity planning, designing for HA, software updates, licensing, or any of the other hurdles encountered in attempting to implement this in your own environment.

It sounds like unicorns and rainbows…however, this cloud-based security platform model already exists. Zscaler, with its Internet Access service, appears to have pioneered the approach of going “all in” on completely cloud-based delivery, with other companies like Opaq adopting a cloud-first security platform model as well. Traditional security vendors like Palo Alto came on board last week with Global Protect, their own version of a cloud-based offering. Juniper, through their Software Defined Secure Networking (SDSN) solution, is delivering sandboxing in the cloud via Sky ATP, combined with automated mitigation and quarantining via their traditional software- or hardware-based on-prem security appliances and a growing ecosystem of multi-vendor switches. Besides aspects of their solutions being delivered from the cloud, what is also common across these offerings is the ability to share detected threat data immediately across their entire customer base.

My goal was not to list every vendor or get into the merits of each vendor’s specific implementation, or who you should evaluate…let’s leave that as an exploratory exercise for the enterprise looking for a security solution to accommodate its WAN transformation projects and up-level its existing security posture. I just wanted to acknowledge that moving advanced security measures to the cloud appears to be the future of enterprise security, and for really good reasons.

 

Disclaimer: The views expressed here are my own and do not necessarily reflect the views of my employer Juniper Networks