
A Tale of Two Breaches

by Alexander Mueller

Much of the public conversation around cybersecurity, and data loss in particular, imagines a single organization, usually a business, trying to defend its castle full of goodies from the barbarian hackers outside. The reality is that data gets passed around quite a bit, and in 2019 it is lost more often because of mistakes and bad practices in how it circulates. The public has limited visibility into this circulation, and differences in regulation create drastic differences in who hears about which breaches, in what firms can be held liable for, and, inevitably, in firms' information security practices and level of care.

On one end of the spectrum, industries without any regulation of their data are almost certainly breached more often than is publicly known, and more often than they know themselves. The damage from a breach typically falls on consumers rather than directly on the company that was breached, so there is a perverse incentive to avoid discovering breaches if you believe no one else will discover them either. This dynamic is particularly egregious around data collaboration with business partners: in principle, if I give my data to you and you lose it doing something stupid, I am liable as well, but in practice, why would anyone maintain detailed records of who holds what just so those records can become a liability in court?

This may sound a bit jaded and conspiratorial, but the reality is that for many breaches no one can even say where the data originally came from. These breaches also get little publicity because there isn’t much constructive to say about them. One illicitly traded database contains information on some 200 million Americans and has no clear provenance: many believe Experian originally lost this data, but that is disputed, and to the author's knowledge Experian has not been proven liable or held accountable in any way. Security researchers regularly find databases holding huge amounts of personal information derelict in the cloud (often with no password!), and it is invariably impossible to identify their owners. A database of medical information found sitting unprotected in this way is just one of many examples.

At the other end of the spectrum, firms holding regulated data are in a genuinely painful position: unavoidable business needs force them to share data, and it is difficult to ensure that 100% of their data partners are responsible. Good regulations often require firms to maintain records of whom data has been given to (HIPAA requires this, for example). It is becoming increasingly burdensome for many firms to find enough responsible partners: the nature of their business requires them to share data, and if a partner loses it, they are still liable. Cybersecurity within a single organization is hard enough!

A great example from just the past few weeks was the data breach at Quest Diagnostics, or perhaps we should say the breach at AMCA. The first breach of the affair to be announced involved the laboratory testing company Quest, but unmentioned, or buried deep, in many articles was the fact that the breach had actually occurred at AMCA, a collections agency Quest employed. Days later, with considerably less publicity, a larger story emerged about the many firms caught up in the breach centered on AMCA. Yet going forward, Quest will still get a large share of the attention related to the incident, as it is the largest firm involved, the most visible, and the one that originally collected the data from consumers.

At one end of the market, more regulation is sorely needed. At the other end, we must confront the unique and subtle challenges of securing data not just within a firm but across an ecosystem of many firms that must share data as an essential part of their operations. At Capnion we believe that emerging technologies like homomorphic encryption and zero-knowledge proofs are a hand-in-glove solution for helping this latter group of firms collaborate: don’t share more than you need, don’t share anything in the clear, and set up a system with just enough information in it for your business process and nothing else.
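
To make "don't share anything in the clear" a little more concrete, here is a minimal sketch of additively homomorphic encryption using the open-source phe (python-paillier) package. This is an illustration of the general technique, not Capnion's product or Ghost PII itself, and the claim-amount scenario is purely hypothetical: a data owner encrypts values, a partner totals them without ever seeing a single plaintext, and only the owner can decrypt the aggregate.

# Minimal sketch of additively homomorphic encryption with the open-source
# `phe` (python-paillier) package -- illustrative only, not Capnion's product.
from phe import paillier

# Data owner: generate a keypair and encrypt the sensitive values.
public_key, private_key = paillier.generate_paillier_keypair()
claim_amounts = [1200, 450, 310]          # hypothetical example data
encrypted_claims = [public_key.encrypt(x) for x in claim_amounts]

# Data partner: holds only ciphertexts and the public key, yet can
# still compute the total homomorphically without seeing any amount.
encrypted_total = encrypted_claims[0]
for c in encrypted_claims[1:]:
    encrypted_total = encrypted_total + c

# Data owner: decrypts only the aggregate result.
print(private_key.decrypt(encrypted_total))  # -> 1960

The design point is the one made above: the partner gets just enough information to do its job (run the computation) and nothing else.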

Protect your data everywhere: at rest, in transit, in use, even in use by 3rd parties

by John Senay

Some good news for everybody: Capnion is proud to announce the private BETA release of Ghost PII. The goal of Ghost PII is to protect YOUR data while in use.

In the next few posts, I will give a high-level overview of how Ghost PII works and present a few application scenarios.

So let’s get started. Ghost PII uses a two-step process to secure plaintext data. The first step is called a One-Time Pad, or OTP for short. The OTP was invented in 1882 by Frank Miller. That’s right: Ghost PII is built on a technique that is 137 years old! Why? In 1949 (70 years ago!) Claude Shannon proved mathematically that the OTP is unbreakable when the key is generated from truly random numbers, is as long as the message, and is never reused. Why is this important? Quantum computing is on the horizon, and quantum computers (QC) will provide unparalleled computational power that can break many of the public-key encryption methods in wide use today. These machines run fundamentally different algorithms, and their capability keeps growing, with QC firm D-Wave recently announcing it had doubled the power of its previous generation of hardware. No matter how much computational power an attacker brings to bear, a properly used OTP built on truly random numbers is unbreakable.
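
To show how simple the core idea is, here is a minimal one-time pad sketch in Python. It is illustrative only, not Ghost PII's implementation, and the helper names are made up for this post; os.urandom is a cryptographic pseudorandom source standing in for the true randomness discussed below.

# One-time pad sketch (illustrative only, not Ghost PII itself):
# XOR the plaintext with a random key of the same length, use the key once,
# and the ciphertext alone reveals nothing about the message.
import os

def otp_encrypt(plaintext: bytes):
    # os.urandom stands in for a source of truly random numbers;
    # the key must be as long as the message and never reused.
    key = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR with the same pad recovers the original message.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ciphertext, key = otp_encrypt(b"123-45-6789")
assert otp_decrypt(ciphertext, key) == b"123-45-6789"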

How does Ghost PII generate truly random numbers? And how does Ghost PII make your encrypted data easy to work with? (Hint: it includes an emerging technology called homomorphic encryption.) Sounds like a great pair of lead-ins for another post.

To learn more, contact sales@capnion.com.