Thursday, 24 November 2016

I've just been speaking at Computing's Enterprise Security & Risk Management Summit about analogies, and I thought I might note a few points here.
Firstly, good analogies are memorable. They stick to the experience you have used in the analogy: if you use tooth hygiene as an analogy for vulnerability management, people will tend to be reminded of vulnerability management when they brush their teeth. I think this is called anchoring.
Analogies also engage people, bringing them into a common set of referents which they have the confidence to manipulate and develop. Once they engage with an analogy, and make it their own, they can reuse it as a tool to enhance understanding in the future.
Be aware that analogies are not perfect replicas of the real world. They will often be simplified, incomplete, or have elements which do not match the original subject. That's fine, as long as you bear it in mind: avoid an overly literal approach.
And a bonus analogy!
Here's the main concept:
Sandboxing can be helpful, but in order to remove delays it is common practice to take a punt on the file: let it through, then test a copy of it in the sandbox.
Analogy:
It's like a strange creature turning up at the front door and wanting to be let in: you let it in while you look it up on Google, and hope it doesn't eat your hamster in the interim.
Friday, 16 September 2016
Autumn topic - Log squirreling
Since it's heading inexorably towards autumn, as evidenced by the squally rain outside my window, I thought that I would choose a suitably seasonal topic.
There's a point of view which argues that there is no such thing as "too much data". Eventually it will be useful. Tools will be invented to trawl through the data automatically, and computers will get faster. Why not hang onto it? So people turn into squirrels and store more and more log files, potentially including personal data (there's an ongoing discussion as to whether IP addresses constitute personal data, which makes things more interesting).
I can immediately think of three reasons this is a bad argument:
- Legal compliance: holding onto personal data "because it might come in handy" isn't exactly "keep for only as long as required for the purposes for which it was collected", which is a rough recast of Principle 5 of the UK Data Protection Act.
- As speed of processing increases, the rate of data collection will also increase. It's likely that processing will never catch up with collection.
- By the time you can trawl the data, you may not need it any more.
The usual answer to these objections is a SIEM (Security Information and Event Management system): feed everything in, and it will find the incidents for you. But a SIEM isn't a magic box which solves all your problems. It needs someone to tell it what to look for - what an event is, and how to correlate data. If you go for "collect first, ask questions afterwards", you are very likely to have a tonne of data and nothing meaningful. Often, people confronted with this unpalatable truth go for a rather odd solution: collect more data. Upgrade the system to a bigger one. Get more log feeds from more systems, or from different types of sources, and eventually it will all make sense.
That's equivalent to collecting pieces of many jigsaw puzzles for years, storing them in a big box, and never actually putting them together. So you decide to collect more pieces, maybe from even more puzzles, and buy a bigger box. How will this help you? Now imagine that some of these pieces are contaminated with plutonium. You REALLY don't need those around unless absolutely necessary. That's what hoarding personal data is like.
Yes, I hear you saying, but the lovely people who created my SIEM have pre-configured it to look for things I need to care about. They already thought of the incidents I need to be told about, and it's all fine. They are the experts.
Not necessarily. Those fine professionals might know their product inside out, but there is one thing they know pretty much nothing about, even if they have had ten meetings with you. Your business. They can look for generic events, like "someone keeps trying and failing to log in", but how useful is that to you?
To answer that question, I bet you started to think about business impact, maybe "If it was repeated login attempts on a system holding payment records, then I'd be worried". So knowledge of your business is key to useful incident notifications. CAVEAT: Some pre-configured alerts may be suitable if you are in a very regulated business and are very typical of that business.
Without understanding which events are of relevance, you get the second SIEM problem: overload. Hundreds or thousands of cries of "WOLF!" every second. Imagine if every failed login generated an alert. Or if it told you about every time someone connected to Facebook (which might be useful in your environment, but some companies rely on Facebook for their day-to-day business).
You'll also have spent a great deal of money for little return.
However, it's not all bad news: squirreling away logfiles isn't a brilliant plan, but tactical acquisition of a subset of that same ocean of data might just work.
If you have a SIEM, or are considering getting one, and are of the squirreling persuasion, stop and think:
- What are you really going to use the SIEM for? Business reasons, and make them SPECIFIC. Not that "to help us detect incidents" stuff.
- Think of specific, clearly defined use cases where correlation and alerting will help you (more on this in a later entry)
- Work out the minimum data you need for exactly those use cases
- Work out where you can get it from and how (politics may happen here)
- Make sure the SIEM can understand it (format)
- Work out how the data can be correlated to identify the specific incident you are trying to detect (see the sketch after this list)
- Agree with the relevant parties how the notification will be responded to (and how it will be verified, especially during early days where it might be a false positive).
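To make the "specific use case" point concrete, here's a minimal sketch, in Python, of the kind of correlation rule this process might produce for the payment-records example above. The log schema, host names and thresholds are all invented for illustration; a real SIEM would express this in its own rule language.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical inventory: the hosts holding payment records (business knowledge!).
PAYMENT_HOSTS = {"pay-db-01", "pay-app-02"}
WINDOW = timedelta(minutes=5)   # correlation window
THRESHOLD = 10                  # failed attempts per source within the window

# Recent failed-login timestamps, keyed by (source IP, host).
recent = defaultdict(deque)

def ingest(event):
    """Process one log event and return an alert string, or None.

    `event` is a dict with keys 'time' (datetime), 'host', 'src_ip' and
    'outcome' - an invented, simplified schema for illustration.
    """
    if event["outcome"] != "login_failure" or event["host"] not in PAYMENT_HOSTS:
        return None  # outside this use case: no need even to store it
    attempts = recent[(event["src_ip"], event["host"])]
    attempts.append(event["time"])
    # Slide the window: discard attempts older than WINDOW.
    while attempts and event["time"] - attempts[0] > WINDOW:
        attempts.popleft()
    if len(attempts) >= THRESHOLD:
        return (f"ALERT: {len(attempts)} failed logins on {event['host']} "
                f"from {event['src_ip']} within {WINDOW}")
    return None

# Example: the tenth failure in five minutes trips the alert.
now = datetime(2016, 9, 16, 12, 0)
for i in range(10):
    alert = ingest({"time": now + timedelta(seconds=20 * i),
                    "host": "pay-db-01", "src_ip": "203.0.113.7",
                    "outcome": "login_failure"})
if alert:
    print(alert)
```

Notice how much of that rule is business knowledge no vendor could pre-configure: which hosts hold payment records, and what counts as "repeated". Notice also how little data it needs: failed logins, from two hosts, for five minutes.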
Once planned, put your plan into action. Basic project management stuff. Be ready to revise your plans to meet your overall goal. Take advantage of interesting benefits along the way, but don't lose sight of the objectives.
So, in summary, you don't need to squirrel, but you can use some of the data very profitably, if you know what you need it for. Be business-led, not data-driven.
Wednesday, 10 August 2016
Infosec baselines, Cyber Essentials and motorcycling
An "information security baseline" is like the CBT (compulsory basic training) that would-be motorcyclists take, to allow them to ride on the road. It’s not like passing the UK driving test, which is analogous to ISO/IEC 27001.
Nor is it like Cyber Essentials.
On the scale of "competence to drive", Cyber Essentials rates approximately as high as the eyesight test (can you read a number plate at a set distance?). If you can't pass it, you DEFINITELY shouldn't be on the road.
Having passed it doesn’t tell anyone that you are a competent rider, but it tells them that you probably aren’t an unguided missile.
Thursday, 4 August 2016
Encryption is always temporary
The basic concept behind encryption, to many people, is that it prevents the “bad guy” getting to your data. This is one of those cases where over-simplification is your enemy.
Encryption in any form is not a means of preventing access to your data by the wrong person. It is actually a means to delay access by the wrong person.
Why? Well, think about how you break into an encrypted file. You have several options:
- Guess the password
- Beat up the person with the password
- Find somewhere where the data is not encrypted and get it from there (we’ll ignore that one as it’s not really about decryption)
- Find a flaw in the encryption process and break in that way
Each of these options carries costs:
- ethical
- financial
- time
In all cases, you can throw time at the encryption and you will definitely get through it. Or your descendants will, possibly in a much evolved form.
What is good enough encryption? Think about how long the information needs to stay confidential. We have a tendency to assume that confidentiality doesn't come with an expiry date, but even Government classified files are usually released after 30 years.
If it’s about a possible hiring of an employee, then maybe after they are hired the information could even be published - thus the encryption only needs to hold for a few weeks. If it’s scurrilous rumours about a chat show host, then maybe a century later the information will be of no interest to anyone. Research data is often very confidential before publication- and at publication, the aim shifts to getting everyone to read the paper, and providing the raw data to enable the published results to be verified.
You could argue that it will be deleted before then - but the attacker will probably take a copy, so you deleting your copy won’t help.
The future also contains changes which we can predict (computing power increasing, and thus brute-force cracking getting easier), those which we can guess at (flaws found in currently accepted algorithms), and those which we can't imagine at all - and each of these will affect the protected lifespan of a given piece of encrypted data. We can handle these in part by ensuring that, when we encrypt, we also prevent copying to unsafe storage environments.
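As a worked example of those predictable changes, here's a back-of-the-envelope sketch. The attacker capability (10^12 guesses per second, doubling every two years) and the 64-bit keyspace are invented numbers for illustration:

```python
# Back-of-the-envelope sketch: will brute force catch up with your secret
# within its confidentiality lifespan? Every number here is an illustrative
# assumption, not a measurement.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_exhaust(keyspace, guesses_per_sec, doubling_years):
    """Years to try an entire keyspace, if the attacker's guessing rate
    doubles every `doubling_years` years (a crude Moore's-law stand-in).
    Simulated in small time steps for clarity rather than closed form."""
    years, remaining, step = 0.0, float(keyspace), 0.1
    while remaining > 0:
        rate = guesses_per_sec * 2 ** (years / doubling_years)
        remaining -= rate * SECONDS_PER_YEAR * step
        years += step
    return years

crack = years_to_exhaust(keyspace=2**64, guesses_per_sec=1e12, doubling_years=2)
lifespan = 30  # years the secret must hold, e.g. the government-records figure

print(f"~{crack:.1f} years to exhaust a 64-bit keyspace")
print("holds" if crack > lifespan else "the secret outlives the encryption")
```

On those assumptions, even a 64-bit keyspace falls in under a year - which is the point: the question is never "can it be broken?" but "will it hold for as long as the information matters?"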
That is the take-home message here: don't assume that encryption will protect you from choosing a dodgy place to store your information. Take into account the "confidentiality lifespan" of the information, allow for predictable changes to the encryption landscape, and always apply the principle of defence in depth. No single security measure should be your only protection from disaster: layer your protection.
Wednesday, 13 January 2016
From Data to Knowledge via Understanding - information philosophy
The philosophy of information seems to be handy when planning and designing quite a lot of things in information security. The standard model makes sense (see Wikipedia article on DIKW), but I'm choosing to express this slightly differently.
My idea is that there are four things:
- Data
- Information
- Knowledge
- Wisdom
They are often drawn as a pyramid, with Data at the base and Wisdom at the apex.
Each concept builds upon the one below it. I think their relationship can be expressed in terms of how you get from one to the next. Thus to turn data to information, you add context, and so on.
This produces:
[Figure: Information theory as an algorithm]
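For those who prefer code to diagrams, here's a toy sketch of the same ladder. The step labels (add context, add meaning, add judgement) are my reading of the standard model, and the fever example is invented:

```python
# Toy illustration of the DIKW ladder: each rung is the previous one
# plus something extra. The step labels follow the common form of the
# model; the example itself is invented.

data = 39.5  # a bare number: data

# Data + context = information.
information = {"reading": data, "unit": "celsius", "source": "thermometer"}

# Information + meaning = knowledge.
def interpret(info):
    """A thermometer reading above 38 C means a fever."""
    feverish = info["unit"] == "celsius" and info["reading"] > 38.0
    return "fever" if feverish else "normal temperature"

knowledge = interpret(information)

# Knowledge + judgement = wisdom: deciding what to do about it.
action = "see a doctor" if knowledge == "fever" else "carry on"

print(data, "->", information, "->", knowledge, "->", action)
```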
Credit to my team for helping fill the gaps.
Thursday, 19 November 2015
Passwords (c) : buying lifetime
Here's my idea, which I shall publish somewhere in a paper, perhaps.
People have trouble seeing the benefit in having a strong password. Why not make the benefit simple and obvious?
So, here's my proposal. Have a graphic next to the password change box which shows two things: firstly the strength, by bar colour or length; secondly, the number of days you get to keep your password. The stronger the password, the longer before the system makes you change it. Have this update in real time as you type. The strength, and the lifespan you get in return, can be determined by how complex the password is, the sensitivity of the system, how careful you have been in the past, who else vouches for you, current threats (do you have a clear and present danger?), and so forth. By picking a stronger password, you buy time for it to live. The system would of course have upper and lower limits for lifetime - a really bad password would just not be accepted at all.
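Here's a minimal sketch of how the lifetime calculation might look. It uses a crude charset-based entropy estimate (a real implementation should use a proper dictionary-aware strength estimator, and would feed in the other factors above); all the thresholds are invented for illustration:

```python
import math
import string

MIN_BITS = 40       # below this, reject the password outright
MIN_DAYS = 30       # lifetime floor for an accepted password
MAX_DAYS = 365      # lifetime ceiling
DAYS_PER_BIT = 2.5  # how many extra days each bit of entropy "buys"

def estimated_entropy_bits(password: str) -> float:
    """Crude character-pool estimate; real systems should use a
    dictionary-aware strength estimator instead."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password):          pool += 10
    if any(c in string.punctuation for c in password):     pool += 32
    return len(password) * math.log2(pool) if pool else 0.0

def password_lifetime_days(password: str):
    """Days before a forced change, or None if the password is rejected."""
    bits = estimated_entropy_bits(password)
    if bits < MIN_BITS:
        return None  # a really bad password is just not accepted
    return min(MAX_DAYS, round(MIN_DAYS + (bits - MIN_BITS) * DAYS_PER_BIT))

for pw in ["kitten1", "Tr0ub4dor&3", "correct horse battery staple"]:
    print(f"{pw!r} -> {password_lifetime_days(pw)}")
```

Running it, a throwaway password is rejected outright, while a long passphrase buys most of a year.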
I'm calling this Positive Passwords. The password is stronger, and so it lives longer. Hence it is happy - positive!
I don't know of any system which does this yet, but I think it is a very worthwhile and simple thing to do, with significant benefits to security and user awareness.
Copyright Bridget Kenyon 2015
Thursday, 25 June 2015
Get me a sandwich and buy me a car
I find people frequently ask for a set of "standard best-practice security measures/controls", to help them do security. Here is an argument against trying to provide this.
Imagine you are scheduled to meet a colleague (whom you don't know well) for a chat over lunch. They mail you to ask you to pick them up a sandwich.
Sounds fine, yes? But you get to the sandwich shop and find there are forty types of sandwich. Some are low in fat. Some are gluten free. Some are bacon-filled, some vegetarian - and a couple are vegan. There is tomato in many, and chilli in some. Red meat and pork also feature. There are even wraps and mini rolls. And I haven't even started on the seafood options.
You have no idea what your colleague wants. So you can a) pick something you think everyone will like (trust me, there is no such beast), or b) ask them what they want.
Being asked for "best practice advice" is very like being in this situation. Unless you know the requirements, preferences and tolerances of your customer, you are going to have problems. And if you pick something you would like for yourself, you have missed a really big point: you are not the one who is going to be eating that sandwich.
Now let's amp this up a bit. Imagine that, instead of a sandwich, that same colleague emails and says "Buy me a car". OK, that sounds nuts. Why? Because there are even more variables at play. Is this a toy car? A small run-about for town? A people carrier, or a four-by-four? What make? What model? What options? What colour(s)? What age? The list of questions just goes on and on. You don't have the information to answer them unless you know exactly the requirements. And the cost of guessing wrong could be catastrophic.
Information security controls can be expensive to apply, and getting them wrong can damage the overall image of the team- and the profession.
If you are close to your colleague - or, back in the infosec world, if you are part of the business - you are in a position to identify suitable risk treatments.