1.1. Security Definitions
Security can be defined in various ways. One school of thought
defines it as reaching the three goals known as the
CIA triad:
- Confidentiality: Information is not disclosed to unauthorized parties.
- Integrity: Information remains unchanged in transit or in storage until it is
changed by an authorized party.
- Availability: Authorized parties are given timely and uninterrupted access to
resources and information.
Another goal, accountability,
defined as being able to hold users accountable (by maintaining their
identity and recording their actions), is sometimes added to the list
as a fourth element.
The other main school of thought views security as a continuous
process, consisting of phases. Though different people may name and
describe the phases in different ways, here is an example of common
phases:
- Assessment: Analysis of the environment and the system security requirements.
During this phase, you create and document a security policy and
plans for implementing that policy.
- Protection: Implementation of the security plan (e.g., secure configuration,
resource protection, maintenance).
- Detection: Identification of attacks and policy violations by use of techniques
such as monitoring, log analysis, and intrusion detection.
- Response: Handling of detected intrusions, in the ways specified by the
security plan.
Both lines of thought are correct: one looks at the static aspects of
security and the other at its dynamics. In this chapter, I look at
security as a process; the rest of the book covers its static
aspects.
Another way of looking at security is as a state of mind. Keeping
systems secure is an ongoing battle where one needs to be alert and
vigilant at all times, and remain one step ahead of adversaries. But
you also need to come to terms with the fact that being 100 percent
secure is impossible. Sometimes, we cannot control circumstances, though
we do the best we can. Sometimes we slip. Or we may encounter a
smarter adversary. I have found that being humble increases security.
If you think you are invincible, chances are you
won't be alert to lurking dangers. But if you are
aware of your own limitations, you are likely to work hard to
overcome them and ensure all angles are covered.
Knowing that absolute security is impossible, we must accept
occasional failure as a certainty and design and build
defensible systems.
Richard Bejtlich (http://taosecurity.blogspot.com) coined this
term (in a slightly different form: defensible
networks). Richard's primary interest is
networks, but the same principles apply here. Defensible systems are
the ones that can give you a chance in a fight in spite of temporary
losses. They can be defended. Defensible systems are built by
following the essential security principles presented in the
following section.
1.1.1. Essential Security Principles
In this section, I present principles every security professional
should know. These principles have evolved over time and are part of
the information security body of knowledge. If you make a habit of
reading the information security literature, you will find the same
security principles recommended at various places, but usually not
all in one place. Some resources cover them in detail, such as the
excellent book Secrets & Lies: Digital Security in a
Networked World by Bruce Schneier (Wiley). Here are the
essential security principles:
- Compartmentalize: Compartmentalization is a concept well
understood by submarine builders and by the captain of the Starship
Enterprise. On a submarine, a leak that is not contained to the
compartment in which it originated will cause the whole submarine to be
filled with water and lead to the death of the entire crew.
That's why submarines have systems in place to
isolate one part of the submarine from another. This concept also
benefits computer security. Compartmentalization is all about damage
control. The idea is to design the whole to consist of smaller
connected parts. This principle goes well together with the next one.
- Utilize the principle of least privilege: Each part of the system (a program or a user) should be given the
privileges it needs to perform its normal duties and nothing more.
That way, if one part of the system is compromised, the damage will
be limited. (A short sketch of this idea follows this list.)
- Perform defense in depth
-
Defense in depth is about having multiple independent layers of
security. If there is only one security layer, the compromise of that
layer compromises the entire system. Multiple layers are preferable.
For example, if you have a
firewall
in place, an independent intrusion detection system can serve to
control its operation. Having two firewalls to defend the same entry
point, each from a different vendor, increases security further.
- Do not volunteer information: Attackers commonly work in the dark and perform reconnaissance to
uncover as much information about the target as possible. We should
not help them. Keep information private whenever you can. But keep in
mind that hiding information is not much of a defense on its own; unless
the system is otherwise secure, obscurity will not help much.
- Fail safely: Make sure that whenever a system component fails, it fails in such a
way that the system moves into a more secure state. Using an obvious example,
if the login procedure cannot complete because of some internal
problem, the software should reject all login requests until the
internal problem is resolved. (A second sketch after this list illustrates this.)
- Secure the weakest link: The whole system is only as secure as its weakest link. Take the time to
understand all system parts and focus your efforts on the weak parts.
- Practice simplicity: Humans do not cope with complexity well. A study has found we can
only hold up to around seven concepts in our heads at any one time.
Anything more complex than that will be hard to understand. A simple
system is easy to configure, verify, and use. (This was demonstrated
in a recent paper, "A Quantitative Study of Firewall
Configuration Errors" by Avishai Wool: http://www.eng.tau.ac.il/~yash/computer2004.pdf.)
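To make the least-privilege principle a little more concrete, here is a minimal
Python sketch of the classic privilege-drop pattern: a process that starts as root
(for example, to bind to port 80) gives up those rights as soon as it no longer
needs them. The account names are hypothetical and the details vary by operating
system; treat it as an illustration of the idea rather than a recipe.

```python
import os
import pwd
import grp

def drop_privileges(user="apache", group="apache"):
    """Permanently give up root privileges once privileged setup is done."""
    if os.getuid() != 0:
        return  # already running unprivileged; nothing to drop
    uid = pwd.getpwnam(user).pw_uid    # hypothetical service account
    gid = grp.getgrnam(group).gr_gid
    os.setgroups([])   # drop supplementary groups
    os.setgid(gid)     # group first; changing it would fail after setuid
    os.setuid(uid)     # from here on, a compromise yields only the unprivileged account
```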
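And here is a minimal sketch of the fail-safely idea, using the login example from
the list above. The lookup_user and verify_password callables are hypothetical
placeholders; the point is only that an internal failure results in a denied login
rather than an accidental pass.

```python
def authenticate(username, password, lookup_user, verify_password):
    """Fail-safe login check: any internal error means access is denied."""
    try:
        record = lookup_user(username)           # hypothetical user-store lookup
        return verify_password(record, password)
    except Exception:
        # If anything goes wrong internally (database down, corrupt record, ...),
        # the safe failure mode is to reject the login, not to let it through.
        return False
```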
1.1.2. Common Security Vocabulary
At this point, a short vocabulary of frequently used security terms
would be useful. You may know some of these terms, but some are
specific to the security industry.
- Weakness: A less-than-ideal aspect of a system, which can be used by attackers
in some way to bring them closer to achieving their goals. A weakness
may be used to gain more information or as a stepping-stone to other
system parts.
- Vulnerability: Usually a programming error with security consequences.
- Exploit: A method (but it can be a tool as well) of exploiting a
vulnerability. This can be used to break in or to increase user
privileges (known as privilege elevation).
- Attack vector: An entry point an adversary could use to attempt to break in. A
popular technique for reducing risk is to close the entry point
completely for the attacker. Apache running on port 80 is one example
of an entry point.
- Attack surface: The area within an entry point that can be used for an attack. This
term is usually used in discussions related to the reduction of
attack surface. For example, moving an e-commerce administration area
to another IP address where it cannot be accessed by the public
reduces the part of the application accessible by the attacker and
reduces the attack surface and the risk.
1.1.3. Security Process Steps
Expanding on the four generic phases of the security process
mentioned earlier (assessment, protection, detection, and response),
we arrive at seven practical steps that cover one iteration of a
continuous process:
- Understand the environment and the security requirements of the project.
- Establish a security policy and design the system.
- Develop operational procedures.
- Perform maintenance and patch regularly.
The first three steps of this process, referred to as
threat modeling, are covered in the next
section. The remaining steps are covered throughout the book.
1.1.4. Threat Modeling
Threat modeling is a fancy name for rational and methodical thinking
about what you have, who is out there to get you, and how. Armed with
that knowledge, you decide what you want to do about the threats. It
is genuinely useful and fun to do, provided you do not overdo it. It
is a loose methodology that revolves around the following
questions:
- What do you have that is valuable (assets)?
- Why would attackers want to disrupt your operation (motivation)?
- Where can they attack (entry points)?
- How would they attack (threats)?
- How much would it cost to protect from threats (threat ranking)?
- Which threats will you fight against and how (mitigation)?
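If it helps to make this concrete, the answers to these questions can be captured
as a simple record, one entry per threat. The following Python sketch is only an
illustration with made-up field values, not part of any formal threat-modeling tool.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One entry in a threat model, mirroring the questions above."""
    asset: str            # what is valuable
    entry_point: str      # where the attack could come from
    description: str      # how the attack would work
    protection_cost: int  # rough cost to protect, an input to threat ranking
    mitigation: str       # what, if anything, you decide to do about it

# A hypothetical entry:
threats = [
    Threat(asset="customer database",
           entry_point="public web application",
           description="SQL injection through the search form",
           protection_cost=3,
           mitigation="validate input; review the application code"),
]
```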
The best time to start is at the very beginning, using threat
modeling as part of system design. But since the methodology is
attack-oriented, it is never too late to start. It is especially
useful for security assessment or as part of penetration testing (an
exercise in which an attempt is made to break into the system as a
real attacker would). One of my favorite uses for threat modeling is
system administrator training. After designing several threat models,
you will begin to see recurring patterns. Keeping previous threat
models around is, therefore, an excellent way to document the evolution of
the system and preserve that little bit of history. At the same
time, existing models can be used as starting points in new threat
modeling efforts to save time.
Table 1-1 gives a list of reasons someone may
attack you. This list (and the one that follows it) is deliberately
abbreviated. Compiling a complete list of all the possibilities would
result in a multipage document. Though the document would have
significant value, it would be of little practical use to you. I
prefer to keep it short, simple, and manageable.
Table 1-1. Major reasons why attacks take place

| Reason | Description |
|---|---|
| To grab an asset | Attackers often want to acquire something valuable, such as a customer database with credit cards or some other confidential or private information. |
| To steal a service | This is a special form of the previous category. The servers you have, with their bandwidth, CPU, and hard disk space, are assets. Some attackers will want to use them to send email, store pirated software, use them as proxies and starting points for attacks on other systems, or use them as zombies in automated distributed denial of service attacks. |
| Recognition | Attacks, especially web site defacement attacks, are frequently performed to elevate one's status in the underground. |
| Thrill | Some people love the thrill of breaking in. For them, the more secure a system, the bigger the thrill and desire to break in. |
| Mistake | Well, this is not really a reason, but attacks happen by chance, too. |
Table 1-2 gives a list of typical attacks on web
systems and some ways to handle them.
Table 1-2. Typical attacks on web systems

| Attack type | Description | Mitigation |
|---|---|---|
| Denial of service | Any of the network, web-server, or application-based attacks that result in denial of service, a condition in which a system is overloaded and can no longer respond normally. | Prepare for attacks (as discussed in Chapter 5). Inspect the application to remove application-based attack points. |
| Exploitation of configuration errors | These errors are our own fault. Surprisingly, they happen more often than you might think. | Create a secure initial installation (as described in Chapter 2-Chapter 4). Plan changes, and assess the impact of changes before you make them. Implement independent assessment of the configuration on a regular basis. |
| Exploitation of Apache vulnerabilities | Unpatched or unknown problems in the Apache web server. | Patch promptly. |
| Exploitation of application vulnerabilities | Unpatched or unknown problems in deployed web applications. | Assess web application security before each application is deployed. (See Chapter 10 and Chapter 11.) |
| Attacks through other services | This is a "catch-all" category for all other unmitigated problems on the same network as the web server. For example, a vulnerable MySQL database server running on the same machine and open to the public. | Do not expose unneeded services, and compartmentalize, as discussed in Chapter 9. |
In addition to the mitigation techniques listed in Table 1-2, certain mitigation procedures should always
be practiced:
- Implement monitoring and consider implementing intrusion detection so you know when you are attacked.
- Have procedures for disaster recovery in place and make sure they work so you can recover from the worst possible turn of events.
- Perform regular backups and store them off-site so you have the data you need for your disaster recovery procedures.
To continue your study of threat modeling, I recommend the following
resources:
1.1.5. System-Hardening Matrix
One problem I frequently had in the past was deciding which of the
possible protection methods to use when initially planning for
installation. How do you decide which method is justifiable and which
is not? In the ideal world, security would have a price tag attached
and you could compare the price tags of protection methods. The
solution I came to, in the end, was to use a system-hardening matrix.
First, I made a list of all possible protection methods and ranked
each in terms of complexity. I separated all systems into four
categories:
- Mission critical (most important)
- Production
- Development
- Test (least important)
Then I made a decision as to which protection method was justifiable
for which system category. Such a system-hardening matrix should be
used as a list of minimum methods used to protect a system, or
otherwise contribute to its security. Should circumstances require
increased security in a certain area, use additional methods. An
example of a system-hardening matrix is provided in Table 1-3. A single matrix cannot be used for all
organizations. I recommend you customize the example matrix to suit
your needs.
Table 1-3. System-hardening matrix example

| Technique | Category 4: Test | Category 3: Development | Category 2: Production | Category 1: Mission critical |
|---|---|---|---|---|
| Install kernel patches | | | | ✓ |
| Compile Apache from source | | | ✓ | ✓ |
| Tighten configuration (remove default modules, write configuration from scratch, restrict every module) | | | ✓ | ✓ |
| Change web server identity | | | ✓ | ✓ |
| Increase logging (e.g., use audit logging) | | | ✓ | ✓ |
| Implement SSL | | | ✓ | ✓ |
| Deploy certificates from a well-known CA | | | ✓ | ✓ |
| Deploy private certificates (where appropriate) | | | | ✓ |
| Centralize logs | ✓ | ✓ | ✓ | ✓ |
| Jail Apache | | ✓ | ✓ | ✓ |
| Use mod_security lightly | | | ✓ | ✓ |
| Use mod_security heavily | | | | ✓ |
| Do server monitoring | | ✓ | ✓ | ✓ |
| Do external availability monitoring | | | ✓ | ✓ |
| Do periodic log monitoring or inspection | ✓ | ✓ | ✓ | ✓ |
| Do real-time log monitoring | | | | ✓ |
| Do periodic manual log analysis | | | ✓ | ✓ |
| Do event correlation | | | | ✓ |
| Deploy host firewalls | | ✓ | ✓ | ✓ |
| Validate file integrity | | | ✓ | ✓ |
| Install network-based web application firewall | | | | ✓ |
| Schedule regular assessments | | | ✓ | ✓ |
| Arrange external vulnerability assessment or penetration testing | | | | ✓ |
| Separate application components | | | | ✓ |
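One way to put such a matrix to work is to record, for every technique, the
least critical category that still requires it, and then derive the minimum set
of techniques for any given system. The short Python sketch below uses a
hypothetical excerpt in the spirit of Table 1-3; the structure and values are an
example, not a prescription.

```python
# Least critical category that still requires each technique
# (1 = mission critical ... 4 = test); a hypothetical excerpt.
HARDENING_MATRIX = {
    "install kernel patches": 1,
    "compile Apache from source": 2,
    "jail Apache": 3,
    "centralize logs": 4,
    "use mod_security heavily": 1,
}

def minimum_techniques(category):
    """Return the minimum techniques required for a system of the given category."""
    return sorted(t for t, least in HARDENING_MATRIX.items() if category <= least)

print(minimum_techniques(2))  # what a production (category 2) system must have, in this sketch
```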
System classification comes in handy when the time comes to decide
when to patch a system after a problem is discovered. I usually
decide on the following plan:
- Category 1: Patch immediately.
- Category 2: Patch the next working day.
- Categories 3 and 4: Patch when the vendor patch becomes available or, if the web server
was installed from source, within seven days of publication of the
vulnerability.
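Expressed as data, the plan above is simply a mapping from system category to a
patching deadline. The following Python sketch is one way to write that down; the
values approximate the plan (treating "next working day" as one day, and using the
seven-day deadline that applies to source installations).

```python
from datetime import timedelta

# Patching deadlines by system category, following the plan above.
PATCH_DEADLINE = {
    1: timedelta(0),        # mission critical: patch immediately
    2: timedelta(days=1),   # production: the next working day (approximated)
    3: timedelta(days=7),   # development: within seven days (source installations)
    4: timedelta(days=7),   # test: within seven days (source installations)
}
```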
1.1.6. Calculating Risk
A simple patching plan, such as the one in the previous section, assumes you
will have sufficient resources to deal with problems and that you will
deal with them quickly. This only works for problems that are easy
and fast to fix. But what happens if there are not sufficient
resources to patch everything within the required timeline? Some
application-level and, especially, architectural vulnerabilities may
require a serious resource investment. At this point, you will need
to make a decision as to which problems to fix now and which to fix
later. To do this, you will need to assign a perceived risk to each
individual problem, and fix the biggest problems first.
To calculate risk in practice means to make an educated guess,
usually supported by a simple mathematical calculation. For example,
you could assign numeric values to the following three factors for
every problem discovered:
- Exploitability: The likelihood the vulnerability will be exploited
- Damage potential: The seriousness of the vulnerability
- Asset value: The cost of restoring the asset to the state it was in before the
potential compromise, possibly including the costs of hiring someone
to do the work for you
Combined, these three factors provide a quantitative measure of
the risk. The result may not mean much on its own, but it serves
well for comparison with the risks of other problems.
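One simple way to combine the three factors is to multiply them, each scored on a
small scale (say, 1 to 10). The Python sketch below shows the idea with made-up
numbers; the absolute result is meaningless, but it lets you rank one problem
against another.

```python
def risk_score(exploitability, damage_potential, asset_value):
    """Relative risk: the product of three rough per-problem estimates (1-10 each)."""
    return exploitability * damage_potential * asset_value

# Hypothetical comparison of two discovered problems:
sql_injection = risk_score(exploitability=8, damage_potential=9, asset_value=7)        # 504
verbose_error_pages = risk_score(exploitability=6, damage_potential=2, asset_value=3)  # 36
# The SQL injection problem clearly gets fixed first.
```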
If you need a measure to decide whether to fix a problem or to
determine how much to invest in protective measures, you may
calculate annualized loss expectancies (ALE). In
this approach, you need to estimate the asset value and the frequency
of a problem (compromise) occurring within one year. Multiplied,
these two factors yield the yearly cost of the problem to the
organization. The cost is then used to determine whether to perform
any actions to mitigate the problem or to live with it
instead.
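As a worked example with invented numbers: if a particular compromise would cost
$20,000 to recover from and is expected to happen once every four years, its ALE
is $5,000 a year, which caps what is worth spending annually to prevent it.

```python
def annualized_loss_expectancy(asset_value, incidents_per_year):
    """ALE: the estimated yearly cost of a problem to the organization."""
    return asset_value * incidents_per_year

ale = annualized_loss_expectancy(asset_value=20_000, incidents_per_year=0.25)
print(ale)  # 5000.0 -- spending much more than this per year on prevention is hard to justify
```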