Tracking down a cracker or cracker tools on a live system is a very scary but exciting thing. It's somewhat like strolling through a really good haunted house: pounding heart, adrenaline, tunnel vision, the whole nine yards, especially when you know the deviant is actually on the box with you.
Try not to give your moves away to the opponent. I tell our other administrators at Rackspace, "Don't lose your head. Think clearly and slowly. What will you need later in the way of evidence? Log files? User .bash_history files? Output from netstat? Copies of trojan binaries?" Think it through, collect what you need, and get off the box. The tips below cover what to do in these situations.
Does something on your system just not seem right? Worried that your machine's integrity has been compromised? Here are a few things to look for:
Commonly modified files include ls, find, w, who, last, netstat, login, ps, top, lsattr, and chattr. Check these files for MD5sum changes with RPM or debsums (see the sketch following this list).
If running ext2/3 file systems, any files with the i (immutable) attribute set, as in:
# lsattr /bin/* /sbin/* /usr/bin/* /usr/sbin/* | grep "i--"
----i-------- /bin/login
----i-------- /bin/netstat
----i-------- /bin/ps
Strange filenames or directory names with spaces where none should be, as in . . /, or other attempts to make the file appear hidden.
Strange files or directories in places where they do not belong, like /var/spool, /tmp, or /dev. In particular, do an ls -la | more in the /dev/ directory and look for entries whose file type (the left-most character in the ls output) is not link (l), character device (c), or block device (b).
Odd, slangy filenames like root, bonez, or war3z in home and system directories.
Modified boot files like /etc/inittab, /etc/rc.d/rc.local, or /etc/rc.d/rc.sysinit.
Unfamiliar or unusual user names in /etc/passwd or /etc/shadow.
Spikes in your system's outgoing bandwidth usage (serving warez or porn files or participating in denial of service (DoS) attacks).
Commands you never entered in /root/.bash_history.
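If your system was installed from packages, the package manager can do much of this comparison for you. Here is a minimal sketch, assuming a Red Hat-style box (rpm) or a Debian-style box (debsums); the exact package names vary by distribution, and keep in mind that a cracker who has also replaced rpm, debsums, or their databases can defeat this check:
# rpm -Vf /bin/ls /bin/ps /bin/netstat /bin/login /usr/bin/top
# debsums -s coreutils procps net-tools login
# find /dev -type f
rpm -V and debsums -s print nothing when files still match their packages, so any output deserves a closer look. The find command lists regular files hiding in /dev, where you should normally see only device nodes, directories, and links.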
Watch for false positives, or items that make you think you've been compromised when you really haven't. Automated updates, system cron jobs, slow machine performance, poor programming (such as memory leaks), or unusual port connections are not always indicative of a system attack. Look for evidence and proof rather than reacting on instinct.
If you have definitely been cracked, see the next section for more information.
What you need to do in this situation depends on what your machine does. If it's a production server and you can't shut it down, carefully get as much information as you can about the intrusion, without making yourself known. Don't immediately start shutting off services and locking things down. If you're compromised, the time to be secure is over. Now you need to be stealthy, thoughtful, and cool-handed before shutting the system down.
You need filenames, history and log files, and any rootkit packages, source code, or compiled code left behind by the invader. Don't do anything too system-intensive that gives away your presence, though, such as using dd to clone the drive. If there's a live cracker on the system, he could get nervous or have some fun by wiping logs and binaries while you're watching. Copy all of your evidence and forensic data to somewhere inconspicuous, such as /root/tmp/stuff/.
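As a rough sketch, the evidence gathering might look something like this (the /root/tmp/stuff/ directory is just the example hiding place mentioned above, and lsof may not be present on a minimal install):
# mkdir -p /root/tmp/stuff
# netstat -anp > /root/tmp/stuff/netstat.txt
# ps auxww > /root/tmp/stuff/ps.txt
# lsof -n > /root/tmp/stuff/lsof.txt
# cp -a /var/log /root/.bash_history /root/tmp/stuff/
Bear in mind that if ps, netstat, or lsof have themselves been trojaned, their output may be lying to you; the copies are still worth keeping as evidence of what the system claimed at the time.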
To get the forensic data and evidence off the system and to somewhere safe, use tar and ssh to transfer all of the data that you've collected (using nice so as not to draw too much attention to your actions) to another, safer machine, like this:
# nice -15 tar czf - /root/tmp/stuff/ | ssh \
root@example.com "cat > /root/my-hacked-tarball.tgz"
Caution
If you suspect that this might involve any type of prosecution or that legal counsel or the authorities may become involved, be very careful as you collect the forensic data. It is wise to have two people collect and witness the data gathering. It's also a good idea to make two complete images of the drive and to not change or even touch the master drive from this point forward. Clone drives like this later, off-line, with tools such as Symantec's Ghost in image mode (-ir). Remember that you're recording and handling evidence that can be used in court, and you're working within "the crime scene," as it were. Use appropriate caution.
If you suspect that the cracker is after you specifically, is intending harm to you and your system, or you're witnessing data destruction in action, then pull the plug (yes, the AC wall plug). Most modern journaling file systems, like those used under Linux, can handle an abrupt or ungraceful power-down, whereas the normal shutdown or reboot scripts could contain dangerous self-destruct dd scripts or the like. After you've powered off, hook up another identical hard drive, reboot with a DOS boot floppy, and make a drive image to one or preferably two other backup drives (again using Ghost with the -ir switch). Remove the original drive, label and date it, and lock it away. Remember: evidence. The copy drive can now be hooked to another workstation to be fsck'd and analyzed, or you can boot the system off forensic boot media such as Knoppix STD (Security Tools Distribution, www.knoppix-std.org). Then you can safely fsck (like scandisk) the cloned drive and perform your forensic research off-line.
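If you don't have Ghost on hand, dd run from the forensic boot media (never from the compromised system's own binaries) is a common alternative for the imaging step. This is only a sketch; the source and destination device names will differ on your hardware:
# dd if=/dev/hda of=/dev/hdb bs=64k conv=noerror,sync
# md5sum /dev/hda /dev/hdb
The second command records checksums of both drives so you can later demonstrate that the working copy matches the original and that neither has changed since imaging.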
Note
Many compromised systems aren't actually cracked by a live person on the other end of the connection. Rather, they're hit by an automated Internet worm with a scanning exploit (a crowbar executable that leverages a known security vulnerability) and a rootkit. Automated scanning worms are responsible for the bulk of hacked boxes. Although such compromised machines may not have a live person on them, the machine is still tainted and must be wiped and reloaded, even if it's not the work of someone intent on harassing you personally. In most cases where there's a cracker involved directly on your box, it's someone who just wants to use your box to transfer warez (illegal files) or porn, or to send spam-or it's just a kid taking the Internet equivalent of a joyride. Though it's still serious, it's not necessarily time to call the cops. In fact, the FBI typically won't even talk to you unless there is proof of at least $5,000 in damages on the line.
If your system is compromised and your upstream provider notices it before you do, you may get an unsettling e-mail or call. Many providers will give you 24 hours notice to get your files off the drive before they unplug the machine, wipe it, and reload it from scratch. Despite the way it may feel, your ISP is not threatening you. They have to protect their own investments, keep their network safe for others, and prevent their machines from becoming remote scanning tools for all your cracker's pals. Think of it as a type of Internet quarantine.
There are a couple of solutions to this situation. Remember, you're working against the clock. Pick something and do it, rather than wasting time wondering what's the best thing to do. (You do back up your account regularly, right? You can always restore from that file.)
First, consider moving all of your nondynamic content to a new location, including web files, e-mail, and FTP-able data. Scan this content with the chkrootkit tool, and watch for extensions like .pl, .cgi, .php, .sh, and other indicators of executable files. Do not move binary executables. If you're moving data from a file server, watch out for Windows executable extensions like .pif, .exe, .com, and so forth.
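Running chkrootkit over the suspect content is simple; the -q switch limits the output to suspicious findings. Ideally, run it from a known-good copy of the tool (for example, from read-only media), since the compromised system's own binaries can't be trusted:
# chkrootkit -q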
Use the find command to locate these files:
# find ./ -iregex ".*\.sh$" -o -iregex ".*\.exe$"
./runme.exe
./root-me.sh
If you have various types of executables (as in this example), you can move them to a holding place or quarantine like this.
# find ./ -iregex ".*\.sh$" -exec mv {} /tmp/EXECUTABLES/ \;
The most common way of migrating data directly from a hacked drive to a freshly reinstalled new drive is to move the hacked master drive aside to a secondary bus (for example, as an IDE secondary slave). Then reload the operating system on a new drive and mount the hacked data read-only under /mnt/hacked/ (for example) with a command such as mount -o ro /dev/hdc2 /mnt/hacked.
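Spelled out, that mount step might look like the following; /dev/hdc2 is only an example partition, and adding noexec and nodev is a sensible precaution so nothing on the tainted drive can be executed or treated as a device node by accident:
# mkdir -p /mnt/hacked
# mount -o ro,noexec,nodev /dev/hdc2 /mnt/hacked
From there you can copy the data you need onto the fresh installation (for example, with cp -a /mnt/hacked/home/someuser /home/), after giving it the same scrutiny described above.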
Note
Some administrators believe that they can "resecure" a compromised machine. Though I've known this to be done in extreme cases, there is almost always a back door somewhere that even the most eagle-eyed administrator will miss. Once a system is compromised, you simply can't trust anything on the system at all. You're going to have to wipe and reinstall before you can breathe easily.
Even the most apparently secure system, with a working N/HIDS installation and a strong firewall, may be compromised. How can this be? Administrators who throw money at off-the-shelf "security solutions," and yet still become targets of successful attacks, are usually outraged and bewildered at this turn of events.
For example, lousy server administrators with big, expensive firewalls got mad as hell when the infamous Nimda and Code Red Internet worms drilled right through their firewalls. How could such a thing happen? Well, a firewall in front of a web server running IIS must, by definition, have port 80 (http) open to the world to serve web traffic. If the administrator doesn't keep the daemon behind that firewall (IIS) fully patched and secure, then the firewall is useless at protecting him from his own poor patching practices, and from the worms that look for such vulnerable systems.
Here's the cold hard truth: security measures of every type are built on assumptions of underlying security-an assumed security foundation, if you will. In order for add-on security systems such as NIDS and firewalls to work, your systems need to be based on common foundational security steps and policies, including these basic tenets:
Passwords: You have, and enforce, a strong password policy.
Patches: You apply all critical security patches as soon as they have been tested for your production environment.
Services: You have shut down or secured as many daemons as possible to decrease your network profile, or attractiveness to would-be crackers or deviants.
Scanning and Reporting: You are running an active host-based intrusion detection system that regularly monitors network activity and portscans, log file red flags, file alteration, and other internal systems-and then reports on such data (a bare-bones sketch of the idea follows).
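There are plenty of full-blown N/HIDS packages, but even a homegrown cron job illustrates the reporting idea. The following is only a bare-bones sketch (the baseline path and mail address are made up for the example): it snapshots the listening ports each night and mails you any differences.
#!/bin/sh
# Compare the current listening TCP/UDP ports against a saved baseline
# and mail any differences to the administrator.
BASELINE=/root/baseline-ports.txt
CURRENT=/tmp/ports.$$
netstat -ltun | sort > "$CURRENT"
if [ ! -f "$BASELINE" ]; then
    cp "$CURRENT" "$BASELINE"    # first run: record the baseline
elif ! diff -u "$BASELINE" "$CURRENT" > /dev/null 2>&1; then
    diff -u "$BASELINE" "$CURRENT" | \
        mail -s "Listening port change on $(hostname)" root@example.com
fi
rm -f "$CURRENT"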
Only when you have your own house in order can you confidently begin to install additional solutions, like firewalls or other monitoring and mitigation systems. Why build up a massive outer defensive wall to protect a house of cards? If you don't patch the holes and lock the doors, someone will get in no matter how scary the outer alarm system looks. Good security isn't contained in a single silver bullet or commercial product-it results from common-sense measures applied incrementally over time.
Tip
Stay secure. If you know you're not good with routine checks and updates, automate your patching with tools like up2date/RHN/yum for Red Hat Linux and Fedora Core, apt-get for Debian, or whichever automatic update tools your Linux distribution offers. It's best to install critical patches only after you've verified that they won't crash your production systems. But if you don't have the time or manpower to check your major production patches manually, then turn on these automated patching systems and let the servers keep themselves up to date. It's easier to fix a crashed daemon than to clean up a hacked server. Remember, critical security patches are released in response to known vulnerabilities, exploits, and common bugs. Not keeping such patches relatively up to date is flirting with disaster.
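For example, on a Red Hat-style system a single /etc/crontab entry along these lines keeps packages current overnight (a Debian box would run apt-get update followed by apt-get -y upgrade instead); treat this as a sketch of the "set it and forget it" approach, weighed against the testing caveat above:
0 4 * * * root yum -y update >> /var/log/autoupdate.log 2>&1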
In the Open Source world, vulnerabilities are usually discovered and patches released before exploits exist (unlike in the closed-source realm). On average, a patch is offered from 1 to 3 weeks before a mainstream exploit based on that vulnerability is seen in the wild. That being said, if you bank on this statistic, you may come up short. I have seen a newly released exploit grow into a full-blown worm in as little as 3 days and spread across the Net like wildfire. So the real window for action is within 24 to 48 hours of vulnerability identification and patch release. This is the time in which you need to test the patch(es) against your own systems and get them fixed up.