All posts by Steve

VM Operating System History

VMware and other systems that allow you to run multiple virtual machines on a single server have become very popular. But virtual machines are much older than most people think. It all started back in the 1960s.

Computers started to become powerful enough to run time-sharing systems in the 1960s. But developing such complex software was far from an easy task. Plenty of projects were started, but most of them ran into trouble and failed to produce anything useful. A few projects did manage to develop good software, for example CP-40 and CP-67. CP-40 was a project that tried to develop a time-sharing system for the IBM/360 series of computers. At the end of 1964, Robert J. Creasy and Les Comeau started to design CP-40, which is considered by many to be the first successful virtual machine operating system.

CP-40 was made up of two main parts, CP and CMS. The control program (CP) ran on the physical hardware and created multiple virtual environments; today, it would be called the hypervisor. CMS (Cambridge Monitor System) was a simple interactive single-user operating system designed to run in a virtual machine created by CP. IBM would later change the name to Conversational Monitor System. An operating system running in a virtual machine is also called a guest operating system. CP-40 (both CP and CMS) was developed at IBM’s Cambridge Scientific Center (CSC) in Cambridge, Massachusetts, with the help of staff from MIT.

CP-40 ran only on an IBM System/360 Model 40 at CSC, modified by the addition of an associative memory device for dynamic address translation. The new IBM System/360 Model 67 (S/360-67) offered a number of interesting new features, and CP-67 was developed to take advantage of them. CP-67 became a success; it ran on at least 44 different machines, despite the fact that IBM was not especially keen on distributing it. It was delivered in source form as unsupported software; it was up to the users to turn the code into a working operating system. One reason for IBM’s lukewarm support of CP/CMS was probably that IBM was trying to promote the Time Sharing System (TSS/360) as the operating system for the IBM/360 computers. 44 machines may not sound like much today, but in 1970 it was a significant part of the total number of mainframes around the world.

IBM failed to develop TSS/360 into a reliable system and the product was withdrawn after a while. Thanks to the success of CP-67, IBM started to take more interest. One of the reasons for the success of CP-67, or CP/CMS as it was also called, was that it was distributed to sites in source code format. Everyone running the operating system could make improvements, and a VM community formed which distributed them. After a while, CP/CMS had become an efficient operating system, making it feasible to run IBM’s main operating systems as guest systems under it. This feature would turn out to be crucial for the survival of CP/CMS and the future versions of the VM operating system.

IBM mainframes were expensive and having a test system was often very important. Even inside IBM, CP/CMS was used to run multiple operating systems on the same physical machine. The IBM/360 mainframes had been a huge success, and in 1970 the successor, the IBM/370 series, was announced. Initially, IBM hoped that there would be no need for customers to run CP/CMS and was not interested in funding its development for the IBM/370. But the VM community found some unorthodox ways of funding the development, and the result was VM/370. According to some sources, the ability to run virtual IBM/370 machines under CP was crucial for the survival of the VM operating system. This feature was used inside IBM by the MVS developers; without VM they would not have had enough IBM/370 systems for creating IBM’s primary mainframe operating system.

After a while, IBM started to distribute VM/370 in source code format, but it was quite clear that it was not IBM’s main operating system for mainframes. According to some sources, one of the main reasons was that VM/370 was much more efficient than MVS. This reduced the amount of hardware customers needed, which reduced IBM’s turnover and profit. Obviously, this is not an official IBM statement.

VM would continue to be developed but it was always the second operating system for IBM mainframes. Over the years, VM survived numerous attempts to kill it. The main reason for its survival was its efficiency: VM continued to outperform MVS, especially for interactive use. Another reason was the possibility of running MVS as a guest operating system under VM, saving customers a lot of money by not having to buy an additional IBM mainframe for testing purposes.

The current version of VM is z/VM, which includes the possibility of running Linux as a guest operating system. Nowadays, hypervisors, as they are typically called, have become very popular. But instead of running on mainframes, they are running on Intel platforms. It all began back in the 1960s with CP-40 and the early IBM/360 mainframes.

Bitcoin Security

Bitcoin has become very popular but there have also been several security issues. Is Bitcoin really safe?

A lot of sophisticated security has been built into Bitcoin. Although you may have heard about plenty of Bitcoin security breaches, Bitcoin itself is actually very secure. Only one major vulnerability in the Bitcoin protocol has been detected. That was back in 2010 and it was quickly fixed. But while Bitcoin may be secure, humans using Bitcoin often create security problems.

Theft of bitcoins has happened several times, and in some cases huge amounts of bitcoins have been stolen. The largest thefts have involved Bitcoin exchanges, but several users have also lost bitcoins stored in their wallets.

Bitcoin exchanges are marketplaces that allow users to sell or buy bitcoins using different currencies. Given the huge amounts of money, both bitcoins and other currencies, handled by the exchanges, they are prime targets for many attackers.

Bitcoins are stored in electronic wallets. A wallet can exist on a server on the Internet, on a user’s computer or offline. A wallet at an Internet site is similar to a bank account. Just be aware that while most bank accounts are insured by financial authorities, Bitcoin wallets are not.

Obviously, there is not much a user can do to improve the security of a Bitcoin exchange. But by keeping just small amounts of bitcoins at an exchange you can limit your potential losses. It is good practice to withdraw any excess funds from a Bitcoin exchange as soon as possible, leaving just a small amount in your account. You can move your funds either to a standard bank account, which typically is insured, or to an offline wallet. Unfortunately, several Bitcoin exchanges have suffered huge losses due to attacks, often insider attacks, and customers have lost some or all of their deposits.

Given that Bitcoin wallets are typically not insured, it is a good idea to keep most of your bitcoins offline. Limit the amount of bitcoins in your online wallets to what you need for the next few weeks. If you are going to make a large transfer, take the amount from an offline wallet and transfer the funds to the receiver as quickly as possible to limit the risk of theft. Nowadays, almost all online wallets offer additional security, so that your account is not protected by just a password. Always enable this option, usually called two-factor authentication.

It is important to remember that you can also lose your bitcoins without being attacked by someone. If you have stored your bitcoins on your hard drive and the drive crashes, your bitcoins are gone unless you have made backups. Needless to say, always back up your bitcoins and store the backups in a safe place, or preferably in a number of safe places.

Another way of losing your bitcoins is by forgetting the password. There are some possibilities for recovering lost passwords, but basic brute-force attacks are not feasible at the moment. It may be an option some time in the future when computers have become cheaper and more powerful. If you have some idea of what the password is, you can generate a list of possible passwords and try them. It is a long shot, so make sure to write down your passwords and keep them in a safe place. Computer-generated strong passwords are highly recommended, but make sure that you have stored the passwords in several safe places, not just in one single password management program.
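If you remember fragments of the password, a small script can generate candidates to try. The following Python sketch is purely illustrative; the word list and suffixes are made-up examples of the kind of fragments you might remember:

```python
from itertools import product

def candidates(words, suffixes):
    """Generate candidate passwords from remembered fragments."""
    for word, suffix in product(words, suffixes):
        # Try the common capitalization variants of each word.
        for variant in (word, word.capitalize(), word.upper()):
            yield variant + suffix

# Fragments you vaguely remember using (illustrative values):
guesses = list(candidates(["wallet", "satoshi"], ["", "2013", "!"]))
print(len(guesses))  # 2 words x 3 variants x 3 suffixes = 18
```

Even a few remembered fragments keep the candidate list small enough to try by hand or script, which is why this only works when you have a rough idea of the password.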

There are plenty of products that protect your bitcoins. One of the best solutions is a small USB-stick-sized device with a small screen and two buttons, which you connect to your computer when you need to transfer some bitcoins. The advantage of such devices is that even if your computer gets hacked or stolen, your bitcoins are still safe on the USB device, which you hopefully keep in a safe place when not needed. These USB devices typically also offer backups, so that you can recover your bitcoins even if the USB device is destroyed. At the moment, the Trezor hardware wallet is a very good solution for your bitcoins.

Why Is a DMZ Important?

It has been risky to connect computers to the Internet for many years. In most cases, some kind of firewall is used to protect the computers behind it. A more sophisticated way of protecting computers is to use a Demilitarized Zone (DMZ), sometimes also called a perimeter network.

Servers that need to communicate with both internal and external computers create a security problem for companies. Placing such computers in the internal network, behind the firewalls, means that the firewalls need to let a lot of traffic through. On the other hand, if the computers are placed outside the firewalls, they are very vulnerable to attacks. The solution to this dilemma is generally a DMZ, a zone between the Internet and the company’s internal network.

A DMZ can be designed in a number of ways, but typically a DMZ is placed outside the company’s (internal) firewall and has an (external) firewall between itself and the Internet. This means that the internal firewall will only let through traffic from hosts in the DMZ, generally also restricted to specific ports from specific hosts. The external firewall will only let through traffic to the servers in the DMZ, generally also limited to specific ports for every server.
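The traffic rules described above can be summarized as a small rule table. The following Python sketch is an illustrative model of the policy, not real firewall syntax; the zones and ports are made-up examples:

```python
# Toy model of the two-firewall DMZ policy.
# Each rule: (source zone, destination zone, destination port).
ALLOWED = {
    ("internet", "dmz", 25),   # external firewall: mail to the DMZ mail server
    ("internet", "dmz", 80),   # external firewall: web traffic to the DMZ
    ("dmz", "internal", 25),   # internal firewall: DMZ mail relay forwards inward
    ("internal", "dmz", 22),   # management access to DMZ servers
}

def allowed(src, dst, port):
    """Return True if the DMZ policy permits this traffic."""
    return (src, dst, port) in ALLOWED

# The Internet can never reach the internal network directly:
print(allowed("internet", "internal", 80))  # False
print(allowed("internet", "dmz", 80))       # True
```

The key property of the model is that there is no rule at all from the Internet to the internal zone; everything must pass through a server in the DMZ.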

This way, the company’s internal network is relatively well secured while it is still possible to reach some of the company’s computers from the Internet. Obviously, a DMZ can be implemented in many other ways, but the basic principles are the same. Although not as secure, it is possible to let the same physical firewall act as both the external and internal firewall. Nowadays, most companies have much more complex solutions for DMZs, and it is quite common to have multiple DMZs.

It is worth remembering that a DMZ’s purpose is to protect the internal company network from the untrusted Internet, or any other untrusted network. Threats from the inside are seldom covered by a DMZ.

It is very common to place servers such as mail, DNS and HTTP (web) servers in the DMZ. For example, incoming mail is delivered to the mail server in the DMZ, which forwards the emails to the internal mail server. This makes it easy to configure the firewalls for email. Often one additional connection is allowed, so that the mail server in the DMZ can be managed from the internal network. With a DMZ, DNS often becomes a relatively small security risk. The DNS server in the DMZ only needs to know about a few servers in the internal company network, so there is not much to gain if someone manages to compromise the DNS server. The sensitive DNS data is stored on the DNS servers behind the internal firewall.

Do you need a DMZ for your home network? Probably not, unless you have servers at home that must be reached from the Internet. If the traffic is just outgoing, a firewall/router with NAT (network address translation) is a relatively secure solution.

Brute Force Attacks on wp-login.php

If you administer WordPress web sites, you have probably noticed that your sites are attacked every now and then. WordPress sites seem to have become a very popular target for automated attacks and probes. Most of the attacks are very basic and should not succeed against a site with a reasonable level of security. But the number of attacks seems to be increasing, which makes it prudent to ensure that your sites are well protected against the most common attacks.

There are plenty of ways a WordPress site can be attacked, but one of the most common, and one that is easy to automate, is a brute force attack on wp-login.php. This means that the attacker is trying to find a valid username/password combination for the site. Such attacks are easy to detect if you check your log files: if the wp-login.php file has thousands of hits, your site has been attacked.
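Checking your logs for such attacks can be as simple as counting wp-login.php hits per client address. Here is a minimal Python sketch for an Apache-style access log; the sample log lines are invented:

```python
from collections import Counter

def login_hits(log_lines):
    """Count wp-login.php requests per client IP in an Apache-style log."""
    hits = Counter()
    for line in log_lines:
        if "wp-login.php" in line:
            ip = line.split()[0]  # first field is the client address
            hits[ip] += 1
    return hits

sample = [
    '10.0.0.5 - - [01/Jan/2015] "POST /wp-login.php HTTP/1.1" 200 1234',
    '10.0.0.5 - - [01/Jan/2015] "POST /wp-login.php HTTP/1.1" 200 1234',
    '192.0.2.7 - - [01/Jan/2015] "GET /index.php HTTP/1.1" 200 4321',
]
print(login_hits(sample))  # Counter({'10.0.0.5': 2})
```

An address with hundreds or thousands of hits on wp-login.php is almost certainly an automated attack, not a forgetful user.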

Unfortunately, by default WordPress is not able to tell you much about what happened. But there are a number of useful plugins which can be used both for logging failed login attempts and for, if not completely preventing brute force attacks, at least slowing them down. Nowadays it is a good idea to use a plugin that temporarily blocks further login attempts from the same IP address after a couple of failed login attempts. One of the most popular is the Login LockDown plugin.

Unfortunately, it is easy to change your IP address. An attacker can use proxies to make it look like your site is being attacked from several different sources. Still, by temporarily blocking login attempts from specific IP addresses, you slow down an attacker. If you install a plugin that logs which usernames have been tried, you will most likely find that the user “admin” is used in 99% of the cases. In other words, one of the first things you should do is to create a new administrator with another login name, for example mike or john, and then remove the admin account. If you are installing a new WordPress blog, avoid calling the administrator account admin. Of course, whatever you call your administrator account, always use strong passwords.

If you are using a static IP address, you can limit access to the wp-admin files and wp-login.php by editing the .htaccess file for the web site. For example, you can put the following into your .htaccess file (replace aaa.bbb.ccc.ddd with your IP address):

RewriteEngine On
RewriteCond %{REQUEST_URI} wp-login|wp-admin
RewriteCond %{REMOTE_ADDR} !^aaa\.bbb\.ccc\.ddd$
RewriteRule . - [R=404,L]

But that means that you can only access your site from the given IP address, which often complicates things. Another solution is to password protect wp-login.php. If you have more than one WordPress site in your hosting account, you can create a .htaccess file in your home directory which will protect all your WordPress blogs.

To create this extra level of protection, you need to create a password file which contains the username and the encrypted password. If you don’t know how to do this, you can use one of the free htpasswd generators available on the Internet; just search for htpasswd generator. The username is not really important, user0 is sufficient if you just want to block automated brute force attacks from reaching your wp-login.php. A simple password is OK, since we only want to prevent brute force attacks on wp-login.php, but never use the same password as the administrator account on your blogs. The file looks similar to the following:

user0:RwzaR7erWUpdd

The part up to the colon is the username (user0 in this case) and the part after the colon is the encrypted password. Put this into a file in the home directory of your hosting account.
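If you prefer to generate the password file yourself, a few lines of Python can produce an entry in Apache’s {SHA} format, the same format the htpasswd -s command produces (Apache also accepts other formats, such as the crypt format shown above). The username and password here are just examples:

```python
import base64
import hashlib

def htpasswd_line(user, password):
    """Build an Apache htpasswd entry in {SHA} format (like htpasswd -s)."""
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    return "%s:{SHA}%s" % (user, base64.b64encode(digest).decode("ascii"))

print(htpasswd_line("user0", "example-password"))
```

Note that {SHA} is a weak hash by modern standards, but for this purpose, keeping automated brute force attacks away from wp-login.php, it is sufficient.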

You also need to modify (or create) the .htaccess file in your home directory. Put in the following lines:

ErrorDocument 401 "Unauthorized Access"
ErrorDocument 403 "Forbidden"
<FilesMatch "wp-login\.php">
AuthName "Authorized Only"
AuthType Basic
AuthUserFile /home/username/.wppass
Require valid-user
</FilesMatch>

You need to change /home/username/.wppass to the name (including full path) of the file containing the username and encrypted password.

Note that while this will stop automated brute force attacks trying to log on to your site, it will not protect you against other kinds of attacks, and it will not help against security flaws in WordPress or the plugins you are using. To feel a little bit more secure, you should back up your WordPress sites regularly and store the backups on your local computer.

DNS Security Overview

Without DNS, the Internet would not work, yet most people don’t know what DNS is. DNS stands for Domain Name System and provides the mapping of host names to IP addresses, as well as some other mappings. DNS automatically converts the names we type in the address bar of our web browser to the corresponding IP addresses. IP addresses are used to find a web server, but humans prefer host names, which are much easier to remember.

DNS has no central database; instead it is made up of thousands of DNS servers, each responsible for one or more zones of the name space. This is a very flexible solution which works very well for a huge network like the Internet. But it also has a number of potential security issues.

In the good old days, security was seldom a problem on the Internet. Most services and protocols were designed without paying any attention to security. DNS was no exception; it had virtually no security in the early days. BIND, the Berkeley Internet Name Domain, was the most widely used implementation of the DNS protocol. BIND is still used today and, fortunately, it has become reasonably secure.

But the first versions of BIND did not really have any security at all; it was not until the mid-1990s that DNS security became an urgent issue. In the early days, it was easy to get the complete zone from a DNS server, giving an attacker the names and IP addresses of all computers in a network. The name server trusted everyone, something that made DNS cache poisoning very easy. The DNS server would accept any DNS information it received, regardless of the source and regardless of whether it had asked for the information or not.

DNS cache poisoning was first used as a joke by some technically gifted students, but it can also be used for criminal purposes. It is easiest to explain with an example. Let’s say you need to pay a bill and your bank is mybank.com. You open a web browser and go to mybank.com. In order to find the site, your computer needs the IP address for mybank.com, so it asks the DNS server. In this case, a DNS cache poisoning attack requires two things: a fake record in the DNS cache giving a false IP address for mybank.com, and a site that looks like the real mybank.com site.

Your computer gets the false IP address from the DNS server and your browser goes to the false mybank.com site. The site looks real, so you log in and your login credentials are recorded by the attacker. The attacker now has what he wanted and may redirect you to the real mybank.com site and log you in automatically, minimizing the chance that you get suspicious. But DNS cache poisoning can also be used to install malware on your computer. You may think that you are downloading patches or updates from Microsoft, but instead you are downloading from a bogus site which installs additional software onto your computer.
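The gullible behavior described above can be illustrated with a toy resolver cache. This is purely illustrative Python, not a real DNS implementation; a naive cache accepts any record it is handed, while a slightly safer one only accepts answers to questions it actually asked:

```python
class NaiveResolver:
    """Accepts any record it is handed -- trivially poisonable."""
    def __init__(self):
        self.cache = {}

    def receive(self, name, ip):
        self.cache[name] = ip

class CheckingResolver:
    """Only accepts answers to questions it has outstanding."""
    def __init__(self):
        self.cache = {}
        self.pending = set()

    def ask(self, name):
        self.pending.add(name)

    def receive(self, name, ip):
        if name in self.pending:  # ignore unsolicited answers
            self.cache[name] = ip
            self.pending.discard(name)

naive = NaiveResolver()
naive.receive("mybank.com", "203.0.113.66")  # attacker-supplied record
print(naive.cache["mybank.com"])             # poisoned

safe = CheckingResolver()
safe.receive("mybank.com", "203.0.113.66")   # unsolicited -> dropped
print("mybank.com" in safe.cache)            # False
```

Real resolvers add further checks on top of this, such as randomized query IDs and source ports, but the basic idea is the same: do not trust answers you never asked for.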

Nowadays, DNS servers are not that gullible, but DNS cache poisoning is still a threat. One reason for this is that many DNS servers are still running old DNS software which is not as secure as the latest versions. It is also possible to poison the cache on your personal computer. This is not as effective as attacking a DNS server, since only one computer will be misdirected, but it can still create a lot of trouble for individual computers and users.

WordPress Security Overview

WordPress has become a tremendous success. It is very good for building websites, especially for beginners, and it is free. The huge number of WordPress sites on the Internet has also made it an interesting target for attacks. If you administer a WordPress site, you need to know about WordPress security, otherwise your site could become the victim of an attack.

The number of attacks on WordPress sites has increased enormously lately. The main reason is most likely that software that automatically attacks WordPress sites has become available. This makes it very important to ensure that your sites are well protected.

Unfortunately, it is impossible to guarantee that a WordPress site, or any other site for that matter, is completely secure against all possible attacks. That said, you can make sure that your site has a high level of security. Most attacks are either brute-force password attacks or probes looking for sites that have not closed well-known security holes. If you make sure that your site is safe against such attacks, more than 99% of all attacks will not be able to penetrate your site.

First of all, make sure that you don’t use the standard admin account. Virtually all brute-force attacks try to guess the password of the admin user. Always create an administrator account with another username and delete the admin user. There is no good way of hiding usernames from a sophisticated attacker, but simple brute-force programs don’t investigate; they always attack the admin username. So by simply changing the username of the administrator account, you have protected yourself against most of the attacks.

You should also make sure that all accounts have strong passwords. If the only account is the administrator account, this is easy. If you have several user accounts, you can use a plugin that forces the users to choose strong passwords.

Even with strong passwords and no admin user, you should not let attackers try thousands of login attempts on your site. Install a plugin that limits the number of login attempts from the same IP address. Such plugins block additional login attempts for a specified amount of time after, for example, three failed logins from an IP address. Just be careful that you don’t set the limits too strict, so that you lock yourself out for several hours just because you typed the password wrong once. Also be aware that by using proxies, an attacker can easily change his IP address.
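The behavior of such a lockout plugin can be sketched in a few lines of Python. This is an illustrative model, not the code of any real plugin; the limits (three failures, a 600-second block) are example values:

```python
import time

class LoginLimiter:
    """Block an IP for block_seconds after max_failures failed logins."""
    def __init__(self, max_failures=3, block_seconds=600):
        self.max_failures = max_failures
        self.block_seconds = block_seconds
        self.failures = {}       # ip -> consecutive failure count
        self.blocked_until = {}  # ip -> time when the block expires

    def is_blocked(self, ip, now=None):
        now = time.time() if now is None else now
        return self.blocked_until.get(ip, 0) > now

    def record_failure(self, ip, now=None):
        now = time.time() if now is None else now
        self.failures[ip] = self.failures.get(ip, 0) + 1
        if self.failures[ip] >= self.max_failures:
            self.blocked_until[ip] = now + self.block_seconds
            self.failures[ip] = 0

limiter = LoginLimiter()
for _ in range(3):
    limiter.record_failure("198.51.100.9", now=1000)
print(limiter.is_blocked("198.51.100.9", now=1001))  # True
print(limiter.is_blocked("198.51.100.9", now=1700))  # False (block expired)
```

This also shows why the blocks should be temporary: a permanent block after a few typos would lock out legitimate users, while a temporary one still slows an attacker down enormously.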

It is also possible to limit access to the login page, and all administration pages, but that is only useful if you are always working from the same IP address. There are also ways of hiding the WordPress login page.

You also need to make sure that your plugins are up to date. Fortunately, WordPress will automatically check for new versions of your plugins, themes and WordPress itself. Also be careful about having a lot of active plugins; this can slow down your site and there is always a risk that a plugin has a security hole.

Given that it is impossible to guarantee that a site, WordPress or not, is a hundred percent secure, you should back up your WordPress sites regularly. For this too, you can install a plugin which backs up the site and lets you download the backup to your local machine.

VMware Security Overview

VMware has become very popular and is nowadays used to build huge virtual environments. The virtual nature of VMware creates new potential security issues. That said, VMware security has improved a lot over the years. Here is an overview of VMware security.

Securing a VMware environment is a complex task, especially for large virtual environments. At the same time, VMware security is very important for organisations that use VMware as one of the cornerstones of their virtualization strategy. A security flaw in VMware could make it possible to compromise all systems in the cloud.

The success of VMware has also made it an interesting target for attacks. In the early days, security at the hypervisor level was not much of an issue; it was more important that the guest operating systems, such as Windows, were secured. Very few attacks at the hypervisor level had been reported. One reason for this was that very few people had enough knowledge about VMware to be able to find ways to attack it.

Nowadays, VMware releases security patches regularly, and attacks on VMware installations are nothing unusual any longer. VMware has also had an embarrassing problem: parts of the confidential VMware ESX source code turned up on the Internet. Some people hoped that experts would be able to find security flaws in the leaked source code.

So what kinds of potential security issues exist in a virtual environment like VMware? There are a number of threats, many of them operational. New virtual machines can be created quickly and easily, which is very handy, but from a security point of view it is a big problem. A VM (virtual machine) which has not been hardened can compromise the whole virtual environment. Virtual networks are seldom monitored, which can let attackers collect sensitive data for a very long time without anyone noticing the security problem.

Another practical problem is who should administer what in a virtual environment. This is an issue which can quickly turn into a political battle when different teams try to maximize their control of the environment. Sometimes administrators have full control over the whole virtual environment, which is bad practice from a security point of view. Granular access is good for security, but it is taking time to introduce the concept in the VMware world.

What about VM Escape? This refers to the scenario where malicious code running in one VM escapes out of the VM and infects the whole virtual environment. No real examples of VM Escape have been found, and it is thought to be extremely difficult to achieve. But there have been incidents where malicious code in one VM has managed to propagate itself to other VMs, albeit only in special cases and with limited success.

As mentioned earlier, VMware viruses and malware used to be unknown, but with the increasing popularity of VMware such programs have started to become VMware-aware. One reason for this is that many anti-virus companies analyze viruses and malware in VMs; after all, VMs are very useful for such purposes. Because of this, some viruses check if they are running in a virtual machine and, if that is the case, they exit or at least change their behavior from what it would be on a physical machine.
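The check such malware performs is often as simple as looking for hypervisor vendor strings in hardware identifiers reported by the system. Here is an illustrative Python sketch; the marker list is simplified and far from exhaustive:

```python
# Simplified vendor strings that commonly reveal a virtual machine.
VM_MARKERS = ("vmware", "virtualbox", "kvm", "qemu", "xen", "hyper-v")

def looks_like_vm(hardware_id):
    """Heuristic: does a hardware identifier mention a hypervisor vendor?"""
    hw = hardware_id.lower()
    return any(marker in hw for marker in VM_MARKERS)

print(looks_like_vm("VMware Virtual Platform"))   # True
print(looks_like_vm("Dell Inc. Latitude E6400"))  # False
```

The same heuristic cuts both ways: malware uses it to hide from analysts, while analysts can deliberately fake physical-looking identifiers to trick the malware into running.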

The First Internet Worm

In the early days, the Internet was a peaceful place. Almost only universities were connected, and online transactions were still a couple of years down the line. Of course, some computers got attacked once in a while, but those were isolated incidents and very little damage was caused. There was little need for strict security and few worried about IT security. Much changed after the so-called Morris Worm in November 1988.

The creator of the worm was Robert Tappan Morris, a Cornell graduate student in computer science. The worm may have infected up to 20% of all computers on the Internet. This made the worm very successful but also short-lived. The worm infected the same computer multiple times, overloading many computers, and its rapid growth forced system administrators to quickly find ways to kill it.

By today’s standards, the worm code was very basic. But back in those days security was lax, so the worm spread very quickly despite taking advantage of security flaws that in some cases had been known for years. Nowadays security is much tighter, and such basic attacks would not cause any significant damage today. The code of the worm was an interesting mix of sophisticated parts combined with a couple of basic flaws.

The worm was not created to cause damage; it was more an attempt to create something that would travel around the network. But Morris misjudged the success of the worm: far too many systems had basic security holes. The worst damage was done because the worm infected the same computer multiple times. Some computers got infected with so many copies of the worm that the systems got overloaded and crashed. It became, by accident, the first known denial-of-service attack; things simply went out of control.

Exactly how many computers were infected is not known. Some estimates indicated that 20% of all computers on the Internet were infected, but many think that the number was much lower, probably less than 10%. The worm targeted only two types of computers: VAXes running BSD Unix and Sun-3 computers, also running a BSD-like UNIX version. Back in 1988, these computers were very popular on the Internet.

The worm tried four different attacks. Three of them tried to exploit known security flaws in UNIX, while the fourth tried to take advantage of weak passwords. But as mentioned, the worm did not do any damage to the systems it infected, apart from overloading them by infecting the same system multiple times. It did not try to gain root privileges, which would be needed to alter or damage a UNIX system. Its only purpose was to infect new systems.

The Morris Worm changed the Internet, but slowly. Before the incident, computer security had not been much of an issue; some people would say that the Internet community was trusting and naïve. After so many computers were infected so easily, computer security became more important. Some people called it a wake-up call. But pretty soon most of the Internet community forgot all about potential security problems, and it would take quite some time before Internet security became a hot topic. One reason for this was that back in 1988, the Internet was mainly an academic network with no commercial traffic. Once the Internet started to carry commercial traffic, security became a priority.

Biometrics in Computer Security

Biometrics has become a hot topic in computer security. Using biometrics has a number of advantages, but finding reliable solutions is still a problem. A lot of money has been spent on finding commercially viable solutions, but there is still a long way to go before biometrics replaces passwords for authentication.

Biometric authentication has several advantages compared with passwords, the standard solution in the computer world. Passwords can be forgotten or stolen. Using biometric identification to recognize a person based on her physiological or behavioral characteristics avoids such problems. Using biometrics for authentication isn’t actually anything new; fingerprints may have been used in ancient Babylon as early as 4000 years ago.

But finding a reliable biometric system for authentication is far from easy. The characteristic should be unique, so that every person can be identified. At the same time, it needs to stay relatively constant over time and should be both quick and easy to measure. Fingerprints were used long before computers entered the stage, but other biometric systems are also being tested. As mentioned, though, no solution has been found that is reliable enough to, for example, be built into laptops.

Biometrics also creates potential privacy concerns. Users are of course aware when they are asked to enter a password, but biometric data can be collected without the active participation of the users, which means that they may not even be aware that data is being collected. Governments can, for example, collect detailed information about all the flights people have taken, even if there is no security justification for storing such information. If DNA data is collected, it is even possible to trace illnesses and genetic conditions that have nothing to do with the identification process.

In movies, biometric authentication is often depicted as very easy and reliable. In practice, it is often very difficult for computers to make an accurate decision, and doing so may require a lot of processing as well.

One problem with biometrics compared with passwords is that it is not replaceable. A stolen password can easily be replaced; compromised biometric data cannot.

There are several international biometrics standards available, and a number of new ISO/IEC standards are being developed. One of the main problems is to find a reliable system that makes it possible to identify people without being too strict. The biometric profiles of two people may overlap, in which case it is important that the system can still identify both accurately instead of rejecting them because they can’t safely be uniquely identified.
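The matching step behind this trade-off can be sketched as a simple similarity-score threshold: raise the threshold and you reject more impostors but also more legitimate users, lower it and the reverse happens. This is only a minimal illustration, not any real product’s algorithm; the feature vectors and the threshold value are made up for the example.

```python
def match_score(enrolled, sample):
    """Similarity between two feature vectors (1.0 = identical).
    Uses inverse Euclidean distance; real systems use far richer models."""
    dist = sum((a - b) ** 2 for a, b in zip(enrolled, sample)) ** 0.5
    return 1.0 / (1.0 + dist)

def authenticate(enrolled, sample, threshold=0.8):
    """Accept only if similarity exceeds the threshold.
    A lower threshold means fewer false rejects but more false accepts."""
    return match_score(enrolled, sample) >= threshold
```

For example, a fresh scan from the enrolled person will differ slightly from the stored template but should still score above the threshold, while a different person’s scan should fall well below it.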

Enrolling people in biometric systems is also more complicated than issuing passwords. Typically, special equipment is required, which is only available in a few places. And some people have biometric characteristics far outside the normal range, which may cause the system to reject them.

At the moment, no biometric technology has become dominant. Several different systems are in use, with fingerprint, iris, face, hand geometry and voice the most widely used. Fingerprints are the most common: they are unique, don’t change over time and are relatively easy for computers to recognize. But some people associate fingerprints with criminals, and not everyone likes to touch a sensor that has been touched by strangers. Fingerprints also require active participation. Face recognition does not, but it has a number of disadvantages. Faces can be obstructed by glasses, hair, hats and many other things. Faces also change over time, and good cameras are needed to get accurate results.

How To Back Up Windows

Needless to say, backing up your computer regularly is very important. There are many ways you could lose the data on your hard drive, and unfortunately people often realize too late how important backups are. Nowadays, you have a lot of different backup solutions to choose from, and it can be difficult to find the best way of backing up your data. Here is an overview of how you can back up data on a Windows computer.

Backing up Windows used to require a third-party tool, but in Windows 7 Microsoft introduced a new backup utility. The new tool can do both image backups and file backups. Image backups save the Windows operating system itself, while file backups save user data. You can do full or incremental backups. It is also possible to create a system recovery disk, which is very handy if disaster strikes.

The new Windows backup utility has a lot of useful features. Backups can be scheduled, so you always have a recent backup of your data. You can also let Windows manage the space used for backups; Windows then automatically deletes the oldest backup when there is no room for new ones. All in all, the Microsoft backup utility in Windows 7 and Windows 8 is a good tool.

If you want different backup cycles for files on the same drive, the Windows backup utility is not the best tool. It works best when you back up the whole drive rather than selecting individual files. Restores, especially restores of image backups, are unnecessarily complicated. At the moment, you can use an external disk or writable DVD as the backup destination, but not the Internet; this may change in later versions. Still, you now have a decent backup utility included with Windows. Make sure to use it.

But you may still prefer other backup solutions for your Windows computers. The new Microsoft backup utility is good enough for many, but there are better backup tools on the market. For most users it does not make sense to pay for an additional backup utility, but in some cases a more sophisticated solution is worth the money. Software such as Genie Backup Manager, Norton Ghost, Paragon Hard Disk Manager Suite and Acronis TrueImage offers much more than just basic backups, but for many Windows users the additional cost is hard to justify.

Always make sure that you have a couple of image backups; they are the quickest way to restore the Windows system, and they are what you need if your disk crashes or your computer gets infected with viruses. You should back up your personal data frequently, using standard file backups. If you have a lot of data, you may prefer incremental backups, but remember that restores take longer when many incremental backups have to be applied. Restores from full backups are easy.
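The full-versus-incremental trade-off is easy to see in code: an incremental backup stores only files that changed since the previous backup, so a restore must start from the full backup and replay every incremental in order. A minimal sketch, where dictionaries stand in for real drive snapshots (the function names are illustrative, not any backup tool’s API, and file deletions are ignored for simplicity):

```python
def incremental_backup(current_files, previous_state):
    """Save only files that are new or changed since the previous state."""
    return {path: data for path, data in current_files.items()
            if previous_state.get(path) != data}

def restore(full_backup, incrementals):
    """Rebuild the drive: start from the full backup, then replay each
    incremental in order -- the more incrementals, the longer this takes."""
    state = dict(full_backup)
    for inc in incrementals:
        state.update(inc)
    return state
```

This is why a long chain of incrementals makes restores slow, while restoring from a single full backup is one step.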

Online backups offer some advantages, but you should be aware that there are also some, at least potential, drawbacks. The main advantage is that your backups are in the cloud, so you can restore files even when you are not at home. This also protects your data in case of fire or theft at home. Unfortunately, some people store their backup DVDs next to their computer, and sometimes thieves take both the computer and the DVDs. Many online backup providers have very good interfaces, making it very easy to back up your data.

But there are a lot of backup providers, and it may not be easy to find one that fits your needs and budget. Privacy and security are also potential problems with online backups. One issue to remember is that if your online backup account gets hijacked, all your backed-up data can be downloaded. You probably don’t want to back up sensitive personal data over the Internet unprotected. For sensitive data, you may want to have a closer look at encryption solutions.