Friday, December 25, 2009
Gpg4win
The Gpg4win initiative aims to provide a current Windows installation package including the GnuPG encryption tool and associated applications. The documentation ("Gpg4win Compendium" and "Gpg4win for Novices") is maintained directly as part of the Gpg4win project.
Another goal is to support both relevant cryptographic standards, OpenPGP and S/MIME in a unified way.
Gpg4win is an international effort. Because the project and many of its members originate in Germany, a full German translation is available. Additional translators are welcome!
The main difference compared to all similar approaches (mainly GnuPP, GnuPT, Windows Privacy Tools and GnuPG-Basics) is that the first piece developed was the Gpg4win-Builder. This builder makes it easy to create new gpg4win.exe installers with updated components. It runs best on a GNU/Linux system. Almost all components are automatically cross-compiled for integration into the installer.
This approach should keep the installer package from aging quickly, because updating is easier and does not depend on a single person.
You can choose all or some of the following modules during installation:
GnuPG: The core; this is the actual encryption tool.
Kleopatra: A certificate manager for OpenPGP and X.509 (S/MIME) and common crypto dialogs.
GPA: Another certificate manager for OpenPGP and X.509 (S/MIME).
GpgOL: A plugin for Microsoft Outlook 2003 and 2007 (email encryption).
GpgEX: A plugin for Windows Explorer (file encryption).
Claws Mail: A complete email program including the plugin for GnuPG.
Gpg4win Compendium: The new (German!) documentation for Gpg4win 2 (a translation is already scheduled).
Gpg4win for Novices: The old English handbook for Gpg4win 1 (for newbies).
Gpg4win can be installed and tested with just a few mouse clicks. Of course, you should be the administrator of your system or have administrator rights.
Saturday, November 7, 2009
What is CrypTool?
CrypTool is a free, open-source e-learning application for cryptography and cryptanalysis. The current version offers, among others, the following highlights:
Numerous classic and modern cryptographic algorithms (encryption and decryption, key generation, secure passwords, authentication, secure protocols, ...)
Visualisation of several methods (e.g. Caesar, Enigma, RSA, Diffie-Hellman, digital signatures, AES)
Cryptanalysis of certain algorithms (e.g. Vigenère, RSA, AES)
Cryptanalytic measuring methods (e.g. entropy, n-grams, autocorrelation; see the short example after this list)
Auxiliary methods (e.g. primality tests, factorisation, base64 coding)
Tutorial about number theory
Comprehensive online help
Supportive script with further information about cryptology
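To make those measuring methods concrete, here is a minimal Python sketch (not CrypTool code, just an illustration of the same idea) that computes the Shannon entropy and bigram frequencies of a ciphertext:

```python
# A minimal sketch of two of the measuring methods named above:
# Shannon entropy and n-gram counts of a ciphertext (stdlib only).
from collections import Counter
from math import log2

def entropy(text: str) -> float:
    """Shannon entropy in bits per character."""
    counts = Counter(text)
    total = len(text)
    return -sum(c / total * log2(c / total) for c in counts.values())

def ngrams(text: str, n: int = 2) -> Counter:
    """Frequencies of all n-character substrings."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

ciphertext = "WKLVLVDVHFUHWPHVVDJH"  # "THISISASECRETMESSAGE", Caesar shift 3
print(f"entropy: {entropy(ciphertext):.3f} bits/char")
print("top bigrams:", ngrams(ciphertext).most_common(3))
```

A Caesar shift leaves the letter-frequency distribution intact, which is why the ciphertext's entropy matches the plaintext's and why such simple measurements break the cipher so easily.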
From its origins as an information security training tool for a company, CrypTool has developed into an outstanding open source project for cryptology-related topics.
Since spring 2008, the CrypTool project has been operating the Crypto Portal for Teachers. Thus far, the portal is only available in German and is intended to act as a platform for teachers to share teaching materials about cryptology and related links.
Since spring 2009, the CrypTool project has also been operating the website CrypTool-Online. This portal gives people interested in cryptology the possibility to try out a variety of ciphers and encryption methods in their own browser, without downloading and installing any software. The website presents cryptology to first-time users and young people in an appealing and easy way. For advanced tasks and problems there is still the offline version of CrypTool, which can be downloaded and installed.
Currently the CrypTool team is working on two projects intended to become the successors of the current release version CrypTool 1.4.x which has been written in C++. Both follow-up projects use state-of-the-art standards of software development, but are still in beta status:
CrypTool 2.0 is developed in C# with Visual Studio 2008 (Express Edition) and WPF. The first beta version (for developers and end users) was released in July 2008 and is continuously updated. CrypTool 2.0 provides a fully developed architecture and usable cryptographic functionality combined with a pathbreaking drag-and-drop GUI.
JCrypTool is developed in Java and based on Eclipse RCP. The current beta version (called milestone 5, intended for developers and users) was released in September 2009. JCrypTool is platform independent (Windows, Linux, Mac) and makes use of FlexiProvider (a powerful toolkit developed by TU Darmstadt) and BouncyCastle for the Java Cryptography Architecture (JCA).
Friday, November 6, 2009
Microsoft Security Essentials Antivirus
Microsoft Security Essentials is a free* download from Microsoft that is simple to install, easy to use, and always kept up to date so you can be assured your PC is protected by the latest technology. It’s easy to tell if your PC is secure — when you’re green, you’re good. It’s that simple.
Microsoft Security Essentials runs quietly and efficiently in the background so that you are free to use your Windows-based PC the way you want—without interruptions or long computer wait times.
Learn more at the Microsoft Malware Protection Center
Find information, definitions, and analyses of all the latest threats that Microsoft Security Essentials can help protect you against in the Microsoft Malware Protection Center
Need security for your business? Protect your computers with Microsoft Forefront Client Security
*Your PC must run genuine Windows to install Microsoft Security Essentials. Learn more about genuine Windows.
Thursday, October 22, 2009
Free open-source disk encryption
Introduction
TrueCrypt is a software system for establishing and maintaining an on-the-fly-encrypted volume (data storage device). On-the-fly encryption means that data is automatically encrypted or decrypted right before it is loaded or saved, without any user intervention. No data stored on an encrypted volume can be read (decrypted) without using the correct password/keyfile(s) or correct encryption keys. The entire file system is encrypted (file names, folder names, the contents of every file, free space, metadata, and so on).
Files can be copied to and from a mounted TrueCrypt volume just as they are copied to/from any normal disk (for example, by simple drag-and-drop operations). Files are automatically decrypted on the fly (in memory/RAM) while they are being read or copied from an encrypted TrueCrypt volume. Similarly, files that are being written or copied to a TrueCrypt volume are automatically encrypted on the fly in RAM (right before they are written to the disk). Note that this does not mean that a whole file must be stored in RAM before it can be encrypted or decrypted; there are no extra memory (RAM) requirements for TrueCrypt. For an illustration of how this is accomplished, see the following paragraph.
Let's suppose that there is an .avi video file stored on a TrueCrypt volume (therefore, the video file is entirely encrypted). The user provides the correct password (and/or keyfile) and mounts (opens) the TrueCrypt volume. When the user double-clicks the icon of the video file, the operating system launches the application associated with the file type, typically a media player. The media player then begins loading a small initial portion of the video file from the TrueCrypt-encrypted volume to RAM in order to play it. While the portion is being loaded, TrueCrypt automatically decrypts it (in RAM). The decrypted portion of the video (stored in RAM) is then played by the media player. While this portion is being played, the media player begins loading the next small portion of the video file from the TrueCrypt-encrypted volume to RAM, and the process repeats. This process is called on-the-fly encryption/decryption, and it works for all file types, not only video files.
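As a rough sketch of the mechanism in Python (an illustration only, not TrueCrypt's actual code; it assumes the pyca/cryptography package and uses AES in CTR mode for brevity, whereas TrueCrypt itself uses XTS, covered in the next section):

```python
# On-the-fly decryption in miniature: only the sectors a reader asks for
# are decrypted, in RAM, on demand. Illustrative only -- not TrueCrypt code.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR = 512                       # bytes decrypted per request
BLOCKS_PER_SECTOR = SECTOR // 16   # AES blocks per sector

class EncryptedVolume:
    def __init__(self, path: str, key: bytes):
        self.f = open(path, "rb")
        self.key = key             # 32 bytes for AES-256

    def read_sector(self, n: int) -> bytes:
        """Read sector n from disk and decrypt just that sector in memory."""
        self.f.seek(n * SECTOR)
        blob = self.f.read(SECTOR)
        # Per-sector counter block, spaced so sector keystreams never overlap.
        nonce = (n * BLOCKS_PER_SECTOR).to_bytes(16, "big")
        dec = Cipher(algorithms.AES(self.key), modes.CTR(nonce)).decryptor()
        return dec.update(blob) + dec.finalize()

# A media player reading the .avi would trigger read_sector() calls one
# small portion at a time; the whole file is never held decrypted in RAM.
```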
Encryption Algorithms
TrueCrypt volumes can be encrypted using the following algorithms:
Algorithm | Designer(s) | Key Size (Bits) | Block Size (Bits) | Mode of Operation |
---|---|---|---|---|
AES | J. Daemen, V. Rijmen | 256 | 128 | XTS |
Serpent | R. Anderson, E. Biham, L. Knudsen | 256 | 128 | XTS |
Twofish | B. Schneier, J. Kelsey, D. Whiting, D. Wagner, C. Hall, N. Ferguson | 256 | 128 | XTS |
AES-Twofish (cascade) | | 256; 256 | 128 | XTS |
AES-Twofish-Serpent (cascade) | | 256; 256; 256 | 128 | XTS |
Serpent-AES (cascade) | | 256; 256 | 128 | XTS |
Serpent-Twofish-AES (cascade) | | 256; 256; 256 | 128 | XTS |
Twofish-Serpent (cascade) | | 256; 256 | 128 | XTS |
For information about XTS mode, please see the section Modes of Operation.
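As a small illustration of what XTS mode looks like in practice (a sketch using the pyca/cryptography Python package, not TrueCrypt's implementation): each disk sector is encrypted under a double-length key with the sector number as the "tweak", so identical plaintext in different sectors produces different ciphertext.

```python
# A minimal sketch of XTS-mode sector encryption (illustrative only).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)                          # AES-256-XTS: two 256-bit keys
sector_number = 42
tweak = sector_number.to_bytes(16, "little")  # per-sector tweak value

plaintext = b"\x00" * 512                     # one 512-byte disk sector
cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))

enc = cipher.encryptor()
sector_ct = enc.update(plaintext) + enc.finalize()

dec = cipher.decryptor()
assert dec.update(sector_ct) + dec.finalize() == plaintext
```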
Sunday, October 11, 2009
Two-Factor Authentication (2FA)
Two-Factor Authentication (2FA) is also known as Dual Factor Authentication (DFA)
When you consider all that happens in the networked world, you can start to appreciate the tremendous need for strong security measures to protect online assets, data and communications.
Authentication is the cornerstone of any vigilant network security solution. And the authentication method used to protect the vast majority (over 90%!) of networks, user names and passwords, is a 50-year-old solution designed when there were no networks, no Internet, and in fact next to no computers!
Passwords suffer from a number of weaknesses that make them an ineffective security measure for your network - they are easy to steal, easy to hack and hard to remember. The result is both reduced network security and increased help-desk costs for resetting passwords.
Solving the problem = Dual Factor Authentication (DFA)
Dual Factor Authentication (DFA), also known as Two-Factor Authentication (2FA), is directly analogous to the way one ‘authenticates’ to a banking machine (ATM): you use something only you have (your unique bank card) and something only you know (your secret PIN) to identify yourself to the system.
The networked world is very similar: the ‘something only you have’ is a password-generating authenticator or token, and the ‘something only you know’ is, again, a secret PIN.
Token = One-Time Passwords
Your token is your key to the network: it generates a new password every time you log on. Your PIN validates that you are the rightful owner of the token. You can choose from several varieties of tokens, all of which do the same thing: they generate a new, secure, random ‘One-Time Password’ for every logon. Anyone key-logging or shoulder-surfing your password will get a worthless string of letters and numbers, because the password works once and only once. At the next logon, a new random One-Time Password is generated.
This secure method of dual factor authentication (DFA) does what static passwords cannot: it gives you the confidence and peace of mind that a user logging on to the network really is who he or she claims to be, and not someone using a stolen, lost or shared password.
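To see how little machinery such a token needs, here is a minimal sketch of a counter-based one-time password generator in the style of HOTP (RFC 4226). It illustrates the principle, not any particular vendor's token:

```python
# One-time passwords from a shared secret plus a moving counter (RFC 4226).
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", counter)               # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # shared between token and server
for logon in range(3):            # a fresh password for every logon
    print(logon, hotp(secret, logon))
```

Because the counter moves forward after every logon, a captured code is worthless for the next one, which is exactly the property described above.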
We are a leader and innovator in Dual Factor Authentication (DFA) and Two-Factor Authentication (2FA) with our multi-award-winning server- and managed-services-based solutions.
Most two-factor systems rely on a password or PIN and something else, but that "something else" varies widely. In some cases, the "something else" is your computer. The system takes a hardware and software snapshot of your computer configuration and uses that information to identify you. This approach has the advantage of being as simple as using a password. The disadvantages are that the system has to go snooping around in your computer to identify you, and this setup ties your "identity" to a single computer.
Windows' authentication architecture makes it easy to add new forms of authentication. Windows uses a DLL called GINA (Graphical Identification and Authentication) to connect the authentication method to the Windows authentication system, and it is easy to write alternate GINA DLLs that use any authentication method the software designer wants.
Friday, October 9, 2009
FTester -- Firewall and IDS Testing tool
Description:
The Firewall Tester (FTester) is a tool designed for testing firewall filtering policies and Intrusion Detection System (IDS) capabilities.
The tool consists of two Perl scripts: a packet injector (ftest) and a listening sniffer (ftestd). The first script injects custom packets, defined in ftest.conf, with a signature in the data part, while the sniffer listens for such marked packets. Both scripts write a log file in the same format. When the two scripts are run on hosts placed on opposite sides of a firewall, a diff of the two produced files (ftest.log and ftestd.log) shows the packets that were unable to reach the sniffer due to filtering rules. Stateful inspection firewalls are handled with the 'connection spoofing' option. A script called freport is also available for automatically parsing the log files.
Of course, this is not an automated process: ftest.conf must be crafted for every different situation. Examples and rules are included in the attached configuration file.
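For readers who want to see the marked-packet idea in code, here is a conceptual Python/scapy sketch of what the two scripts do (this is not FTester's Perl code; the signature and the calling conventions are made up for illustration):

```python
# Injector and sniffer in miniature: packets carry a known signature, each
# side logs what it sent/saw, and diffing the two logs reveals what the
# firewall filtered out.
from scapy.all import IP, TCP, Raw, send, sniff

SIGNATURE = b"FTEST-MARK"

def inject(dst: str, ports: list) -> None:
    """Injector side: send marked SYN packets and log them."""
    for port in ports:
        send(IP(dst=dst) / TCP(dport=port, flags="S") / Raw(load=SIGNATURE),
             verbose=False)
        print(f"sent {dst}:{port}")

def listen(iface: str) -> None:
    """Sniffer side: log every marked packet that made it through."""
    def log(pkt):
        if Raw in pkt and SIGNATURE in bytes(pkt[Raw].load):
            print(f"got  {pkt[IP].dst}:{pkt[TCP].dport}")
    sniff(iface=iface, filter="tcp", prn=log)
```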
The IDS (Intrusion Detection System) testing feature can be used either with ftest alone or with the additional support of ftestd for handling stateful inspection IDSs; ftest can also use common IDS evasion techniques. Instead of the native configuration syntax, the script can also process a Snort rule definition file.
These two scripts were written because I was tired of doing this by hand (with packet-crafting tools and tcpdump). I hope that you enjoy them.
Andrea Barisani andrea@inversepath.com
Features:
- firewall testing
- IDS testing
- simulation of real tcp connections for stateful inspection firewalls and IDS
- connection spoofing
- IP fragmentation / TCP segmentation
- IDS evasion techniques
Requirements:
The following perl modules are required: Net::RawIP, Net::PcapUtils, NetPacket
Download:
The most recent release is ftester-1.0.tar.gz
All releases are available at http://dev.inversepath.com/ftester.
Documentation:
Man page (ftester.8)
README
TISC Insight, Volume 5, Issue 6: Testing firewalls and IDS with Ftester
Saturday, October 3, 2009
Virtualization with VMware
Transform your Business with Virtualization
Virtualization dramatically improves the efficiency and availability of resources and applications in your organization. Internal resources are underutilized under the old “one server, one application” model and IT admins spend too much time managing servers rather than innovating. An automated datacenter, built on a VMware virtualization platform, lets you respond to market dynamics faster and more efficiently than ever before. VMware vSphere delivers resources, applications—even servers—when and where they’re needed. VMware customers typically save 50-70% on overall IT costs by consolidating their resource pools and delivering highly available machines with VMware vSphere.
- Run multiple operating systems on a single computer including Windows, Linux and more.
- Let your Mac run Windows, creating a virtual PC environment for all your Windows applications.
- Reduce capital costs by increasing energy efficiency, requiring less hardware, and improving your server-to-admin ratio
- Ensure your enterprise applications perform with the highest availability and performance
- Build up business continuity through improved disaster recovery solutions and deliver high availability throughout the datacenter
- Improve enterprise desktop management & control with faster deployment of desktops and fewer support calls due to application conflicts
What is Virtualization?
Virtualization is a proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way that people compute. Today’s powerful x86 computer hardware was designed to run a single operating system and a single application. This leaves most machines vastly underutilized. Virtualization lets you run multiple virtual machines on a single physical machine, sharing the resources of that single computer across multiple environments. Different virtual machines can run different operating systems and multiple applications on the same physical computer. While others are leaping aboard the virtualization bandwagon now, VMware is the market leader in virtualization. Our technology is production-proven, used by more than 150,000 customers, including 100% of the Fortune 100.
How Does Virtualization Work?
The VMware virtualization platform is built on a business-ready architecture. Use software such as VMware vSphere and VMware ESXi (a free download) to transform or “virtualize” the hardware resources of an x86-based computer—including the CPU, RAM, hard disk and network controller—to create a fully functional virtual machine that can run its own operating system and applications just like a “real” computer. Each virtual machine contains a complete system, eliminating potential conflicts. VMware virtualization works by inserting a thin layer of software directly on the computer hardware or on a host operating system. This contains a virtual machine monitor or “hypervisor” that allocates hardware resources dynamically and transparently. Multiple operating systems run concurrently on a single physical computer and share hardware resources with each other. By encapsulating an entire machine, including CPU, memory, operating system, and network devices, a virtual machine is completely compatible with all standard x86 operating systems, applications, and device drivers. You can safely run several operating systems and applications at the same time on a single computer, with each having access to the resources it needs when it needs them.
Build your Datacenter on a Flexible Architecture
Virtualizing a single physical computer is just the beginning. VMware vSphere, the industry's first cloud operating system, scales across hundreds of interconnected physical computers and storage devices to form an entire virtual infrastructure. You don't need to assign servers, storage, or network bandwidth permanently to each application. Instead, your hardware resources are dynamically allocated when and where they're needed. This “internal cloud” means your highest-priority applications will always have the resources they need, without wasting money on excess hardware needed only for peak times. The internal cloud can connect to an external cloud as well, giving your business the flexibility, availability and scalability it needs to thrive.
Manage your Resources with the Lowest TCO
It’s not just virtualization that’s important. You need the management tools to run those machines and the ability to run the wide selection of applications and infrastructure services your business depends on. VMware lets you increase service availability while eliminating error-prone manual tasks. IT operations are more efficient and effective with VMware virtualization. Your staff will handle double or triple the number of servers, giving users access to the services they need while retaining centralized control. Deliver built-in availability, security, and performance across the board, from the desktop to the datacenter.
Why Should Your Company Virtualize?
Virtualizing your IT infrastructure lets you reduce IT costs while increasing the efficiency, utilization, and flexibility of your existing assets. Around the world, companies of every size benefit from VMware virtualization. Thousands of organizations—including all of the Fortune 100—use VMware virtualization solutions. See how virtualizing 100% of your IT infrastructure will benefit your organization.
Top 5 Reasons to Adopt Virtualization Software
- Get more out of your existing resources: Pool common infrastructure resources and break the legacy “one application to one server” model with server consolidation.
- Reduce datacenter costs by reducing your physical infrastructure and improving your server to admin ratio: Fewer servers and related IT hardware means reduced real estate and reduced power and cooling requirements. Better management tools let you improve your server to admin ratio so personnel requirements are reduced as well.
- Increase availability of hardware and applications for improved business continuity: Securely backup and migrate entire virtual environments with no interruption in service. Eliminate planned downtime and recover immediately from unplanned issues.
- Gain operational flexibility: Respond to market changes with dynamic resource management, faster server provisioning and improved desktop and application deployment.
- Improve desktop manageability and security: Deploy, manage and monitor secure desktop environments that users can access locally or remotely, with or without a network connection, on almost any standard desktop, laptop or tablet PC.
A virtual machine is a tightly isolated software container that can run its own operating system and applications as if it were a physical computer. A virtual machine behaves exactly like a physical computer and contains its own virtual (i.e., software-based) CPU, RAM, hard disk and network interface card (NIC).
An operating system can’t tell the difference between a virtual machine and a physical machine, nor can applications or other computers on a network. Even the virtual machine thinks it is a “real” computer. Nevertheless, a virtual machine is composed entirely of software and contains no hardware components whatsoever. As a result, virtual machines offer a number of distinct advantages over physical hardware.
Virtual Machines Benefits
In general, VMware virtual machines possess four key characteristics that benefit the user:
- Compatibility: Virtual machines are compatible with all standard x86 computers
- Isolation: Virtual machines are isolated from each other as if physically separated
- Encapsulation: Virtual machines encapsulate a complete computing environment
- Hardware independence: Virtual machines run independently of underlying hardware
Wednesday, September 16, 2009
BackTrack Linux
BackTrack evolved from the merger of two widespread distributions, WHAX and Auditor Security Collection. By joining forces and replacing these distributions, BackTrack gained massive popularity and was voted the #1 Security Live Distribution by insecure.org in 2006. Security professionals as well as newcomers all over the globe use BackTrack as their favorite toolset.
BackTrack has a long history and was based on many different Linux distributions; it is now based on Slackware and the corresponding live-CD scripts by Tomas M. (www.slax.org). Every package, kernel configuration and script is optimized for use by security penetration testers. Patches and automation have been added, applied or developed to provide a neat and ready-to-go environment.
After settling into a stable development procedure over the last releases and consolidating feedback and additions, the team focused on supporting more and newer hardware, as well as providing more flexibility and modularity by restructuring the build and maintenance processes. In the current version, most applications are built as individual modules, which helps speed up maintenance releases and fixes.
Because Metasploit is one of the key tools for most analysts, it is tightly integrated into BackTrack, and both projects collaborate to always provide an on-the-edge implementation of Metasploit within the BackTrack CD-ROM images and the upcoming remote-exploit.org distributed and maintained virtualization images (such as VMware appliances).
Being superior while staying easy to use is the key to a good security live CD. We took things a step further and aligned BackTrack to penetration testing methodologies and assessment frameworks (ISSAF and OSSTMM). This will help our professional users during their daily reporting nightmares.
Currently BackTrack consists of more than 300 different up-to-date tools which are logically structured according to the work flow of security professionals. This structure allows even newcomers to find the related tools to a certain task to be accomplished. New technologies and testing techniques are merged into BackTrack as soon as possible to keep it up-to-date.
Saturday, September 12, 2009
Execute Disable Bit
The challenge
Malicious buffer overflow attacks pose a significant security threat to businesses, increasing IT resource demands and in some cases destroying digital assets. In a typical attack, a malicious worm overflows a memory buffer with its own code; when that code executes, the worm propagates itself to the network and to other computers. These attacks cost businesses precious productivity time, which can mean significant financial loss.
The solution
Intel's Execute Disable Bit functionality can help prevent certain classes of malicious buffer overflow attacks when combined with a supporting operating system.
Execute Disable Bit allows the processor to mark areas of memory where application code can execute and areas where it cannot. When a malicious worm attempts to insert code into the buffer, the processor disables code execution, preventing damage and worm propagation.
Replacing older computers with Execute Disable Bit-enabled systems can halt worm attacks, reducing the need for virus-related repairs. In addition, Execute Disable Bit may eliminate the need for software patches aimed at buffer overflow attacks. By combining Execute Disable Bit with anti-virus, firewall, spyware removal, e-mail filtering software, and other network security measures, IT managers can free IT resources for other initiatives.
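The underlying mechanism is simply a per-page "executable" permission. As a rough illustration (Unix/CPython, not Intel's own example), a data buffer can be mapped without execute permission, so that on an XD-capable processor and OS any attempt to run code from it faults:

```python
# Data pages mapped without PROT_EXEC cannot be executed on an NX/XD system.
import mmap

# A writable data buffer, deliberately mapped WITHOUT execute permission.
buf = mmap.mmap(-1, 4096, prot=mmap.PROT_READ | mmap.PROT_WRITE)
buf.write(b"\x90\x90\xc3")  # looks like x86 code, but lives in a no-exec page
# Jumping into this buffer would raise a hardware fault instead of running
# the injected bytes -- which is what stops the worm's payload.
```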
Thursday, September 10, 2009
Passwords and More Authentication Fun
Often, while my students are working on a lab, I’ll take the time to search for more class demos. It seems that many of the demos we discuss in class soon get fixed. I’m not sure how or why, but this is often the case. The parameter manipulation demos, for instance, were all fixed within a few weeks of our discussing them in class. The list goes on, but this happens so frequently that I stopped tracking them. Regardless of the reasons, these vulnerabilities get fixed, and I’m glad they do. And I really don’t mind doing some basic research into finding more. So here’s today’s example, as it applies to password policy and authentication fun.
I’ll start off by listing some common authentication-related vulnerabilities I often see and then discuss some error messages I recently found on a popular travel site. I’ll also add some “malicious” ideas, just for fun, to get you thinking.
Common authentication-related vulnerabilities:
- Password policy is weak
- Password policy is strong, but not enforced
- No account lockout (or worse, account lockout tracked on the client side)
- Login error messages let me know whether the username is valid
- Password hints!
Related error messages from a recent find at a travel site:
- “A password must be 5-12 characters long and have no spaces.” (when registering an account)
- “The e-mail and password you have entered do not match. Please try again.” (when attempting to log in with an invalid username or password)
- “That e-mail address is not on file. Please try again.” (when attempting to display the password hint for an invalid account)
- Note: the password hint is happily displayed if you enter a valid username.
Some initial thoughts
- How easy is it to harvest emails or buy a few million email addresses?
- Could a quick script cycle through a list of email addresses and capture password hints? (A sketch of that idea follows this list.)
- How difficult would it be to guess a few password hints based on several hundred or thousand captured password hints?
- This site allows you to save a credit card on file and use it to book travel/hotels/more without verifying anything.
- I wonder if I could book travel on someone else’s account without them knowing. All I need to do is suppress the email confirmation or point their registered email to a different one. By the time the con has been exposed, the postcards I sent from Mexico will have been received.
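To make the point about how trivial that "quick script" would be, here is a hedged sketch (the URL, parameter name and error string are hypothetical stand-ins, not the real site's):

```python
# Cycle through addresses; differing error messages reveal valid accounts
# and hand over their password hints. Hypothetical endpoint for illustration.
import requests

HINT_URL = "https://travel.example.com/password-hint"  # made-up URL
NOT_ON_FILE = "That e-mail address is not on file"

def harvest(emails):
    hints = {}
    for email in emails:
        r = requests.post(HINT_URL, data={"email": email})
        if NOT_ON_FILE not in r.text:   # valid account: the hint is displayed
            hints[email] = r.text
    return hints
```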
Do you see where this is going? The main culprit wasn’t the password policy itself. If I had to write the equation, it would look a lot like this (seriously):
Mediocre password policy + password hints + stored credit card info + having a lot of users + nongeneric error messages + not verifying anything on checkout = free trip to Mexico.
Now, let’s go cliff diving in Acapulco!
Tuesday, September 8, 2009
Microsoft Internet Information Services Vulnerability Gives Complete Server Control
Microsoft has confirmed a vulnerability in its Internet Information Services webserver and spelled out the conditions under which it can be exploited to give an attacker complete control of the server on which it runs.
Remote execution of malicious code can be triggered only in limited cases, and even then it’s relatively easy to change settings that close that possibility. Even when remote execution is blocked, however, exploits can still touch off denial-of-service attacks that completely shut down the file transfer protocol service.
Proof-of-concept code exploiting the vulnerability was released Monday. Microsoft said it will release a fix as soon as it’s ready.
The vulnerability can be exploited only if IIS is configured to allow FTP and untrusted users have the ability to create their own directories. Users of IIS on Windows 2000 and Windows Small Business Server 2003 face the biggest threat because FTP is enabled by default, but even then, users aren’t given write access unless settings have been changed.
In that case, or in cases where users of version 5.1 have turned on FTP and write access, attackers can gain complete control over servers by listing directories with specially manipulated names that trigger a buffer overflow in the application.
Users of IIS6 also face the possibility of DoS attacks, but because the application was built with a compiler setting that automatically terminates applications under attack, remote code execution is much less likely, Microsoft said. IIS7, because it runs on the more secure Vista and Server 2008 versions of Windows, is not vulnerable.
For those at risk, Microsoft recommends the following workarounds until a patch is released:
Turn off FTP if it’s not needed
Disable the creation of new directories
Disable the ability for anonymous users to write using IIS settings
Microsoft said it is working with providers of intrusion prevention systems so they can identify attacks. Meanwhile, admins can detect attacks by reviewing log files.
Thursday, April 2, 2009
Hardening the TCP/IP stack against SYN attacks, part 1
While SYN attacks may not be entirely preventable, tuning the TCP/IP stack will help reduce the impact of SYN attacks while still allowing legitimate client traffic through. It should be noted that some SYN attacks do not attempt to upset servers at all, but instead try to consume all of the bandwidth of your Internet connection. That kind of flood is outside the scope of this article, as is the filtering of packets, which has been discussed elsewhere.
What can an administrator do when his servers are under a classic, non-bandwidth-flooding SYN attack? One of the most important steps is to enable the operating system's built-in protection mechanisms, such as SYN cookies or SynAttackProtect. Additionally, in some cases it is worth tuning parameters of the TCP/IP stack. Changing the default values of stack variables adds another layer of protection and helps better secure your hosts. In this paper I will concentrate on:
- Increasing the size of the queue of half-open connections (those in the SYN RECEIVED state).
- Decreasing the time a pending connection is kept in the SYN RECEIVED state in the queue. This is accomplished by decreasing the time before the first packet retransmission, and by either decreasing the number of packet retransmissions or turning them off entirely. A server performs packet retransmission when it does not receive an ACK packet from the client; a packet with the ACK flag set finalizes the three-way handshake.
Note that an attacker can simply send more packets with the SYN flag set, in which case the above steps will not solve the problem by themselves. However, they still increase the likelihood that legitimate clients will complete full connections.
We should remember that modifying these variables changes the behavior of the TCP/IP stack, and in some cases the values can be too strict. After the modification, we have to make sure that our server can still communicate properly with other hosts. For example, disabling packet retransmissions in some low-bandwidth environments can cause legitimate requests to fail. In this article you will find a description of the TCP/IP variables for the following operating systems: Microsoft Windows 2000, RedHat Linux 7.3, Sun Solaris 8 and HP-UX 11.00. These variables are similar or the same in current releases.
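As a taste of what such tuning looks like in practice, here is a minimal sketch for the Linux (RedHat) case, using the /proc interface to the kinds of variables discussed in this article. The values are examples, not recommendations, and equivalent sysctl(8) one-liners exist:

```python
# Inspect and set Linux TCP/IP stack variables relevant to SYN attacks.
# Requires root; Linux only.
PARAMS = {
    "/proc/sys/net/ipv4/tcp_max_syn_backlog": "1024",  # half-open queue size
    "/proc/sys/net/ipv4/tcp_synack_retries": "3",      # SYN+ACK retransmits
    "/proc/sys/net/ipv4/tcp_syncookies": "1",          # enable SYN cookies
}

for path, new_value in PARAMS.items():
    with open(path) as f:
        print(f"{path} = {f.read().strip()}")  # current value
    with open(path, "w") as f:
        f.write(new_value)                     # apply the tuned value
```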
Definitions: SYN flooding and SYN spoofing
To increase the effectiveness of a SYN flood attack, an attacker spoofs the source IP addresses of the SYN packets. In this case the victim host cannot finish the initialization process in a short time because the source IP address can be unreachable. This malicious operation is called a SYN spoofing attack.
We need to know that the process of creating a full connection takes some time. Initially, after receiving a connection request (a packet with the SYN flag set), a victim host puts this half-open connection into the backlog queue and sends out the first response (a packet with the SYN and ACK flags set). When the victim does not receive a response from the remote host, it tries to retransmit this SYN+ACK packet until it times out, and then finally removes the half-open connection from the backlog queue. In some operating systems this process for a single SYN request can take about 3 minutes! In this document you will learn how to change this behavior. The other important thing to know is that the operating system can handle only a defined number of half-open connections in the backlog queue. This number is controlled by the size of the backlog queue. For instance, the default backlog size is 256 for RedHat 7.3 and 100 for Windows 2000 Professional. When this size is reached, the system will no longer accept incoming connection requests.
How to detect a SYN attack
It is very simple to detect SYN attacks. The netstat command shows us how many connections are currently in the half-open state. The half-open state is described as SYN_RECEIVED in Windows and as SYN_RECV in Unix systems.
# netstat -n -p TCP
tcp 0 0 10.100.0.200:21 237.177.154.8:25882 SYN_RECV -
tcp 0 0 10.100.0.200:21 236.15.133.204:2577 SYN_RECV -
tcp 0 0 10.100.0.200:21 127.160.6.129:51748 SYN_RECV -
tcp 0 0 10.100.0.200:21 230.220.13.25:47393 SYN_RECV -
tcp 0 0 10.100.0.200:21 227.200.204.182:60427 SYN_RECV -
tcp 0 0 10.100.0.200:21 232.115.18.38:278 SYN_RECV -
tcp 0 0 10.100.0.200:21 229.116.95.96:5122 SYN_RECV -
tcp 0 0 10.100.0.200:21 236.219.139.207:49162 SYN_RECV -
tcp 0 0 10.100.0.200:21 238.100.72.228:37899 SYN_RECV -
...
We can also count how many half-open connections are in the backlog queue at the moment. In the example below, 769 connections (for TELNET) in the SYN RECEIVED state are kept in the backlog queue.
# netstat -n -p TCP | grep SYN_RECV | grep :23 | wc -l
769
The other method for detecting SYN attacks is to print TCP statistics and look at the TCP parameters which count dropped connection requests. While under attack, the values of these parameters grow rapidly.
In this example we watch the value of the TcpHalfOpenDrop parameter on a Sun Solaris machine.
# netstat -s -P tcp | grep tcpHalfOpenDrop
tcpHalfOpenDrop = 473
It is important to note that every TCP port has its own backlog queue, but only one variable of the TCP/IP stack controls the size of backlog queues for all ports.
The backlog queue
The backlog queue is a large memory structure used to handle incoming packets with the SYN flag set until the moment the three-way handshake process is completed. An operating system allocates part of the system memory for every incoming connection. We know that every TCP port can handle a defined number of incoming requests. The backlog queue controls how many half-open connections can be handled by the operating system at the same time. When the maximum number of incoming connections is reached, subsequent requests are silently dropped by the operating system.
As mentioned before, when we detect a lot of connections in the SYN RECEIVED state, the host is probably under a SYN flooding attack. Moreover, the source IP addresses of these incoming packets can be spoofed. To limit the effects of SYN attacks we should enable some built-in protection mechanisms. Additionally, we can sometimes use techniques such as increasing the backlog queue size and minimizing the total time a pending connection is kept in allocated memory (in the backlog queue).
Saturday, March 28, 2009
Server Virtualization
Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of making the most of physical resources and maximizing the investment in hardware. Since Moore's law has accurately predicted the exponential growth of computing power, and hardware requirements for the most part have not changed to accomplish the same computing tasks, it is now feasible to turn a very inexpensive 1U dual-socket dual-core commodity server into eight or even 16 virtual servers that run 16 virtual operating systems. Virtualization technology is a way of achieving higher server density. However, it does not actually increase total computing power; it decreases it slightly because of overhead. But since a modern $3,000 two-socket four-core server is more powerful than a $30,000 eight-socket eight-core server was four years ago, we can exploit this newfound hardware power by increasing the number of logical operating systems it hosts. This slashes the majority of hardware acquisition and maintenance costs, which can result in significant savings for any company or organization.
When to use virtualization
Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage. Virtualization should not be used for high-performance applications where one or more servers need to be clustered together to meet the performance requirements of a single application, because the added overhead and complexity would only reduce performance. We're essentially taking a 12 GHz server (four cores times 3 GHz) and chopping it up into sixteen 750 MHz servers. But if eight of those servers are in off-peak or idle mode, the remaining eight servers will have nearly 1.5 GHz available to them.
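That arithmetic is easy to adapt to your own hardware; here is a quick sanity-check sketch using the figures from the paragraph above:

```python
# Per-VM CPU share under full load and when half the VMs sit idle.
cores, clock_ghz, vms = 4, 3.0, 16
total_ghz = cores * clock_ghz                            # 12 GHz raw capacity
print(total_ghz / vms * 1000, "MHz per VM, all busy")    # 750.0
idle = 8
print(total_ghz / (vms - idle), "GHz per VM, half idle") # 1.5
```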
While some in the virtualization industry like to tout high CPU utilization numbers as an indication of optimum hardware usage, this advice should not be taken to the extreme where application response times become excessive. A simple rule of thumb is to never let a server exceed 50% CPU utilization during peak loads and, more importantly, never let application response times exceed a reasonable SLA (Service Level Agreement). Most modern servers used for in-house server duties run at 1 to 5% CPU utilization. Running eight operating systems on a single physical server would elevate peak CPU utilization to around 50%, but the average would be much lower, since the peaks and valleys of the virtual operating systems tend to cancel each other out.
While CPU overhead in most of the virtualization solutions available today is minimal, I/O (Input/Output) overhead for storage and networking throughput is another story. For servers with extremely high storage or hardware I/O requirements, it would be wise to run them on bare metal even if their CPU requirements can be met inside a virtual environment. Both XenSource and Virtual Iron (which will soon be Xen-hypervisor based) promise to minimize I/O overhead, but they're both in beta at this point, so there haven't been any major independent benchmarks to verify this.
How to avoid the "all your eggs in one basket" syndrome
One of the big concerns with virtualization is the "all your eggs in one basket" syndrome. Is it really wise to put all of your critical servers into a single physical server? The answer is absolutely not! The easiest way to avoid this liability is to make sure that a single service isn't only residing on a single server. Let's take for example the following server types:
HTTP
FTP
DNS
DHCP
RADIUS
LDAP
File Services using Fiber Channel or iSCSI storage
Active Directory services
We can put each of these types of servers on at least two physical servers and gain complete redundancy. These types of services are relatively easy to cluster because they're easy to switch over when a single server fails. When a single physical server fails or needs servicing, the other virtual server on the other physical server would automatically pick up the slack. By straddling multiple physical servers, these critical services never need to be down because of a single hardware failure.
For more complex services such as an Exchange Server, Microsoft SQL, MySQL, or Oracle, clustering technologies could be used to synchronize two logical servers hosted across two physical servers; this method would generally cause some downtime during the transition, which could take up to five minutes. This isn't due to virtualization but rather the complexity of clustering which tends to require time for transitioning. An alternate method for handling these complex services is to migrate the virtual server from the primary physical server to the secondary physical server. In order for this to work, something has to constantly synchronize memory from one physical server to the other so that a failover could be done in milliseconds while all services can remain functional.
Physical to virtual server migration
Any respectable virtualization solution will offer some kind of P2V (Physical to Virtual) migration tool. The P2V tool will take an existing physical server and make a virtual hard drive image of that server with the necessary modifications to the driver stack so that the server will boot up and run as a virtual server. The benefit of this is that you don't need to rebuild your servers and manually reconfigure them as a virtual server—you simply suck them in with the entire server configuration intact!
So if you have a data center full of aging servers running on sub-GHz servers, these are the perfect candidates for P2V migration. You don't even need to worry about license acquisition costs because the licenses are already paid for. You could literally take a room with 128 sub-GHz legacy servers and put them into eight 1U dual-socket quad-core servers with dual-Gigabit Ethernet and two independent iSCSI storage arrays all connected via a Gigabit Ethernet switch. The annual hardware maintenance costs alone on the old server hardware would be enough to pay for all of the new hardware! Just imagine how clean your server room would look after such a migration. It would all fit inside of one rack and give you lots of room to grow.
As an added bonus of virtualization, you get a disaster recovery plan because the virtualized images can be used to instantly recover all your servers. Ask yourself what would happen now if your legacy server died. Do you even remember how to rebuild and reconfigure all of your servers from scratch? (I'm guessing you're cringing right about now.) With virtualization, you can recover that Active Directory and Exchange Server in less than an hour by rebuilding the virtual server from the P2V image.
Patch management for virtualized servers
Patch management of virtualized servers isn't all that different from that of regular servers, because each virtual operating system is its own independent virtual hard drive. You still need a patch management system that patches all of your servers, but there may be interesting developments in the future where you may be able to patch multiple operating systems at the same time if they share common operating system or application binaries. Ideally, you would be able to assign a patch level to an individual server or a group of similar servers. For now, you will need to patch virtual operating systems as you would any other system, but there will be innovations in the virtualization sector that you won't be able to match with physical servers.
Licensing and support considerations
A big concern with virtualization is software licensing. The last thing anyone wants to do is pay for 16 copies of a license for 16 virtual sessions running on a single computer. Software licensing often dwarfs hardware costs, so it would be foolish to run a $20,000 software license on a shared piece of hardware. In this situation, it's best to run that license on the fastest physical server possible, without any virtualization layer adding overhead.
For something like Windows Server 2003 Standard Edition, you would need to pay for each virtual session running on a physical box. The exception to this rule is if you have the Enterprise Edition of Windows Server 2003, which allows you to run four virtual copies of Windows Server 2003 on a single machine with only one license. This Microsoft licensing policy applies to any type of virtualization technology that is hosting the Windows Server 2003 guest operating systems.
If you're running open source software, you don't have to worry about licensing because that's always free—what you do need to be concerned about is the support contracts. If you're considering virtualizing open source operating systems or open source software, make sure you calculate the support costs. If the support costs are substantial for each virtual instance of the software you're going to run, it's best to squeeze the most out of your software costs by putting it on its own dedicated server. It's important to remember that hardware is often dwarfed by software licensing and/or support costs. The trick is to find the right ratio of hardware to licensing/support costs. When calculating hardware costs, be sure to calculate the costs of hardware maintenance, power usage, cooling, and rack space.
Friday, January 23, 2009
Worldsecure VPN
Do you need to download via a fast internet connection?
Do you need a secure connection to buy your products with a credit or debit card?
For registration, please contact ipsecure0@googlemail.com or worldsecure@hotmail.com.uk.
Thursday, January 15, 2009
Virtual Private Server (VPS) is a means of splitting a single physical server into multiple virtual servers, where each VPS runs on its own and is isolated from the others.
VPS is a technology that fills the void between shared hosting and dedicated servers, allowing root-level access without requiring sole ownership of a server. Each VPS has its own set of processes and resource management, and behaves exactly like a stand-alone server. It is suitable for those who wish to have ownership of a server but do not want to invest in a physical one.
In other words, VPS technology separates a physical server into several independent hosting spaces, or VPSs, each isolated from the others. As such, you can create and manage multiple sites and domains and take full control of your VPS with root/administrator access, which allows you to access the virtual hard disk and RAM and to reboot your private server independently of other VPSs.
Hostpro2u uses Virtuozzo™-powered VPS technology, the leading industry standard for performance, reliability and flexibility.
Each VPS has its own processes and users and files, and provides full root access.
> Each VPS can have its own IP addresses, port numbers, tables, filtering and routing rules.
> Each VPS can have its own system configuration files and can house an application.
> Each VPS can have its own versions of system libraries, or modify existing ones.
> For example: multiple distributions of Linux can reside on the same physical server.
VPS performs and executes exactly like an isolated stand-alone server:
> Standard: Includes CPU, disk space and network I/O guarantees
> Unique: Guarantees on memory - user and kernel, physical and virtual
> Unique: Guarantees on disk I/O and many other critical resources (over 20)
Virtual Private Servers (VPSs) are not Virtual Machines (VMs)!
> Runs only the same OS as the root OS - Linux on Linux, Windows on Windows, etc.
> 10-100 times better efficiency, dynamic QoS changes for LB and more
Why Virtuozzo?
Virtuozzo is the only true VPS technology. It is the most complete, with tools, docs, training, and XML and CLI interfaces; proven, with over 500 providers worldwide; the most powerful, with VPSs fully identical to standalone servers; and the most secure, with full isolation and flexible resource control for each VPS. Its most important advantage, however, is its efficiency, which allows over 5,000 VPSs to run on a single server and allows a single VPS to scale to the full size of the server (16 CPUs, 64 GB RAM), all with less than a percentage point of overhead.