Posts Tagged ‘CISSP’

Differential Cryptanalysis

January 4, 2010

Differential cryptanalysis also has the goal of uncovering the key that was used for encryption. It was publicly introduced in 1990 by Eli Biham and Adi Shamir as an attack against DES, and it turned out to be an effective and successful attack against DES and other block algorithms.

The attacker takes two messages of plaintext and follows the changes that take place to the blocks as they go through the different S-boxes. (Each message is being encrypted with the same key.) The differences identified in the resulting ciphertext values are used to map probability values to different possible key values. The attacker continues this process with several more sets of messages and reviews the common key probability values. One key will continue to show itself as the most probable key used in the encryption processes. Since the attacker chooses the different plaintext messages for this attack, it is considered to be a type of chosen-plaintext attack.
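
The core bookkeeping can be sketched with a toy 4-bit S-box (a made-up substitution table, not one from DES): tallying how often each input difference produces each output difference yields exactly the kind of probability values the attacker works with.

```python
# Toy sketch of differential cryptanalysis groundwork: build the
# difference distribution table of a hypothetical 4-bit S-box.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def difference_table(sbox):
    """table[dx][dy] = number of inputs x where inputs differing by dx
    produce outputs differing by dy (differences are XORs)."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

table = difference_table(SBOX)
# Identical inputs always yield identical outputs.
assert table[0][0] == 16
# Any entry much larger than average marks a high-probability
# differential the attacker can exploit to rank candidate keys.
best = max((table[dx][dy], dx, dy)
           for dx in range(1, 16) for dy in range(16))
print("best differential (count, dx, dy):", best)
```

In a real attack, differentials like this are chained across rounds, and ciphertext differences observed under the unknown key are matched against the table to assign probabilities to key candidates.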

Linear Cryptanalysis

Linear cryptanalysis is another type of attack that carries out functions to identify the highest probability of a specific key employed during the encryption process using a block algorithm.

The attacker carries out a known-plaintext attack on several different messages encrypted with the same key. The more messages the attacker can use and put through this type of attack, the higher the confidence level in the probability of a specific key value.

The attacker evaluates the input and output values for each S-box and the probability of input values ending up in a specific output combination. Identifying specific output combinations allows him to assign probability values to different keys until one shows a continual pattern of having the highest probability.
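
The quantity being hunted for is a bias: how far a linear relation between input bits and output bits strays from the 50/50 behavior an ideal cipher would show. A rough sketch for a sample 4-bit S-box (the mask-and-parity convention is the standard one from linear cryptanalysis):

```python
# Toy sketch: measure the bias of linear approximations over a
# sample 4-bit S-box, the quantity linear cryptanalysis exploits.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def parity(v):
    """Parity (XOR) of the bits selected by a mask-and operation."""
    return bin(v).count("1") & 1

def bias(in_mask, out_mask):
    """Fraction of inputs where the masked input parity equals the
    masked output parity, minus 1/2. Zero would be ideal."""
    hits = sum(parity(x & in_mask) == parity(SBOX[x] & out_mask)
               for x in range(16))
    return hits / 16 - 0.5

# A nonzero bias for some nontrivial mask pair is what the attacker
# accumulates across many known plaintext/ciphertext pairs.
print(bias(0b1000, 0b1000))
```

The larger the absolute bias of an approximation, the fewer known-plaintext messages the attacker needs before one key candidate consistently stands out.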

Side-Channel Attacks

All of the attacks we have covered thus far have been based mainly on the mathematics of cryptography. Using plaintext and ciphertext involves high-powered mathematical tools that are needed to uncover the key used in the encryption process.

But what if we took a different approach? Let’s say we see something that looks like a duck, walks like a duck, sounds like a duck, swims in water, and eats bugs and small fish. We could confidently conclude that this is a duck. Similarly, in cryptography, we can review facts and infer the value of an encryption key. For example, we could detect how much power consumption is used for encryption and decryption (the fluctuation of electronic voltage). We could also intercept the radiation emissions released and then calculate how long the processes take. Looking around the cryptosystem, or its attributes and characteristics, is different from looking into the cryptosystem and trying to defeat it through mathematical computations.

If I want to figure out what you do for a living, but I don’t want you to know I am doing this type of reconnaissance work, I won’t ask you directly. Instead, I will find out when you go to work and come home, the types of clothing you wear, the items you carry, whom you talk to… or I can just follow you to work. These are examples of side channels.

So, in cryptography, gathering “outside” information with the goal of uncovering the encryption key is just another way of attacking a cryptosystem.

An attacker could measure power consumption, radiation emissions, and the time it takes for certain types of data processing. With this information, he can work backward by reverse-engineering the process to uncover an encryption key or sensitive data. A power attack measures the fluctuations in a device’s power consumption during cryptographic operations. This type of attack has been successful in uncovering confidential information from smart cards. In 1995, RSA private keys were uncovered by measuring the relative time cryptographic operations took.
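
The timing leak can be illustrated without any cryptography at all. This sketch (the `naive_compare` helper is invented for illustration) models a comparison routine that exits at the first mismatching byte, so the "time" spent reveals how many leading bytes of a guess are correct:

```python
# Toy sketch of a timing side channel: a naive comparison leaks how
# much of a guess matches the secret. Real code should use a
# constant-time comparison such as hmac.compare_digest instead.
def naive_compare(secret, guess):
    """Returns (match, steps); steps models elapsed comparison time."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps    # early exit: the leak
    return secret == guess, steps

SECRET = b"s3cret"
# A guess with a longer correct prefix takes measurably more "time",
# letting an attacker recover the secret one byte at a time.
_, t_bad = naive_compare(SECRET, b"x.....")
_, t_better = naive_compare(SECRET, b"s3c...")
assert t_better > t_bad
```

This is the same principle as the 1995 RSA timing results mentioned above: the attacker never touches the math, only the clock.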

The idea is that, instead of attacking a device head on, just watch how it performs to figure out how it works. In biology, scientists can choose to carry out a noninvasive experiment, which will watch an organism eat, sleep, mate, and so on. This type of approach learns about the organism through understanding its behaviors instead of killing it and looking at it from the inside out.

Replay Attacks

A big concern in distributed environments is the replay attack, in which an attacker captures some type of data and resubmits it with the hopes of fooling the receiving device into thinking it is legitimate information. Many times, the data captured and resubmitted are authentication information, and the attacker is trying to authenticate herself as someone else to gain unauthorized access.

Timestamps and sequence numbers are two countermeasures to replay attacks. Packets can contain sequence numbers, so each machine will expect a specific number on each receiving packet. If a packet has a sequence number that has been previously used, this is an indication of a replay attack. Packets can also be timestamped. A threshold can be set on each computer to only accept packets within a certain timeframe. If a packet is received that is past this threshold, it can help identify a replay attack.
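
Both countermeasures can be sketched in a few lines; the `ReplayGuard` class and the 30-second threshold below are invented for illustration:

```python
# Sketch of the two replay countermeasures described above: a receiver
# that rejects reused sequence numbers and stale timestamps.
import time

class ReplayGuard:
    def __init__(self, max_age_seconds=30):
        self.max_age = max_age_seconds
        self.seen = set()            # sequence numbers already accepted

    def accept(self, seq, timestamp, now=None):
        now = time.time() if now is None else now
        if seq in self.seen:
            return False             # reused sequence number: replay
        if now - timestamp > self.max_age:
            return False             # packet past the time threshold
        self.seen.add(seq)
        return True

guard = ReplayGuard()
assert guard.accept(seq=1, timestamp=1000.0, now=1005.0)
assert not guard.accept(seq=1, timestamp=1000.0, now=1006.0)  # replayed
assert not guard.accept(seq=2, timestamp=900.0, now=1000.0)   # too old
```

Real protocols (IPSec's anti-replay window, for instance) use a sliding window rather than an ever-growing set, but the accept/reject logic is the same idea.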

Just in case there aren’t enough attacks here for you, we have three more, which are quickly introduced in the following sections.

Algebraic Attacks

Algebraic attacks analyze the vulnerabilities in the mathematics used within the algorithm and exploit the intrinsic algebraic structure. For instance, attacks on the “textbook” version of the RSA cryptosystem exploit properties of the algorithm such as the fact that the encryption of a raw “0” message is “0”.

Analytic

Analytic attacks identify algorithm structural weaknesses or flaws, as opposed to brute force attacks, which simply exhaust all possibilities without respect to the specific properties of the algorithm. Examples include the meet-in-the-middle attack on Double DES and RSA factoring attacks.

Statistical

Statistical attacks identify and exploit statistical weaknesses in algorithm design, such as an imbalance between the number of 0s and the number of 1s in a generator’s output. For instance, a random number generator may be biased. If keys are taken directly from the output of the RNG, then the distribution of keys would also be biased, and the statistical knowledge about the bias could be used to reduce the search time for the keys.
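
The bias check itself is simple to sketch; the 60/40 generator below is an invented stand-in for a flawed RNG:

```python
# Sketch: detect a biased bit source by comparing the fraction of
# 1 bits to the 0.5 a fair generator would produce.
import random

def ones_fraction(bits):
    return sum(bits) / len(bits)

rng = random.Random(42)              # seeded for reproducibility
fair = [rng.randint(0, 1) for _ in range(10_000)]
biased = [1 if rng.random() < 0.6 else 0 for _ in range(10_000)]

# The fair source hovers near 0.5; the biased one sits near 0.6.
# Keys drawn from the biased source cluster in a smaller region of
# the keyspace, shrinking the attacker's search.
print(ones_fraction(fair), ones_fraction(biased))
```

Production-quality testing uses full statistical suites (chi-squared tests, runs tests, and so on), but a raw bit count is often enough to expose a badly broken generator.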

Source: http://www.logicalsecurity.com/resources/resources_articles.html

Review full Cryptography Chapter at www.LogicalSecurity.com

http://logicalsecurity-ls.blogspot.com/2009/03/differential-cryptanalysis.html

Attacks

December 31, 2009

Eavesdropping and sniffing data as it passes over a network are considered passive attacks because the attacker is not affecting the protocol, algorithm, key, message, or any parts of the encryption system. Passive attacks are hard to detect, so in most cases methods are put in place to try to prevent them rather than detect and stop them.

Altering messages, modifying system files, and masquerading as another individual are acts that are considered active attacks because the attacker is actually doing something instead of sitting back and gathering data. Passive attacks are usually used to gain information prior to carrying out an active attack. The following sections address some active attacks that relate to cryptography.

Ciphertext-Only Attack

In this type of attack, the attacker has the ciphertext of several messages. Each of the messages has been encrypted using the same encryption algorithm. The attacker’s goal is to discover the key used in the encryption process. Once the attacker figures out the key, she can decrypt all other messages encrypted with the same key.

A ciphertext-only attack is the most common type of attack to launch because it is very easy to get ciphertext by sniffing someone’s traffic, but it is the hardest attack to succeed at because the attacker has so little information about the encryption process.

Known-Plaintext Attacks

In known-plaintext attacks, the attacker has the plaintext and ciphertext of one or more messages. Again, the goal is to discover the key used to encrypt the messages so other messages can be deciphered and read.

Messages usually start with the same type of beginning and close with the same type of ending. An attacker might know that each message a general sends out to his commanders always starts with certain greetings and ends with specific salutations and the general’s name and contact information. In this instance, the attacker has some of the plaintext (the data that are the same on each message) and can capture an encrypted message, and therefore capture the ciphertext. Once a few pieces of the puzzle are discovered, the rest is accomplished by reverse-engineering, frequency analysis, and brute force attempts. Known-plaintext attacks were used by the United States against the Germans and the Japanese during World War II.
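
A toy illustration using a repeating-XOR "cipher" (not any real algorithm, but the weakest possible case of the same idea): if the attacker knows the standard opening of every message, the key falls out by XORing it against the captured ciphertext.

```python
# Toy known-plaintext attack against a repeating-XOR cipher.
def xor_cipher(data, key):
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

KEY = b"KEY!"                                # unknown to the attacker
message = b"FROM: GENERAL HQ\nAttack at dawn."
ciphertext = xor_cipher(message, KEY)        # what the attacker sniffs

# The attacker knows every message opens with the same greeting.
known = b"FROM: GENERAL HQ"
recovered = bytes(c ^ p for c, p in zip(ciphertext, known))[:4]
assert recovered == KEY                      # key recovered at once

# Every other message under the same key can now be read.
assert xor_cipher(ciphertext, recovered) == message
```

Modern block ciphers are designed so that known plaintext does not collapse this easily; against them, the known pairs feed statistical techniques such as the linear cryptanalysis described earlier rather than direct key recovery.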

Chosen-Plaintext Attacks

In chosen-plaintext attacks, the attacker has the plaintext and ciphertext, but can choose the plaintext that gets encrypted to see the corresponding ciphertext. This gives her more power and possibly a deeper understanding of the way the encryption process works so she can gather more information about the key being used. Once the key is discovered, other messages encrypted with that key can be decrypted.

How would this be carried out? I can e-mail a message to you that I think you not only will believe, but that you will also panic about, encrypt, and send to someone else. Suppose I send you an e-mail that states, “The meaning of life is 42.” You may think you have received an important piece of information that should be concealed from others, everyone except your friend Bob, of course. So you encrypt my message and send it to Bob. Meanwhile I am sniffing your traffic and now have a copy of the plaintext of the message, because I wrote it, and a copy of the ciphertext.

Chosen-Ciphertext Attacks

In chosen-ciphertext attacks, the attacker can choose the ciphertext to be decrypted and has access to the resulting decrypted plaintext. Again, the goal is to figure out the key. This is a harder attack to carry out compared to the previously mentioned attacks, and the attacker may need to have control of the system that contains the cryptosystem.


http://logicalsecurity-ls.blogspot.com/2009/03/attacks.html

Internet Security Protocol

December 31, 2009

The Internet Protocol Security (IPSec) protocol suite provides a method of setting up a secure channel for protected data exchange between two devices. The devices that share this secure channel can be two servers, two routers, a workstation and a server, or two gateways between different networks. IPSec is a widely accepted standard for providing network layer protection. It can be more flexible and less expensive than end-to-end and link encryption methods.

IPSec has strong encryption and authentication methods, and although it can be used to enable tunneled communication between two computers, it is usually employed to establish virtual private networks (VPNs) among networks across the Internet.

IPSec is not a strict protocol that dictates the type of algorithm, keys, and authentication method to use. Rather, it is an open, modular framework that provides a lot of flexibility for companies when they choose to use this type of technology. IPSec uses two basic security protocols: Authentication Header (AH) and Encapsulating Security Payload (ESP). AH is the authenticating protocol, and ESP is an authenticating and encrypting protocol that uses cryptographic mechanisms to provide source authentication, confidentiality, and message integrity.

IPSec can work in one of two modes: transport mode, in which the payload of the message is protected, and tunnel mode, in which the payload and the routing and header information are protected. ESP in transport mode encrypts the actual message information so it cannot be sniffed and uncovered by an unauthorized entity. Tunnel mode provides a higher level of protection by also protecting the header and trailer data an attacker may find useful. Figure 8-26 shows the high-level view of the steps of setting up an IPSec connection.

Each device will have at least one security association (SA) for each VPN it uses. The SA, which is critical to the IPSec architecture, is a record of the configurations the device needs to support an IPSec connection. When two devices complete their handshaking process, which means they have agreed upon a long list of parameters they will use to communicate, these data must be recorded and stored somewhere, which is in the SA.

The SA can contain the authentication and encryption keys, the agreed-upon algorithms, the key lifetime, and the source IP address. When a device receives a packet via the IPSec protocol, it is the SA that tells the device what to do with the packet. So if device B receives a packet from device C via IPSec, device B will look to the corresponding SA to tell it how to decrypt the packet, how to properly authenticate the source of the packet, which key to use, and how to reply to the message if necessary.

SAs are directional, so a device will have one SA for outbound traffic and a different SA for inbound traffic for each individual communication channel. If a device is connecting to three devices, it will have at least six SAs, one for each inbound and outbound connection per remote device. So how can a device keep all of these SAs organized and ensure that the right SA is invoked for the right connection? With the mighty security parameter index (SPI), that’s how. Each device has an SPI that keeps track of the different SAs and tells the device which one is appropriate to invoke for the different packets it receives. The SPI value is in the header of an IPSec packet, and the device reads this value to tell it which SA to consult.
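
The SPI-to-SA lookup can be sketched as a simple table; the field names below are illustrative, and real implementations key the SA database on more than the SPI alone:

```python
# Simplified sketch of an IPSec SA database indexed by SPI.
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    spi: int
    peer_ip: str
    direction: str        # "inbound" or "outbound"
    cipher: str           # agreed-upon algorithm
    key: bytes
    lifetime_seconds: int

sa_database = {}          # SPI -> SA

def register(sa):
    sa_database[sa.spi] = sa

def sa_for_packet(spi):
    """Read the SPI from a packet header; None means no SA: drop it."""
    return sa_database.get(spi)

register(SecurityAssociation(0x1001, "203.0.113.7", "inbound",
                             "aes-256-cbc", b"\x00" * 32, 3600))
sa = sa_for_packet(0x1001)
assert sa is not None and sa.cipher == "aes-256-cbc"
assert sa_for_packet(0x9999) is None      # unknown SPI: discard
```

A device talking to three peers would simply hold six such entries, one inbound and one outbound per peer, each retrievable by its own SPI.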

IPSec can authenticate the sending devices of the packet by using MAC (covered in the earlier section, “The One-Way Hash”). The ESP protocol can provide authentication, integrity, and confidentiality if the devices are configured for this type of functionality.

So if a company just needs to make sure it knows the source of the sender and must be assured of the integrity of the packets, it would choose to use AH. If the company would like to use these services and also have confidentiality, it would use the ESP protocol because it provides encryption functionality. In most cases, the reason ESP is employed is because the company must set up a secure VPN connection.

It may seem odd to have two different protocols that provide overlapping functionality. AH provides authentication and integrity, and ESP can provide those two functions and confidentiality. Why even bother with AH then? In most cases, the reason has to do with whether the environment is using network address translation (NAT). IPSec will generate an integrity check value (ICV), which is really the same thing as a MAC value, over a portion of the packet. Remember that the sender and receiver generate their own values. In IPSec, it is called an ICV value. The receiver compares her ICV value with the one sent by the sender. If the values match, the receiver can be assured the packet has not been modified during transmission. If the values are different, the packet has been altered and the receiver discards the packet.

The AH protocol calculates this ICV over the data payload, transport, and network headers. If the packet then goes through a NAT device, the NAT device changes the IP address of the packet. That is its job. This means a portion of the data (network header) that was included to calculate the ICV value has now changed, and the receiver will generate an ICV value that is different from the one sent with the packet, which means the packet will be discarded automatically.

The ESP protocol follows similar steps, except it does not include the network header portion when calculating its ICV value. When the NAT device changes the IP address, it will not affect the receiver’s ICV value because it does not include the network header when calculating the ICV.
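
The NAT problem can be sketched with an HMAC standing in for the ICV calculation. This is a simplification (AH actually skips mutable header fields, and real ICVs are computed per the protocol specifications), but the effect of a rewritten address is the same:

```python
# Sketch of why NAT breaks AH but not ESP: AH's integrity check
# covers the IP header, ESP's does not.
import hmac, hashlib

KEY = b"shared-integrity-key"          # illustrative shared key

def icv(data):
    """HMAC-SHA256 standing in for the IPSec integrity check value."""
    return hmac.new(KEY, data, hashlib.sha256).digest()

ip_header = b"src=10.0.0.5"
payload = b"application data"

ah_icv = icv(ip_header + payload)      # AH: header included
esp_icv = icv(payload)                 # ESP: header excluded

# NAT rewrites the source address while the packet is in transit.
nat_header = b"src=203.0.113.9"

# The receiver recomputes both ICVs over what actually arrived:
assert icv(nat_header + payload) != ah_icv   # AH check fails: drop
assert icv(payload) == esp_icv               # ESP check still passes
```

Since the receiver discards any packet whose recomputed ICV mismatches, AH traffic dies at the first NAT device while ESP traffic survives, which is why ESP dominates in NATted environments.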

Because IPSec is a framework, it does not dictate which hashing and encryption algorithms are to be used or how keys are to be exchanged between devices. Key management can be handled manually or automated by a key management protocol. The de facto standard for IPSec is to use Internet Key Exchange (IKE), which is a combination of the ISAKMP and OAKLEY protocols. The Internet Security Association and Key Management Protocol (ISAKMP) is a key exchange architecture that is independent of the type of keying mechanisms used. Basically, ISAKMP provides the framework of what can be negotiated to set up an IPSec connection (algorithms, protocols, modes, keys). The OAKLEY protocol is the one that carries out the negotiation process. You can think of ISAKMP as providing the playing field (the infrastructure) and OAKLEY as the guy running up and down the playing field (carrying out the steps of the negotiation).

IPSec is very complex with all of its components and possible configurations. This complexity is what provides for a great degree of flexibility, because a company has many different configuration choices to achieve just the right level of protection. If this is all new to you and still confusing, please review one or more of the following references to help fill in the gray areas.


http://logicalsecurity-ls.blogspot.com/2009/03/internet-security-protocol.html

Secure Shell – SSH

December 31, 2009

Secure Shell (SSH) functions as a type of tunneling mechanism that provides terminal-like access to remote computers. SSH is a program and a set of protocols that work together to let a user log in to another computer over a network through a secure tunnel. For example, the program can let Paul, who is on computer A, access computer B’s files, run applications on computer B, and retrieve files from computer B without ever physically touching that computer. SSH provides authentication and secure transmission over vulnerable channels like the Internet. SSH should be used instead of Telnet, FTP, rlogin, rexec, or rsh, which provide the same type of functionality SSH offers but in a much less secure manner. The two computers go through a handshaking process and exchange (via Diffie-Hellman) a session key that will be used during the session to encrypt and protect the data sent.

Once the handshake takes place and a secure channel is established, the two computers have a pathway to exchange data with the assurance that the information will be encrypted and its integrity will be protected.
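
The Diffie-Hellman exchange mentioned above can be sketched with toy-sized numbers (real SSH uses groups thousands of bits wide, and the secrets are random rather than fixed):

```python
# Toy Diffie-Hellman: both sides derive the same session key without
# ever sending the key itself over the wire.
p, g = 23, 5                 # public prime and generator (toy-sized)

a = 6                        # client's private value
b = 15                       # server's private value

A = pow(g, a, p)             # client sends A over the channel
B = pow(g, b, p)             # server sends B over the channel

client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret   # shared session key material
```

An eavesdropper sees only p, g, A, and B; recovering the shared value from those requires solving the discrete logarithm problem, which is infeasible at real key sizes.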


http://logicalsecurity-ls.blogspot.com/2009/03/secure-shell_16.html

Cookies

December 31, 2009

Cookies are text files that a browser maintains on a user’s hard drive. Cookies have different uses, and some are used for demographic and advertising information. As a user travels from site to site on the Internet, the sites could be writing data to the cookies stored on the user’s system. The sites can keep track of the user’s browsing and spending habits and the user’s specific customization for certain sites. For example, if Emily goes to mainly gardening sites on the Internet, those sites will most likely record this information and the types of items in which she shows most interest. Then, when Emily returns to one of the same or similar sites, it will retrieve her cookies, find she has shown interest in gardening books in the past, and present her with its line of gardening books. This increases the likelihood of Emily purchasing a book of her liking. This is a way of zeroing in on the right marketing tactics for the right person.

The servers at the web site determine how cookies are actually used. When a user adds items to his shopping cart on a site, such data are usually added to a cookie. Then, when the user is ready to check out and pay for his items, all the data in this specific cookie are extracted and the totals are added.

As stated before, HTTP is a stateless protocol, meaning a web server has no memory of any prior connections. This is one reason to use cookies. They retain the memory between HTTP connections by saving prior connection data to the client’s computer.

For example, if you carry out your banking activities online, your bank’s web server keeps track of your activities through the use of cookies. When you first go to its site and are looking at public information, such as branch locations, hours of operation, and CD rates, no confidential information is being transferred back and forth. Once you make a request to access your bank account, the web server sets up an SSL connection and requires you to send credentials. Once you send your credentials and are authenticated, the server generates a cookie with your authentication and account information in it. The server sends it to your browser, which either saves it to your hard drive or keeps it in memory.

So, suppose you look at your checking account and do some work there and then request to view your savings account information. The web server sends a request to see if you have been properly authenticated for this activity by checking your cookie.

Most online banking software also periodically requests your cookie, to ensure no man-in-the-middle attacks are going on and that someone else has not hijacked the session.

It is also important to ensure that secure connections time out. This is why cookies have timestamps within them. If you have ever worked on a site that has an SSL connection set up for you and it required you to reauthenticate, the reason is because your session has been idle for a while and, instead of leaving a secure connection open, the web server software closed it out.
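
The bank-cookie scenario above can be sketched as a signed, timestamped token; the key name and the 15-minute threshold below are invented for illustration:

```python
# Sketch of a tamper-evident cookie with a timestamp, as in the
# online-banking example: an HMAC signature plus an expiry check.
import hmac, hashlib

SERVER_KEY = b"server-only-secret"       # never leaves the server

def make_cookie(account, issued_at):
    body = f"{account}|{issued_at}".encode()
    sig = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return body + b"|" + sig.encode()

def check_cookie(cookie, now, max_age=900):
    body, _, sig = cookie.rpartition(b"|")
    expected = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig.decode(), expected):
        return False                     # tampered: reject
    issued_at = float(body.split(b"|")[1])
    return now - issued_at <= max_age    # stale: force reauthentication

cookie = make_cookie("acct-42", issued_at=1000.0)
assert check_cookie(cookie, now=1200.0)
assert not check_cookie(cookie, now=5000.0)               # timed out
assert not check_cookie(cookie.replace(b"42", b"66"), now=1200.0)
```

The signature makes the account field tamper-evident, and the embedded timestamp is exactly what lets the server close out idle secure sessions and ask the user to reauthenticate.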

A majority of the data within a cookie is meaningless to any entities other than the servers at specific sites, but some cookies can contain usernames and passwords for different accounts on the Internet. The cookies that contain sensitive information should be encrypted by the server at the site that distributes them, but this does not always happen, and a nosey attacker could find this data on the user’s hard drive and attempt to use it for mischievous activity. Some people who live on the paranoid side of life do not allow cookies to be downloaded to their systems (controlled through browser security controls). Although this provides a high level of protection against different types of cookie abuse, it also reduces their functionality on the Internet. Some sites require cookies because there is specific data within the cookies that the site must utilize correctly in order to provide the user with the services she requested.


http://logicalsecurity-ls.blogspot.com/2009/03/cookies.html

Secure Electronic Transaction

December 31, 2009

Secure Electronic Transaction (SET) is a security technology proposed by Visa and MasterCard to allow for more secure credit card transaction possibilities than what is currently available. SET has been waiting in the wings for full implementation and acceptance as a standard for quite some time. Although SET provides an effective way of transmitting credit card information, businesses and users do not see it as efficient because it requires more parties to coordinate their efforts, more software installation and configuration for each entity involved, and more effort and cost than the widely used SSL method.

SET is a cryptographic protocol and infrastructure developed to send encrypted credit card numbers over the Internet. The following entities would be involved with a SET transaction, which would require each of them to upgrade their software, and possibly their hardware:

Issuer (cardholder’s bank) The financial institution that provides a credit card to the individual.

Cardholder The individual authorized to use a credit card.

Merchant The entity providing goods.

Acquirer (merchant’s bank) The financial institution that processes payment cards.

Payment gateway This processes the merchant payment. It may be an acquirer.

To use SET, a user must enter her credit card number into her electronic wallet software. This information is stored on the user’s hard drive or on a smart card. The software then creates a public key and a private key that are used specifically for encrypting financial information before it is sent.

Let’s say Tanya wants to use her electronic credit card to buy her mother a gift from a web site. When she finds the perfect gift and decides to purchase it, she sends her encrypted credit card information to the merchant’s web server. The merchant does not decrypt the credit card information, but instead digitally signs it and sends it on to its processing bank. At the bank, the payment server decrypts the information, verifies that Tanya has the necessary funds, and transfers the funds from Tanya’s account to the merchant’s account. Then the payment server sends a message to the merchant telling it to finish the transaction, and a receipt is sent to Tanya and the merchant. At each step, an entity verifies a digital signature of the sender and digitally signs the information before it is sent to the next entity involved in the process. This would require all entities to have digital certificates and participate in a PKI.

This is basically a very secure way of doing business over the Internet, but today everyone seems to be happy enough with the security SSL provides. They do not feel motivated enough to move to a different and more encompassing technology. The lack of motivation comes from all of the changes that would need to take place to our current processes and the amount of money these changes would require.


http://logicalsecurity-ls.blogspot.com/2009/03/secure-electronic-transaction.html

Internet Security

December 30, 2009

The Web is not the Internet. The Web runs on top of the Internet, in a sense. The Web is the collection of HTTP servers that hold and process web sites we see. The Internet is the collection of physical devices and communication protocols used to traverse these web sites and interact with them. (These issues were touched upon in Chapter 2.) The web sites look the way they do because their creators used a language that dictates the look, feel, and functionality of the page. Web browsers enable users to read web pages by enabling them to request and accept web pages via HTTP, and the user’s browser converts the language (HTML, DHTML, and XML) into a format that can be viewed on the monitor. The browser is the user’s window to the World Wide Web.

Browsers can understand a variety of protocols and have the capability to process many types of commands, but they do not understand them all. For those protocols or commands the user’s browser does not know how to process, the user can download and install a viewer or plug-in, a modular component of code that integrates itself into the system or browser. This is a quick and easy way to expand the functionality of the browser. However, this can cause serious security compromises, because the payload of the module can easily carry viruses and malicious software that users don’t discover until it’s too late.

Start with the Basics

Why do we connect to the Internet? At first, this seems a basic question, but as we dive deeper into the query, complexity creeps in. We connect to download MP3s, check email, order security books, look at web sites, communicate with friends, and perform various other tasks. But what are we really doing? We are using services provided by a computer’s protocols and software. The services may be file transfers provided by FTP, remote connectivity provided by Telnet, Internet connectivity provided by HTTP, secure connections provided by SSL, and much, much more. Without these protocols, there would be no way to even connect to the Internet.

Management needs to decide what functionality employees should have pertaining to Internet use, and the administrator must implement these decisions by controlling services that can be used inside and outside the network. Services can be restricted in various ways, such as: allowing certain services to run only on a particular system and restricting access to that system; employing a secure version of a service; filtering the use of services; or blocking services altogether. These choices determine how secure the site will be and indicate what type of technology is needed to provide this type of protection.

Let’s go through many of the technologies and protocols that make up the World Wide Web.

HTTP

TCP/IP is the protocol suite of the Internet, and HTTP is the protocol of the Web. HTTP sits on top of TCP/IP. When a user clicks a link on a web page with her mouse, her browser uses HTTP to send a request to the web server hosting that web site. The web server finds the corresponding file to that link and sends it to the user via HTTP. So where is TCP/IP in all of this? The TCP protocol controls the handshaking and maintains the connection between the user and the server, and the IP protocol makes sure the file is routed properly throughout the Internet to get from the web server to the user.

So, the IP protocol finds the way to get from A to Z, TCP makes sure the origin and destination are correct and that no packets are lost along the way, and, upon arrival at the destination, HTTP presents the payload, which is a web page.

HTTP is a stateless protocol, which means the client and web server make and break a connection for each operation. When a user requests to view a web page, that web server finds the requested web page, presents it to the user, and then terminates the connection. If the user requests a link within the newly received web page, a new connection must be set up, the request goes to the web server, and the web server sends the requested item and breaks the connection. The web server never “remembers” the users that ask for different web pages, because it would have to commit a lot of resources to the effort.

HTTP Secure

HTTP Secure (HTTPS) is HTTP running over SSL. (HTTP works at the application layer and SSL works at the transport layer.) Secure Sockets Layer (SSL) uses public key encryption and provides data encryption, server authentication, message integrity, and optional client authentication. When a client accesses a web site, that web site may have both secured and public portions. The secured portion would require the user to be authenticated in some fashion. When the client goes from a public page on the web site to a secured page, the web server will start the necessary tasks to invoke SSL and protect this type of communication.

The server sends a message back to the client, indicating a secure session should be established, and the client in response sends its security parameters. The server compares those security parameters to its own until it finds a match. This is the handshaking phase. The server authenticates to the client by sending it a digital certificate, and if the client decides to trust the server, the process continues. The server can require the client to send over a digital certificate for mutual authentication, but that is rare.

The client generates a session key and encrypts it with the server’s public key. This encrypted key is sent to the web server, and they both use this symmetric key to encrypt the data they send back and forth. This is how the secure channel is established.
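
The key-exchange step can be sketched with textbook-sized RSA numbers (illustration only; real SSL/TLS uses far larger keys, padded encryption, and a negotiated cipher rather than the XOR stand-in below):

```python
# Toy sketch of SSL-style hybrid encryption: the client encrypts a
# session key with the server's public key; both sides then use that
# symmetric key for the bulk of the traffic.
p, q = 61, 53
n = p * q                    # 3233: server's public modulus
e = 17                       # server's public exponent
d = 2753                     # server's private exponent

session_key = 42             # chosen by the client
sent = pow(session_key, e, n)        # encrypted under the public key
received = pow(sent, d, n)           # only the server can recover it
assert received == session_key

# Both sides now share a symmetric key for the rest of the session.
msg = b"account balance"
enc = bytes(b ^ session_key for b in msg)
assert bytes(b ^ session_key for b in enc) == msg
```

This is why the public-key operation happens only once per handshake: the expensive asymmetric step protects a small session key, and the cheap symmetric cipher protects everything after it.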

SSL keeps the communication path open until one of the parties requests to end the session. The session is usually ended when the client sends the server a FIN packet, a TCP-level indication to close out the channel.

SSL requires an SSL-enabled server and browser. SSL provides security for the connection but does not offer security for the data once they are received. This means the data are encrypted while being transmitted, but not after they arrive at the receiving computer. So if a user sends bank account information to a financial institution via a connection protected by SSL, that communication path is protected, but the user must trust the financial institution that receives this information, because at that point SSL’s job is done.

The user can verify that a connection is secure by looking at the URL to see that it includes https://. The user can also check for a padlock or key icon, depending on the browser type, which is shown at the bottom corner of the browser window.

In the protocol stack, SSL lies beneath the application layer and above the network layer. This ensures SSL is not limited to specific application protocols and can still use the communication transport standards of the Internet. Different books and technical resources place SSL at different layers of the OSI model, which may seem confusing at first. But the OSI model is a conceptual construct that attempts to describe the reality of networking. This is like trying to draw nice neat boxes around life—some things don’t fit perfectly and hang over the sides. SSL is actually made up of two protocols: one works at the lower end of the session layer, and the other works at the top of the transport layer. This is why one resource will state that SSL works at the session layer and another resource puts it in the transport layer. For the purposes of the CISSP exam, we’ll use the latter definition: the SSL protocol works at the transport layer.

Although SSL is almost always used with HTTP, it can also be used with other types of protocols. So if you see a common protocol that is followed by an s, that protocol is using SSL to encrypt its data.

Secure HTTP

Though their names are very similar, there is a difference between Secure HTTP (S-HTTP) and HTTP Secure (HTTPS). S-HTTP is a technology that protects each message sent between two computers, while HTTPS protects the communication channel between two computers, messages and all. HTTPS uses SSL and HTTP to provide a protected circuit between a client and server. So, S-HTTP is used if an individual message needs to be encrypted, but if all information that passes between two computers must be encrypted, then HTTPS is used, which is SSL over HTTP.

Source: http://www.logicalsecurity.com/resources/resources_articles.html

Review full Cryptography Chapter at www.LogicalSecurity.com

http://logicalsecurity-ls.blogspot.com/2009/03/internet-security.html

Quantum Cryptography

December 30, 2009

Today, we have very sophisticated and strong algorithms that are more than strong enough for most uses, even financial transactions and exchanging your secret meatloaf recipe. However, some communication data are so critical, and so desired by other powerful entities, that even our current algorithms may be broken. This type of data might involve spy interactions, information warfare, government espionage, and so on. When a whole country wants to break another country’s encryption, a great deal of resources will be put behind the effort, which can put our current algorithms at risk of being broken.

Because of the need to always build a better algorithm, some very smart people have mixed quantum physics and cryptography, which has resulted in a system (if built correctly) that is unbreakable and where any eavesdroppers can be detected. In traditional cryptography, we try to make it very hard for an eavesdropper to break an algorithm and uncover a key, but we cannot detect that an eavesdropper is on the line. In quantum cryptography, however, not only is the encryption very strong, but an eavesdropper can be detected.
Quantum cryptography can be carried out using various methods, so we will walk through one version to give you an idea of how all this works. Let’s say Tom and Kathy are spies and need to send their data back and forth with the assurance it won’t be captured. To do so, they need to establish a shared symmetric encryption key, with an identical copy on each end.

In quantum cryptography, photon polarization is commonly used to represent bits (1 or 0). Photons are the particles that make up light, and they behave as electromagnetic waves; polarization is the orientation of those waves. A photon’s wave can be oriented vertically, horizontally, or diagonally to the left or right. Think of a photon as a jellybean: as it flies through the air, it can be vertical (standing up straight), horizontal (lying on its back), left-handed (tilted to the left), or right-handed (tilted to the right). (This is just to conceptually get your head around the idea of polarization.)

Now Kathy and Tom each have their own photon gun, which they will use to send photons (information) back and forth to each other. They also share a mapping between the polarization of a photon and a binary value. The polarizations can be represented as vertical (|), horizontal (-), left (\), or right (/), and since binary has only two values, there must be some overlap.

In this example, a photon with a vertical (|) polarization maps to the binary value of 0. A left polarization (\) maps to 1, a right polarization (/) maps to 0, and a horizontal polarization (-) maps to 1. This mapping (or encoding) turns a stream of photons into the binary values that make up an encryption key. Tom must have the same mapping to interpret what Kathy sends to him. Tom will use this as his map, so when he receives a photon with the polarization of (\), he will write down a 1. When he receives a photon with the polarization of (|), he will write down a 0. He will do this for the whole key and use these values as the key to decrypt a message Kathy sends him.
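The encoding step above can be sketched as a simple lookup table. The symbols and bit values follow the article's example mapping; note that real quantum key distribution schemes (such as BB84) also involve random measurement-basis choices and a public comparison step, which this sketch omits entirely.

```python
# Tom's mapping from received polarizations to key bits,
# exactly as described in the article's example.
POLARIZATION_TO_BIT = {"|": 0, "\\": 1, "/": 0, "-": 1}

def decode(photons: str) -> list[int]:
    """Turn a stream of received polarizations into key bits."""
    return [POLARIZATION_TO_BIT[p] for p in photons]

# Kathy sends vertical, left, right, horizontal:
bits = decode("|\\/-")
assert bits == [0, 1, 0, 1]
```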

Source: http://www.logicalsecurity.com/resources/resources_articles.html

Review full Cryptography Chapter at www.LogicalSecurity.com

http://logicalsecurity-ls.blogspot.com/2009/03/quantum-cryptography.html

Governmental Involvement in Cryptography

December 30, 2009

In the United States, from the 1960s to the 1980s, exportation of cryptographic mechanisms and equipment was very carefully regulated and monitored. The goal was to make obtaining and using encryption technology harder for terrorists and criminals. Harry Truman created the NSA in 1952, and its main mission was, and still is, to listen in on communications in the interest of national security for the United States. The NSA keeps an extremely low profile, and its activities are highly secret. The NSA also conducts research in cryptology, both to create secure algorithms and to break other cryptosystems to enable eavesdropping and spying.

The government attempted to restrict the use of public cryptography so enemies of the United States could not employ encryption methods that were too strong for it to break. These steps caused tension and controversy between cryptography researchers, vendors, and the NSA pertaining to new cryptographic methods and the public use of them. The fear of those opposed to the restrictions was that if the government controlled all types of encryption and was allowed to listen in on private citizens’ conversations, the obtained information would be misused in “Big Brotherly” ways. Also, if the government had the technology to listen in on everyone’s conversations, the possibility existed that this technology would fall into the wrong hands and be used for the wrong reasons.

At one time a group existed whose duty was to control the export of specific types of weapons and cryptographic products to communist countries; this group was known as the Coordinating Committee for Multilateral Export Controls (COCOM). Because the threat of communism decreased over time, the group was disbanded. Then, in 1996, a group of 33 countries reached an agreement to control exportation of the same types of items to several countries deemed to be “terrorist states.” These countries (Iran, Iraq, Libya, North Korea, Sudan, Cuba, and Syria) were identified as having connections with terrorist groups and activities. The group set up agreed-upon guidelines regarding how to regulate exportation of certain types of weapons and technologies that contained cryptography functionality. In part, this group worked together to ensure “dual-use” products (products that have both civilian and military applications) containing encryption capabilities were not made available to the “terrorist states.” Because one of the main goals of every military is to be able to eavesdrop on its perceived enemies, the group of 33 countries was concerned that if terrorist states were able to obtain strong encryption methods, spying on them would be much harder to accomplish.

Just as the United States has the NSA, different countries have government agencies that are responsible for snooping on the communications of potential enemies, which involves using very powerful systems that can break a certain level of encryption. Since these countries know, for example, that they can break encryption methods that use symmetric keys of up to 56 bits, they will allow these types of products to be exported in an uncontrolled manner. Anything using a symmetric key over 56 bits needs to be controlled, because the governments are not sure they can efficiently crack those codes.

The following outlines the characteristics of specific algorithm types that are considered too dangerous to fall into the hands of the enemy and thus are restricted:

• Symmetric algorithms with key sizes over 56 bits

• Asymmetric algorithms that carry out factorization of an integer with key sizes over 512 bits (such as RSA)

• Asymmetric algorithms that compute discrete logarithms in a field with key sizes over 512 bits (such as El Gamal)

• Asymmetric algorithms that compute discrete logarithms in a group (not in a field) with key sizes over 112 bits (such as ECC)

The Wassenaar Arrangement contains the agreed-upon guidelines that this group of countries came up with, but the decision of whether or not to follow the guidelines has been left up to the individual countries. The United States has relaxed its export controls over the years and today exportation can take place to any country, other than the previously listed “terrorist states,” after a technical review. If the product is an open-source product, then a technical review is not required, but it is illegal to provide this type of product directly to identified terrorist groups and countries. Also, a technical review is not necessary for exportation of cryptography to foreign subsidiaries of U.S. firms.

Source: http://www.logicalsecurity.com/resources/resources_articles.html

Review full Cryptography Chapter at www.LogicalSecurity.com

http://logicalsecurity-ls.blogspot.com/2009/03/governmental-involvement-in.html

Pretty Good Privacy

December 30, 2009

Pretty Good Privacy (PGP) was designed by Phil Zimmermann as a freeware e-mail security program and was released in 1991. It was the first widespread public key encryption program. PGP is a complete cryptosystem that uses cryptographic protection for e-mail and files. It can use RSA public key encryption for key management and the IDEA symmetric cipher for bulk encryption of data, although the user has the option of picking different algorithms for these functions. PGP can provide confidentiality through the IDEA encryption algorithm, integrity through the MD5 hashing algorithm, authentication through public key certificates, and nonrepudiation through cryptographically signed messages. PGP uses its own type of digital certificates rather than those used in a PKI, but the two serve similar purposes.

The user’s private key is generated and encrypted when the application asks the user to randomly type on her keyboard for a specific amount of time. Instead of using passwords, PGP uses passphrases. The passphrase is used to encrypt the user’s private key that is stored on her hard drive.

PGP does not use a hierarchy of CAs, or any type of formal trust certificates, but instead relies on a “web of trust” in its key management approach. Each user generates and distributes his or her public key, and users sign each other’s public keys, which creates a community of users who trust each other. This is different from the CA approach, where no one trusts each other; they only trust the CA. For example, if Mark and Joe want to communicate using PGP, Mark can give his public key to Joe. Joe signs Mark’s key and keeps a copy for himself. Then, Joe gives a copy of his public key to Mark so they can start communicating securely. Later, Mark would like to communicate with Sally, but Sally does not know Mark and does not know if she can trust him. Mark sends Sally his public key, which has been signed by Joe. Sally has Joe’s public key, because they have communicated before, and she trusts Joe. Because Joe signed Mark’s public key, Sally now also trusts Mark and sends her public key and begins communicating with him.

So, basically, PGP is a system of “I don’t know you, but my buddy Joe says you are an all right guy, so I will trust you on Joe’s word.”

Each user keeps in a file, referred to as a key ring, a collection of public keys he has received from other users. Each key in that ring has a parameter that indicates the level of trust assigned to that user and the validity of that particular key. If Steve has known Liz for many years and trusts her, he might have a higher level of trust indicated on her stored public key than on Tom’s, whom he does not trust much at all. There is also a field indicating who can sign other keys within Steve’s realm of trust. If Steve receives a key from someone he doesn’t know, like Kevin, and the key is signed by Liz, he can look at the field that pertains to whom he trusts to sign other people’s keys. If the field indicates that Steve trusts Liz enough to sign another person’s key, Steve will accept Kevin’s key and communicate with him because Liz is vouching for him. However, if Steve receives a key from Kevin and it is signed by untrustworthy Tom, Steve might choose to not trust Kevin and not communicate with him.
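The key-ring logic just described can be sketched as a small data structure: each stored key carries a trust level plus a flag saying whether that person is trusted to vouch for (sign) other people's keys. All class names and fields here are illustrative assumptions, not PGP's actual on-disk format, and real PGP trust computation is considerably more nuanced (e.g., marginal trust can be combined).

```python
# Hedged sketch of a PGP-style key ring with per-key trust fields.
from dataclasses import dataclass, field

@dataclass
class KeyEntry:
    owner: str
    trust: str = "unknown"            # e.g. "full", "marginal", "unknown"
    trusted_introducer: bool = False  # may this person sign others' keys?

@dataclass
class KeyRing:
    entries: dict = field(default_factory=dict)

    def add(self, entry: KeyEntry) -> None:
        self.entries[entry.owner] = entry

    def accept_new_key(self, owner: str, signed_by: str) -> bool:
        """Accept a stranger's key only if a trusted introducer signed it."""
        introducer = self.entries.get(signed_by)
        return bool(introducer and introducer.trusted_introducer)

# Steve's ring: he trusts Liz to introduce others, but not Tom.
ring = KeyRing()
ring.add(KeyEntry("Liz", trust="full", trusted_introducer=True))
ring.add(KeyEntry("Tom", trust="unknown", trusted_introducer=False))

assert ring.accept_new_key("Kevin", signed_by="Liz") is True   # Liz vouches
assert ring.accept_new_key("Kevin", signed_by="Tom") is False  # Tom cannot
```

This captures the Steve/Liz/Tom scenario above: the decision about Kevin's key hinges entirely on who signed it and how much the signer is trusted as an introducer.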

These fields are available for updating and alteration. If one day Steve really gets to know Tom and finds out he is okay after all, he can modify these parameters within PGP and give Tom more trust when it comes to cryptography and secure communication.

Because the web of trust does not have a central leader, such as a CA, certain standardized functionality is harder to accomplish. If Steve were to lose his private key, he would need to notify everyone who trusts his public key that it should no longer be trusted. In a PKI, Steve would only need to notify the CA, and anyone attempting to verify the validity of Steve’s public key would be told not to trust it upon looking at the most recently updated CRL. In the PGP world, this is not as centralized and organized: Steve can send out a key revocation certificate, but there is no guarantee it will reach each user’s key ring file.

PGP is public domain software that uses public key cryptography. It has not been endorsed by the NSA, but because it is a great product and free for individuals to use, it has become somewhat of a de facto encryption standard on the Internet.

Source: http://www.logicalsecurity.com/resources/resources_articles.html

Review full Cryptography Chapter at www.LogicalSecurity.com

http://logicalsecurity-ls.blogspot.com/2009/03/pretty-good-privacy.html