Algorithms available in GNU Crypto

The following table summarizes the current contents of the GNU Crypto library in terms of cryptographic algorithms:

Family / Algorithm Abstract
Symmetric key block cipher
Anubis

A 128-bit block cipher that accepts a variable length key. The cipher is a uniform substitution-permutation network whose inverse only differs from the forward operation in the key schedule. The design of both the round transformation and the key schedule is based upon the Wide Trail strategy and permits a wide variety of implementation tradeoffs.

Khazad

A 64-bit (legacy) block cipher that accepts a 128-bit key. The cipher is a uniform substitution-permutation network whose inverse only differs from the forward operation in the key schedule. The overall cipher design follows the Wide Trail strategy, favors component reuse, and permits a wide variety of implementation tradeoffs.

Rijndael

Rijndael is an iterated block cipher with a variable block length and a variable key length. The block length and the key length can be independently specified to 128, 192 or 256 bits.

Rijndael is the AES (Advanced Encryption Standard).

Serpent

Serpent is a 32-round substitution-permutation network block cipher, operating on 128-bit blocks and accepting keys of 128, 192, and 256 bits in length. At each round the plaintext is XORed with a 128-bit portion of the session key (a 4224-bit key computed from the input key), then one of eight S-boxes is applied, and finally a simple linear transformation is done. Decryption does the exact same thing in reverse order, using the eight inverse S-boxes.

Serpent was designed by Ross Anderson, Eli Biham, and Lars Knudsen as a proposed cipher for the Advanced Encryption Standard.

Serpent can be sped up greatly by replacing S-box substitution with a sequence of binary operations, and the optimal implementation depends upon finding the fastest sequence of binary operations that reproduce this substitution. This implementation uses the S-boxes discovered by Dag Arne Osvik, which are optimized for the Pentium family of processors.

Square

A 128-bit block, 128-bit key cipher. The original design of Square concentrates on the resistance against differential and linear cryptanalysis.

Twofish

Twofish is a 128-bit block cipher, designed by Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris Hall, and Niels Ferguson. It accepts a variable-length key up to 256 bits. The cipher is a 16-round Feistel network with a bijective F function made up of four key-dependent 8-by-8-bit S-boxes, a fixed 4-by-4 maximum distance separable matrix over GF(2^8), a pseudo-Hadamard transform, bitwise rotations, and a carefully designed key schedule.

Twofish can be implemented in hardware in 14,000 gates. The design of both the round function and the key schedule permits a wide variety of tradeoffs between speed, software size, key setup time, gate count, and memory.

Twofish designers have extensively cryptanalysed the algorithm; their best attack breaks 5 rounds with 2^22.5 chosen plaintexts and 2^51 effort.

Blowfish

Blowfish is a symmetric secret-key block cipher. It is a Feistel network, iterating a simple encryption function 16 times. The block size is 64 bits, and the key can be any length up to 448 bits. Although there is a complex initialization phase required before any encryption can take place, the actual encryption of data is very efficient on large microprocessors.

Blowfish is unpatented and license-free, and is available free for all uses.

DES

The Data Encryption Standard. DES is a 64-bit block cipher with a 56-bit key, developed by IBM in the 1970s for the standardization process begun by the National Bureau of Standards (now NIST).

New applications should not use DES, except for compatibility.

TripleDES

Triple-DES, 3DES, or DESede is a combined cipher that uses three iterations of the Data Encryption Standard (DES) cipher to improve the security (at the cost of speed) of plain DES.

Triple-DES runs the DES algorithm three times with three independent 56 bit keys.

To encrypt: C_i = E_k3( E_k2^-1( E_k1( P_i ) ) )
And to decrypt: P_i = E_k1^-1( E_k2( E_k3^-1( C_i ) ) )

The "ede" comes from the encryption operation, which runs Encrypt-Decrypt-Encrypt.
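The EDE composition can be sketched in a few lines. The fragment below is an illustration only: a toy byte-wise cipher (addition mod 256) stands in for DES, since the point is the Encrypt-Decrypt-Encrypt structure, not the underlying primitive (GNU Crypto itself is a Java library).

```python
def enc(k, block):
    # toy byte-wise "cipher" standing in for DES: add key bytes mod 256
    return bytes((b + x) % 256 for b, x in zip(block, k))

def dec(k, block):
    # inverse of enc: subtract key bytes mod 256
    return bytes((b - x) % 256 for b, x in zip(block, k))

def ede_encrypt(k1, k2, k3, p):
    # C_i = E_k3( E_k2^-1( E_k1( P_i ) ) )
    return enc(k3, dec(k2, enc(k1, p)))

def ede_decrypt(k1, k2, k3, c):
    # P_i = E_k1^-1( E_k2( E_k3^-1( C_i ) ) )
    return dec(k1, enc(k2, dec(k3, c)))
```

Note that when k1 = k2 = k3, the inner decryption cancels the first encryption and EDE collapses to a single encryption; this property allowed Triple-DES hardware to interoperate with single DES.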

CAST-128

In the words of one of its designers, Carlisle Adams (the CA in CAST; the ST stands for Stafford Tavares), the algorithm is described as:

"...a DES-like Substitution-Permutation Network (SPN) cryptosystem which appears to have good resistance to differential cryptanalysis, linear cryptanalysis, and related-key cryptanalysis. This cipher also possesses a number of other desirable cryptographic properties, including avalanche, Strict Avalanche Criterion (SAC), Bit Independence Criterion (BIC), no complementation property, and an absence of weak and semi-weak keys."

CAST-128 (a.k.a. CAST5) is a symmetric block cipher with a block size of 8 bytes. It accepts a key size that can vary from 40 bits to 128 bits, in 8-bit increments.

Mode of operation
Counter mode (CTR)

The Counter (CTR) mode is a confidentiality mode that requires a sequence of blocks, called counters, with the property that each block in the sequence is different from every other block. This condition is not restricted to a single message: across all of the messages that are encrypted under the given key, all of the counters must be distinct.
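The idea can be sketched as follows. This is a standalone Python illustration, not the library's Java API; a SHA-256 call stands in for the block cipher's forward function, and the counter blocks are built as nonce || block index so they never repeat under one key.

```python
import hashlib

def ctr_xcrypt(key, nonce, data, block_size=16):
    # encryption and decryption are the same operation: XOR with E_K(counter_i)
    out = bytearray()
    for i in range(0, len(data), block_size):
        counter = nonce + (i // block_size).to_bytes(8, "big")
        # SHA-256 of key||counter stands in for the cipher's forward function
        stream = hashlib.sha256(key + counter).digest()[:block_size]
        chunk = data[i:i + block_size]
        out += bytes(d ^ s for d, s in zip(chunk, stream))
    return bytes(out)
```

Because the keystream depends only on the key and the counters, applying the same function to the ciphertext recovers the plaintext.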

Electronic Codebook (ECB)

The Electronic Codebook (ECB) mode is a confidentiality mode that is defined as follows:

Encryption: C_j = CIPH_K( P_j ) for j = 1..n
Decryption: P_j = CIPH_K^-1( C_j ) for j = 1..n

In ECB encryption, the forward cipher function is applied directly, and independently, to each block of the plaintext. The resulting sequence of output blocks is the ciphertext.

In ECB decryption, the inverse cipher function is applied directly, and independently, to each block of the ciphertext. The resulting sequence of output blocks is the plaintext.
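A minimal sketch of the mode (Python illustration only, with a toy XOR cipher standing in for a real block cipher) also shows ECB's well-known weakness: because blocks are processed independently, identical plaintext blocks produce identical ciphertext blocks.

```python
def ecb_encrypt(cipher, key, blocks):
    # each block is enciphered independently: C_j = CIPH_K(P_j)
    return [cipher(key, b) for b in blocks]

def toy_cipher(key, block):
    # involutory toy cipher (XOR with the key), standing in for a real one
    return bytes(x ^ y for x, y in zip(block, key))
```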

Integer Counter Mode (ICM)

Counter Mode is a way to define a pseudo-random key-stream generator using a block cipher. The key-stream can be used for additive encryption, key derivation, or any other application requiring pseudo-random data. In ICM, the key-stream is logically broken into segments. Each segment is identified with a segment index, and the segments have equal lengths. This segmentation makes ICM especially appropriate for securing packet-based protocols.

Output Feedback Mode (OFB)

The Output Feedback (OFB) mode is a confidentiality mode that features the iteration of the forward cipher on an IV to generate a sequence of output blocks that are exclusive-ORed with the plaintext to produce the ciphertext, and vice versa. The OFB mode requires that the IV is a nonce, i.e., the IV must be unique for each execution of the mode under the given key.
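The iteration can be sketched as below (standalone Python illustration, not the library's API; SHA-256 of key||feedback stands in for the forward cipher function). Each output block is the forward function applied to the previous output block, starting from the IV.

```python
import hashlib

def ofb_xcrypt(key, iv, data, block_size=32):
    # iterate the forward cipher on its own output, starting from the IV;
    # SHA-256 of key||feedback stands in for that forward function here
    out, feedback = bytearray(), iv
    for i in range(0, len(data), block_size):
        feedback = hashlib.sha256(key + feedback).digest()
        chunk = data[i:i + block_size]
        out += bytes(d ^ s for d, s in zip(chunk, feedback))
    return bytes(out)
```

As in CTR mode, encryption and decryption are the same XOR operation, which is why reusing an IV under the same key is fatal.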

CFB

The Cipher Feedback Mode (CFB) is a stream mode that operates on s-bit segments, where 1 <= s <= b and b is the underlying cipher's block size in bits. Encryption is:

I_1 = IV
I_j = LSB( b-s, I_(j-1) ) | C_(j-1); for j = 2..n
O_j = CIPH( K, I_j ); for j = 1..n
C_j = P_j ^ MSB( s, O_j ); for j = 1..n

And decryption is:

I_1 = IV
I_j = LSB( b-s, I_(j-1) ) | C_(j-1); for j = 2..n
O_j = CIPH( K, I_j ); for j = 1..n
P_j = C_j ^ MSB( s, O_j ); for j = 1..n

CFB mode requires an initialization vector (IV), which need not be kept secret.
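The equations can be sketched for the common case s = 8 (one-byte segments). This is a standalone Python illustration, not the library's API; SHA-256 of key||register stands in for CIPH(K, I_j), and the feedback register drops its oldest byte and appends the previous ciphertext byte, exactly as I_j = LSB(b-s, I_(j-1)) | C_(j-1) prescribes.

```python
import hashlib

def cfb8_encrypt(key, iv, plaintext):
    reg, out = iv, bytearray()
    for p in plaintext:
        o = hashlib.sha256(key + reg).digest()   # O_j = CIPH(K, I_j), stand-in
        c = p ^ o[0]                             # C_j = P_j ^ MSB(s, O_j)
        out.append(c)
        reg = reg[1:] + bytes([c])               # shift in the ciphertext byte
    return bytes(out)

def cfb8_decrypt(key, iv, ciphertext):
    reg, out = iv, bytearray()
    for c in ciphertext:
        o = hashlib.sha256(key + reg).digest()
        out.append(c ^ o[0])                     # P_j = C_j ^ MSB(s, O_j)
        reg = reg[1:] + bytes([c])               # same register update as above
    return bytes(out)
```

Note that both directions use only the cipher's forward function; the feedback register is updated with ciphertext in both cases, which is what makes CFB self-synchronizing.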

CBC

The Cipher Block Chaining (CBC) mode introduces feedback into the cipher by XORing the previous ciphertext block with the plaintext block before encipherment. That is, encrypting looks like this:

C_i = E_K( P_i ^ C_(i-1) )

Similarly, decrypting is:

P_i = C_(i-1) ^ D_K( C_i )
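The chaining can be sketched as follows (standalone Python illustration with a toy XOR cipher; not the library's API). Unlike ECB, repeated plaintext blocks no longer produce repeated ciphertext blocks, because each block is mixed with the previous ciphertext before encipherment.

```python
def cbc_encrypt(enc, key, iv, blocks):
    prev, out = iv, []
    for p in blocks:
        # C_i = E_K( P_i ^ C_(i-1) ), with C_0 = IV
        prev = enc(key, bytes(a ^ b for a, b in zip(p, prev)))
        out.append(prev)
    return out

def cbc_decrypt(dec, key, iv, blocks):
    prev, out = iv, []
    for c in blocks:
        # P_i = C_(i-1) ^ D_K( C_i )
        out.append(bytes(a ^ b for a, b in zip(dec(key, c), prev)))
        prev = c
    return out

def toy(key, block):
    # involutory toy cipher (XOR with key): its own inverse, so enc == dec
    return bytes(x ^ y for x, y in zip(block, key))
```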
Padding scheme
PKCS-7

Let B be the number of octets in the data blocks (or segments) and let L be the number of octets in the data string. The data string is padded at the trailing end with B - (L mod B) octets, each of which is the binary representation of B - (L mod B).
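This is a few lines of code (a Python sketch; the library itself exposes this as a Java padding class). Note that an already-aligned string still receives a full block of padding, so unpadding is always unambiguous.

```python
def pkcs7_pad(data, block=16):
    n = block - (len(data) % block)   # B - (L mod B); always between 1 and B
    return data + bytes([n]) * n      # each pad octet encodes the pad length

def pkcs7_unpad(padded):
    return padded[:-padded[-1]]       # last octet says how many to strip
```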

Trailing Bit Complement (TBC)

With TBC, the data string is padded at the trailing end with the complement of the trailing bit of the unpadded message: if the trailing bit is `1,' then `0' bits are appended, and if the trailing bit is `0,' then `1' bits are appended. As few bits are added as are necessary to meet the formatting size requirement.
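A byte-oriented sketch of the idea follows (Python illustration; the scheme as described operates at the bit level, and whether an already-aligned message receives a full padding block is a policy choice this sketch makes by always padding).

```python
def tbc_pad(data, block=8):
    # complement of the trailing *bit*: append 0x00 bytes if it is 1, 0xFF if 0
    fill = 0x00 if (data and data[-1] & 1) else 0xFF
    n = block - (len(data) % block)   # always pads; full block when aligned
    return data + bytes([fill]) * n
```

To unpad, the receiver strips the run of identical trailing octets that differ in their low bit from the octet before them; the complement rule guarantees that run is exactly the padding.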

Message digest (hash)
Whirlpool

A new 512-bit hashing function operating on messages less than 2^256 bits in length. The function structure is designed according to the Wide Trail strategy and permits a wide variety of implementation tradeoffs.

RIPEMD-128

The main difference from RIPEMD-160 is that the hash result and chaining variable are kept at 128 bits (four 32-bit words), and only four rounds are used.

RIPEMD-160

The bit-size of the hash result and chaining variable are increased to 160 bits (five 32-bit words), the number of rounds is increased from three to five, and the two lines are made more different (not only the constants are modified, but also the Boolean functions and the order of the message words).

SHA-160, SHA-256, SHA-384, and SHA-512

The Secure Hash Algorithm (SHA-1) is required for use with the Digital Signature Algorithm (DSA) as specified in the Digital Signature Standard (DSS) and whenever a secure hash algorithm is required for US federal applications.

The properties of the four message digest algorithms specified in FIPS (Federal Information Processing Standards) 180-2, dated August 1st, 2002 and implemented in this library, are given in the following table:

Algorithm   Message Size (bits)   Block Size (bits)   Word Size (bits)   Output Size (bits)   Security (bits)
SHA-1       < 2^64                512                 32                 160                  80
SHA-256     < 2^64                512                 32                 256                  128
SHA-384     < 2^128               1024                64                 384                  192
SHA-512     < 2^128               1024                64                 512                  256
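The output sizes in the table can be cross-checked against any conforming implementation; the fragment below does so with Python's standard hashlib (a verification sketch, unrelated to the library's own Java classes).

```python
import hashlib

# FIPS 180-2 output sizes from the table, keyed by hashlib algorithm name
expected_bits = {"sha1": 160, "sha256": 256, "sha384": 384, "sha512": 512}

def output_bits(name):
    # digest_size is in octets, so multiply by 8 to get bits
    return hashlib.new(name).digest_size * 8
```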
Tiger

Tiger was designed by Ross Anderson and Eli Biham, with the goal of producing a secure, fast hash function that performs especially well on next-generation 64-bit architectures, but is still efficient on 32- and 16-bit architectures.

Tiger processes data in 512-bit blocks and produces a 192-bit digest.

HAVAL

The HAVAL message-digest algorithm has a variable output length and a variable number of rounds. The output length can vary from 128 to 256 bits in increments of 32 bits. The number of rounds can vary from 3 to 5.

By default, this implementation allows HAVAL to be used as a drop-in replacement for MD5—an output size of 128 bits, and 3 rounds.

MD5

The MD5 message-digest algorithm takes as input a message of arbitrary length and produces as output a 128-bit fingerprint or message digest of the input.

MD4

MD4 (a 128-bit output size hash algorithm) was the precursor to the stronger MD5 message-digest algorithm.

While not considered cryptographically secure itself, MD4 is in use in various applications. It is slightly faster than MD5.

RSA Laboratories, in their Bulletin #4, dated November 12, 1996, recommends that MD4 should not be used.

MD2

MD2 takes as input a message of arbitrary length and produces as output a 128-bit fingerprint or message digest of the input. It is intended for digital signature applications, where a large file must be compressed in a secure manner before being signed with a private (secret) key under a public-key cryptosystem such as RSA.

RSA Laboratories, in their Bulletin #4, dated November 12, 1996, recommends to update applications away from MD2 whenever it is practical.

Message authentication code (MAC)
HMAC

HMAC is a mechanism for message authentication using cryptographic hash functions. HMAC can be used with any iterative cryptographic hash function, e.g., MD5, SHA-1, in combination with a secret shared key. The cryptographic strength of HMAC depends on the properties of the underlying hash function.
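The construction is HMAC(K, m) = H( (K ^ opad) || H( (K ^ ipad) || m ) ), as specified in RFC 2104. The sketch below (Python, for illustration; not the library's Java API) builds it by hand from a hash function and checks nothing beyond the standard definition.

```python
import hashlib
import hmac

def hmac_manual(key, msg, hashfn=hashlib.sha256, block_size=64):
    # HMAC(K, m) = H( (K ^ opad) || H( (K ^ ipad) || m ) )   -- RFC 2104
    if len(key) > block_size:
        key = hashfn(key).digest()          # over-long keys are hashed first
    key = key.ljust(block_size, b"\x00")    # then zero-padded to the block size
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    return hashfn(opad + hashfn(ipad + msg).digest()).digest()
```

The nested structure is what ties the MAC to the secret key at both the start and the end of the computation, defeating length-extension tricks that break naive H(key || message) schemes.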

UHash32

UHASH is a keyed hash function, which takes as input a string of arbitrary length, and produces as output a string of fixed length (such as 8 bytes). The actual output length depends on the parameter UMAC-OUTPUT-LEN.

UHASH has been shown to be epsilon-ASU (Almost Strongly Universal), where epsilon is a small (parameter-dependent) real number. Informally, saying that a keyed hash function is epsilon-ASU means that for any two distinct fixed input strings, the two outputs of the hash function with a random key look almost like a pair of random strings. The number epsilon measures how non-random the output strings may be.

UHASH has been designed to be fast by exploiting several architectural features of modern commodity processors. It was specifically designed for use in UMAC. But UHASH is useful beyond that domain, and can be easily adopted for other purposes.

UHASH does its work in three layers. First, a hash function called NH is used to compress input messages into strings which are typically many times smaller than the input message. Second, the compressed message is hashed with an optimized polynomial hash function into a fixed-length 16-byte string. Finally, the 16-byte string is hashed using an inner-product hash into a string of length WORD-LEN bytes. These three layers are repeated (with a modified key) until the outputs total UMAC-OUTPUT-LEN bytes.

UMac32

The UMAC algorithms are parameterized. This means that various low-level choices, like the endian convention and the underlying cryptographic primitive, have not been fixed. One must choose values for these parameters before the authentication tag generated by UMAC (for a given message, key, and nonce) becomes fully-defined. In the UMAC draft the authors provide two collections of parameter settings, and have named the sets UMAC16 and UMAC32—only the latter is implemented in this library. The parameter sets have been chosen based on experimentation and provide good performance on a wide variety of processors.

UMAC16 is designed to excel on processors which provide small-scale SIMD parallelism of the type found in Intel's MMX and Motorola's AltiVec instruction sets, while UMAC32 is designed to do well on processors with good 32- and 64-bit support. UMAC32 may take advantage of SIMD parallelism in future processors.

UMAC has been designed to allow implementations which accommodate on-line authentication. This means that pieces of the message may be presented to UMAC at different times (but in correct order) and an on-line implementation will be able to process the message correctly without the need to buffer more than a few dozen bytes of the message.

Implementation Note: Currently, there are no test vectors against which an implementation of this algorithm can be validated. The validity test performed in this implementation relies on pre-computed values generated by itself. This is not a true test of conformance.

TMMH/16(v1)

The Truncated Multi-Modular Hash Function (TMMH) is a universal hash function suitable for message authentication in the Wegman-Carter paradigm, as in the Universal Security Transform (UST). It is simple, quick, and especially appropriate for Digital Signal Processors and other processors with a fast multiply operation, though a straightforward implementation requires storage equal in length to the largest message to be hashed.

TMMH is a simple hash function which maps a key and a message to a hash value. There are two versions of TMMH: TMMH/16 (the one implemented in this library) and TMMH/32. TMMH/16 uses sixteen bit unsigned words as a basic data unit, while TMMH/32 uses thirty-two bit unsigned words. TMMH can be used as a message authentication code, as described in Section 5 of its draft document.

Pseudo-random number generator (PRNG)
Integer Counter Mode (ICM) key-stream generator

Counter Mode is a way to define a pseudo-random key-stream generator using a block cipher. The key-stream can be used for additive encryption, key derivation, or any other application requiring pseudo-random data.

Generic hash-based generator

A PRNG configurable with a hash function and a seed. The generator continuously updates its context with the result of each request for random bytes.

UMAC key-stream generator

From the UMAC draft:

"Stretching the user-supplied key into pseudo-random bits used internally by UMAC is done with a key-derivation function (UMacGenerator). In this section we define a UMacGenerator which is efficiently instantiated with a block cipher. The Advanced Encryption Standard (AES) is used in output-feedback mode to produce the required bits."

ARCFour

RC4 is a stream cipher developed by Ron Rivest. It remained a trade secret of RSA Data Security, Inc. until 1994, when it was posted anonymously to a mailing list. This version is a descendant of that code; since there is no proof that the leaked version was in fact RC4, and because "RC4" is a trademark, it is called "ARCFOUR", short for Allegedly RC4.

To use this implementation as a stream cipher, one would do (for both encryption, and decryption):

    out = in ^ arcfour.nextByte();
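The algorithm itself is compact enough to sketch in full: a key-scheduling pass permutes a 256-entry state table, then the generation loop swaps entries and emits one keystream byte per step. This is a standalone Python illustration (function names are ours, not the library's); the published test vector for key "Key" and plaintext "Plaintext" confirms it matches the leaked algorithm.

```python
def arcfour(key):
    # key-scheduling algorithm (KSA): permute the state under the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # pseudo-random generation algorithm (PRGA): one keystream byte per step
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def arcfour_xcrypt(key, data):
    # XOR with the keystream: the same call encrypts and decrypts
    ks = arcfour(key)
    return bytes(b ^ next(ks) for b in data)
```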
PKCS#5 KDF2 generator

PBKDF2 applies a pseudo-random function to derive keys. The length of the derived key is essentially unbounded; however, the maximum effective search space for the derived key may be limited by the structure of the underlying pseudo-random function (which in this implementation is an HMAC algorithm).

A key derivation function produces a derived key from a base key and other parameters. In a password-based key derivation function, the base key is a user-supplied password and the other parameters are a salt value and an iteration count.
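In code, the whole interface reduces to those inputs. The sketch below uses Python's standard-library PBKDF2 (illustration only; GNU Crypto's own generator is a Java class) and is checked against the published RFC 6070 test vector for PBKDF2-HMAC-SHA1.

```python
import hashlib

def derive_key(password, salt, iterations, dklen, prf="sha256"):
    # PBKDF2 from PKCS #5 v2.0, with HMAC-<prf> as the pseudo-random function;
    # the iteration count deliberately slows down brute-force guessing
    return hashlib.pbkdf2_hmac(prf, password, salt, iterations, dklen)
```

The salt ensures two users with the same password get different derived keys, and raising the iteration count scales the attacker's cost linearly.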

Digital signature scheme
Digital Signature Standard (DSS)

This Standard specifies a Digital Signature Algorithm (DSA) appropriate for applications requiring a digital rather than written signature. The DSA digital signature is a pair of large numbers represented in a computer as strings of binary digits. The digital signature is computed using a set of rules (i.e., the DSA) and a set of parameters such that the identity of the signatory and integrity of the data can be verified. The DSA provides the capability to generate and verify signatures. Signature generation makes use of a private key to generate a digital signature. Signature verification makes use of a public key which corresponds to, but is not the same as, the private key. Each user possesses a private and public key pair. Public keys are assumed to be known to the public in general. Private keys are never shared. Anyone can verify the signature of a user by employing that user's public key. Signature generation can be performed only by the possessor of the user's private key.

RSA EMSA-PSS

RSA-PSS is a public-key encryption scheme combining the RSA algorithm with the Probabilistic Signature Scheme (PSS) encoding method. The inventors of RSA are Ronald L. Rivest, Adi Shamir, and Leonard Adleman, while the inventors of the PSS encoding method are Mihir Bellare and Phillip Rogaway. During efforts to adopt RSA-PSS into the P1363a standards effort, certain adaptations to the original version of RSA-PSS were made by Mihir Bellare and Phillip Rogaway and also by Burt Kaliski (the editor of IEEE P1363a) to facilitate implementation and integration into existing protocols.

RSA PKCS#1 v1.5

This algorithm combines the RSA signature (and verification) primitives with the EMSA-PKCS1-v1_5 encoding method.

The length of messages on which this algorithm can operate is either unrestricted or constrained by a very large number, depending on the hash function underlying the EMSA-PKCS1-v1_5 method.

Six hash functions are supported in this library: MD2, MD4, SHA-160, SHA-256, SHA-384, and SHA-512.

Simple Authentication and Security Layer (SASL)
PLAIN

Clear-text passwords are simple, interoperate with almost all existing operating system authentication databases, and are useful for a smooth transition to a more secure password-based authentication mechanism. The drawback is that they are unacceptable for use over an unencrypted network connection.

ANONYMOUS

The mechanism consists of a single message from the client to the server. The client sends optional trace information in the form of a human readable string. The trace information should take one of three forms: an Internet email address, an opaque string which does not contain the @ character and can be interpreted by the system administrator of the client's domain, or nothing.

CRAM-MD5

This mechanism has the advantage over some possible alternatives of not requiring that the server maintain information about email logins on a per-login basis. While mechanisms that do require such per-login history records may offer enhanced security, protocols that may have several connections open between a given client and server more or less simultaneously can make their implementation particularly challenging.

SRP

The Secure Remote Password (SRP) is a password-based, zero-knowledge, authentication and key-exchange protocol developed by Thomas Wu. It has good performance, is not plaintext-equivalent and maintains perfect forward secrecy. It provides authentication (optionally mutual authentication) and the negotiation of a shared context key.

Key agreement protocols

Diffie-Hellman (basic version)

The basic version of the Diffie-Hellman key agreement is described as follows:

  • An appropriate prime p and generator g of Z_p*, 2 <= g <= p-2, are selected and published.
  • A and B each send the other one message over an open channel; as a result, they both can then compute a shared secret key K which they can use to protect their future communication.
  • A chooses a random secret x, 1 <= x <= p-2, and sends B message (1), which is g^x mod p.
  • B chooses a random secret y, 1 <= y <= p-2, and sends A message (2), which is g^y mod p.
  • B receives message (1) and computes the shared key as K = (g^x)^y mod p.
  • A receives message (2) and computes the shared key as K = (g^y)^x mod p.
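The steps above can be sketched directly (a standalone Python illustration with toy parameters, not the library's API; real deployments use primes of 2048 bits or more):

```python
import secrets

def dh_exchange(p, g):
    x = secrets.randbelow(p - 2) + 1   # A's secret, 1 <= x <= p-2
    y = secrets.randbelow(p - 2) + 1   # B's secret, 1 <= y <= p-2
    msg1 = pow(g, x, p)                # message (1): g^x mod p, A -> B
    msg2 = pow(g, y, p)                # message (2): g^y mod p, B -> A
    k_a = pow(msg2, x, p)              # A computes (g^y)^x mod p
    k_b = pow(msg1, y, p)              # B computes (g^x)^y mod p
    return k_a, k_b
```

Both parties arrive at the same K because (g^x)^y = (g^y)^x mod p, while an eavesdropper sees only g^x and g^y.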
ElGamal (half-certified Diffie-Hellman)

The ElGamal key agreement is described as follows:

  • A sends to B a single message allowing one-pass key agreement.
  • A obtains an authentic copy of B's public key (p, g, y_b), where y_b = g^(x_b).
  • A chooses a random integer x, 1 <= x <= p-2, and sends B the message g^x. A computes the shared secret key K as (y_b)^x.
  • B computes the same key K on receipt of the previous message as (g^x)^(x_b).
SRP-6

The Secure Remote Password (SRP) key agreement, also known as SRP-6, is a zero-knowledge protocol, designed by Thomas J. Wu.

SASL-SRP

The version of SRP-6 as used in the proposed SASL-SRP mechanism. The difference from the basic version is that the N and g parameters are part of the exchange.

Experimental implementations

UST

The Universal Security Transform (UST) is a cryptographic transform which uses a keystream generator (which can generate keystream segments in arbitrary order) and a universal hash function to provide confidentiality protection, integrity protection, and replay detection security services. This transform is efficient, provably secure, and is appropriate for network security.

Implementation Note: Currently, there are no test vectors against which an implementation of this algorithm can be validated. The validity test performed in this implementation relies on pre-computed values generated by itself. This is not a true test of conformance.




Copyright © 2001, 2002, 2003, 2004 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

Verbatim copying and distribution of this entire article is permitted in any medium, provided this notice is preserved.

Last Modified: $Date: 2006/11/25 10:32:10 $ $Author: ramprasadb $