This is a widely-used block mode. Before encrypting a block, we XOR the data
with the encrypted bytes of the *previous* block. (In a moment, we'll come back to
the issue of what happens with the *first* block, where there's no previous
encrypted block.) Notice that in this mode, **every block depends on the previous
block**, so we **cannot parallelise** the encryption (decryption, by contrast,
*can* be parallelised, since all of the ciphertext blocks are available up front). At
present, few people implement multithreaded encryption, but it is likely to
become a more common requirement in the future.
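
As a minimal runnable sketch of the chaining step, using the JDK's
`javax.crypto` AES primitive (the `xor`, `encryptBlock` and `encryptCbc` names
here are our own, not a standard API):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class CbcSketch {

    // XOR two equal-length byte arrays.
    static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
        return out;
    }

    // Raw, single-block AES: "ECB" over exactly one block is just the bare cipher.
    static byte[] encryptBlock(byte[] block, byte[] key) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        return c.doFinal(block);
    }

    // CBC chaining: XOR each plaintext block with the previous *ciphertext*
    // block (the IV stands in for the first), then encrypt the result.
    static byte[] encryptCbc(byte[] data, byte[] key, byte[] iv) throws Exception {
        ByteArrayOutputStream bout = new ByteArrayOutputStream();
        byte[] prev = iv;
        for (int i = 0; i < data.length / 16; i++) {
            byte[] block = Arrays.copyOfRange(data, i * 16, i * 16 + 16);
            prev = encryptBlock(xor(prev, block), key);
            bout.write(prev, 0, prev.length);
        }
        return bout.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16];          // all-zero key: illustration only!
        byte[] iv = new byte[16];
        byte[] plaintext = new byte[32];    // two identical all-zero blocks
        byte[] ct = encryptCbc(plaintext, key, iv);
        // Thanks to the chaining, identical plaintext blocks encrypt differently:
        System.out.println(Arrays.equals(
                Arrays.copyOfRange(ct, 0, 16),
                Arrays.copyOfRange(ct, 16, 32)));   // prints: false
    }
}
```

The `main` method illustrates the point of the chaining: two identical
plaintext blocks produce two different ciphertext blocks, which would not be
the case if each block were encrypted independently.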

### Output feedback mode (OFB)

In this mode, the key stream is created by continually running the last block
of key stream through the block cipher. So we start with some initial block (again,
we'll come back to that). Then, we encrypt that block to get the first block of
key stream. The the first block of plaintext is then XORed with this block of key stream.
Then we run the key stream through the encryption routine *again* to get the
next block of key stream, and the whole cycle repeats. In rough Java code, it would
look something like this (we'll see that Java actually allows us to specify the
block mode "properly": this is just to help understand what's going on):

```java
private byte[] encrypt(byte[] data, byte[] key) {
    // The key stream register starts out as some initial block...
    byte[] tempBlock = new byte[16];
    initialise(tempBlock);
    ByteArrayOutputStream bout = new ByteArrayOutputStream();
    int noBlocks = data.length / 16;
    for (int i = 0; i < noBlocks; i++) {
        // ...and is run back through the block cipher to produce
        // the next block of key stream...
        tempBlock = encryptAES(tempBlock, key);
        // ...which is then XORed with the next block of plaintext.
        byte[] dataSection = section(data, i * 16, i * 16 + 16);
        byte[] buf = xor(tempBlock, dataSection);
        bout.write(buf, 0, buf.length);
    }
    return bout.toByteArray();
}
```
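
One pleasant consequence of OFB is that decryption is the *same* operation as
encryption: we regenerate the same key stream and XOR it back out. A runnable
sketch using the JDK's `javax.crypto` AES primitive (the `ofb` helper name is
ours, not a standard API):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class OfbSketch {

    // One pass of OFB: each key stream block is the AES encryption of the
    // previous key stream block. XORing with the key stream both encrypts
    // and decrypts, so the same method does both jobs.
    static byte[] ofb(byte[] data, byte[] key, byte[] iv) throws Exception {
        Cipher raw = Cipher.getInstance("AES/ECB/NoPadding");
        raw.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        byte[] out = new byte[data.length];
        byte[] ks = iv;
        for (int i = 0; i < data.length / 16; i++) {
            ks = raw.doFinal(ks);               // next block of key stream
            for (int j = 0; j < 16; j++)
                out[i * 16 + j] = (byte) (data[i * 16 + j] ^ ks[j]);
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16], iv = new byte[16];   // zeros: illustration only!
        byte[] msg = "sixteen byte msg".getBytes();
        byte[] ct = ofb(msg, key, iv);
        // Applying the same operation again recovers the plaintext.
        System.out.println(new String(ofb(ct, key, iv)));  // prints: sixteen byte msg
    }
}
```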

Note that at some point, after encrypting enough data,
the output of the encryption will by chance be identical to a previous block of
key stream. At that point, we'll hit a "cycle", and the encryption will be insecure
for the remainder of the stream. In practice, the amount of data we must encrypt for
this to happen is huge by today's standards. With a 128-bit block size, after
encrypting 2^{64} blocks, there's a roughly 50% chance that we'll have hit a cycle;
after 2^{63} blocks, a 12.5% chance; after 2^{62} blocks, a
3.1% chance etc (for each power of 2 less, divide the probability by four).
With *current* amounts of data, this is not a realistic risk in most cases. For
example, after encrypting about 17 billion gigabytes (2^{60} blocks),
there'll be a roughly 0.2% chance of a cycle.
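
The figures above come from the standard "birthday" approximation: after *n*
blocks, the chance of a collision among 128-bit blocks is roughly
*n*² / 2^{129}, so halving *n* divides the probability by four. A quick sketch
of the arithmetic (the `collisionChance` helper is ours):

```java
public class CycleChance {

    // Birthday approximation: with b-bit blocks, after n = 2^log2Blocks blocks
    // the chance of a key stream collision is roughly n^2 / 2^(b+1).
    static double collisionChance(int log2Blocks) {
        final int BLOCK_BITS = 128;   // assuming AES's 128-bit block
        return Math.pow(2, 2.0 * log2Blocks - (BLOCK_BITS + 1));
    }

    public static void main(String[] args) {
        System.out.println(collisionChance(64));  // 0.5
        System.out.println(collisionChance(63));  // 0.125
        System.out.println(collisionChance(62));  // 0.03125
        System.out.println(collisionChance(60));  // ~0.00195, i.e. about 0.2%
    }
}
```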

### Counter mode (CTR)

In **counter mode**, we create the key stream by **encrypting a series
of incrementing numbers**. In common implementations, we pick some starting value *n*,
which could be totally random, or could be the number that we got up to after
the last set of data we encrypted with this key.
Then, we encrypt *n + 1* and use the resulting encrypted block as the
key stream for XORing the first block of plaintext. The next block of plaintext is
then XORed with the encryption of *n + 2*, etc. Again, it may sound surprising
that this is secure, but remember that our block cipher produces an essentially
random output: even though the two blocks that we are encrypting differ in a trivial
way, the *encrypted* blocks will be totally, unpredictably different to
somebody without the key. (If they're not, our encryption algorithm is not secure.)

Counter mode encryption has the distinct advantage that we can
**encrypt and decrypt multiple blocks in parallel**: since the *i*th
key stream block will be the encryption of *n + i*, we can calculate the
key stream of any future block at any time.
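
As a sketch of this random-access property, again using the JDK's
`javax.crypto` primitives (`keystreamBlock` is our own helper, the counter
arithmetic is deliberately simplistic, and the comparison assumes the JDK's
CTR implementation follows the standard NIST counter layout):

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CtrSketch {

    // Key stream block i is the raw AES encryption of (counter + i), so any
    // block can be computed on its own -- this is what makes CTR parallelisable.
    static byte[] keystreamBlock(byte[] key, byte[] counter, int i) throws Exception {
        byte[] ctr = counter.clone();
        ctr[15] += i;   // simplistic add: fine while the last byte doesn't overflow
        Cipher raw = Cipher.getInstance("AES/ECB/NoPadding");
        raw.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        return raw.doFinal(ctr);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16], counter = new byte[16];   // zeros: demo only!
        byte[] plaintext = "0123456789abcdef0123456789abcdef".getBytes();

        // Manual CTR: XOR block i with key stream block i -- in any order we like.
        byte[] manual = new byte[32];
        for (int i = 0; i < 2; i++) {
            byte[] ks = keystreamBlock(key, counter, i);
            for (int j = 0; j < 16; j++)
                manual[i * 16 + j] = (byte) (plaintext[i * 16 + j] ^ ks[j]);
        }

        // The JDK's built-in CTR mode produces the same bytes.
        Cipher ctr = Cipher.getInstance("AES/CTR/NoPadding");
        ctr.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                 new IvParameterSpec(counter));
        System.out.println(Arrays.equals(manual, ctr.doFinal(plaintext)));
    }
}
```

Notice that nothing in `keystreamBlock` depends on any previous block: each
invocation could run on a different thread.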

For counter mode to be secure,
it is crucially important that we **never re-use the same counter value
with the same key**.
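
To see why, consider what happens if we *do* reuse a counter value: XORing the
two resulting ciphertexts makes the key stream cancel out. A runnable sketch
(the `ctrEncrypt` helper name is ours):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CounterReuse {

    static byte[] ctrEncrypt(byte[] data, byte[] key, byte[] counter) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
               new IvParameterSpec(counter));
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16], counter = new byte[16];
        byte[] m1 = "attack at midday".getBytes();
        byte[] m2 = "retreat at once!".getBytes();
        byte[] c1 = ctrEncrypt(m1, key, counter);
        byte[] c2 = ctrEncrypt(m2, key, counter);   // same key AND counter: bad!

        // The key stream cancels out: c1 XOR c2 == m1 XOR m2, so an eavesdropper
        // learns the XOR of the two plaintexts without ever seeing the key.
        boolean cancels = true;
        for (int i = 0; i < 16; i++)
            if ((byte) (c1[i] ^ c2[i]) != (byte) (m1[i] ^ m2[i])) cancels = false;
        System.out.println(cancels);   // prints: true
    }
}
```

The XOR of two plaintexts leaks a great deal: if the attacker knows or guesses
one message, they recover the other outright.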

## Next: the Initialisation Vector (IV)