Since V(n,q) has a metric defined on it, it makes sense to talk about spheres centered at a
vector with a given radius. Thus B(x,r) = {y in V(n,q) | d(x,y) <= r} is the sphere of radius r
centered at x. The *covering radius* of a code C is the smallest radius s so that

∪_{c in C} B(c,s) = V(n,q),

i.e., every vector of the space is contained in some (at least one) sphere of radius s centered
at a code word. The *packing radius* of a code C is the largest radius t so that the spheres of
radius t centered at the code words are disjoint. Clearly, t <= s. When t = s, we say that C is a
*perfect code*. We will see some examples of perfect codes in the next section.
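To make these definitions concrete, here is a small brute-force Python sketch (the helper names are my own, for illustration) that computes both radii for the binary repetition code of length 3. It uses the standard fact that spheres of radius t about the codewords are disjoint exactly when 2t < d, so the packing radius is floor((d-1)/2).

```python
from itertools import product

def hamming_dist(x, y):
    """Number of coordinates in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def covering_radius(code, n, q=2):
    """Smallest s such that every vector of V(n,q) lies in some
    sphere B(c, s) centered at a codeword c."""
    return max(min(hamming_dist(v, c) for c in code)
               for v in product(range(q), repeat=n))

def packing_radius(code):
    """Largest t such that the spheres of radius t about the codewords
    are disjoint; this is floor((d - 1)/2) for minimum distance d."""
    d = min(hamming_dist(c1, c2)
            for c1 in code for c2 in code if c1 != c2)
    return (d - 1) // 2

# The binary repetition code of length 3: t = s = 1, so it is perfect.
rep3 = [(0, 0, 0), (1, 1, 1)]
print(covering_radius(rep3, 3))  # 1
print(packing_radius(rep3))      # 1
```

The brute force over all q^n vectors is feasible only for very small n, but it follows the definitions exactly.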

The minimum distance d of a perfect code must be odd. If d were even, there would be a vector at equal distance d/2 from two codewords, and spheres around those codewords could not both contain this vector and remain disjoint, so we could not have t = s. So, d = 2e + 1, and it is easy to see that for a perfect code t = s = e. Furthermore, we can count the number of vectors in a sphere of radius e and obtain:

**Proposition 1**: A q-ary (n,M,d)-code is perfect if and only if d = 2e + 1 and

M [ C(n,0) + C(n,1)(q-1) + C(n,2)(q-1)^{2} + ... + C(n,e)(q-1)^{e} ] = q^{n},

where C(n,i) denotes the binomial coefficient "n choose i".
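The counting condition of Proposition 1 is easy to check numerically; the following Python sketch (function names are mine, illustrative) verifies it for the parameters of the Hamming codes that appear later in these notes:

```python
from math import comb

def sphere_size(n, e, q):
    """Number of vectors of V(n,q) in a sphere of radius e."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(e + 1))

def is_perfect(n, M, d, q):
    """Test the condition of Proposition 1:
    d = 2e + 1 and M * sphere_size(n, e, q) = q^n."""
    if d % 2 == 0:
        return False
    e = (d - 1) // 2
    return M * sphere_size(n, e, q) == q ** n

print(is_perfect(7, 16, 3, 2))        # True  (binary (7,4) Hamming code)
print(is_perfect(13, 3 ** 10, 3, 3))  # True  (ternary (13,10) Hamming code)
print(is_perfect(5, 9, 3, 3))         # False (a ternary (5,9,3)-code is not perfect)
```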
Recall that for a linear (n,k)-code C, the parity-check matrix for C is the generator matrix H
of the dual code C^{perp}. Furthermore, we can use H to determine the codewords of C by

x in C if and only if Hx^{t} = 0.

(Note that this is just the transpose of the condition given in earlier notes). The following theorem relates a property of H to the minimum distance of C. We are assuming that the code is defined over an alphabet F which is a finite field (this permits the algebraic operations to be performed).

**Theorem 1** : Let H be a parity-check matrix for a linear (n,k)-code C defined over F. Then
every set of s-1 columns of H are linearly independent if and only if C has minimum
distance at least s.

*Proof*: First assume that every set of s-1 columns of H are linearly independent over F. Let c
= (c_{1}c_{2}...c_{n}) be a non-zero codeword and let **h_{1}, h_{2}, ..., h_{n}** be the columns of H. Then, since H is the parity check matrix, Hc^{t} = 0, i.e.,

c_{1}**h_{1}** + c_{2}**h_{2}** + ... + c_{n}**h_{n}** = 0.

The weight of c, w(c) is the number of non-zero components of c. If w(c) <= s - 1, then we have a nontrivial linear combination of less than s columns of H which sums to 0. This is not possible by the hypothesis that every set of s - 1 or fewer columns of H are linearly independent. Therefore, w(c) >= s, and since c is an arbitrary non-zero codeword of the linear code C it follows that the minimum non-zero weight of a codeword is >= s. So, by a previous theorem, the minimum distance of C is >= s.

To prove the converse, assume that C has minimum distance at least s. Suppose that some
set of t < s columns of H are linearly dependent. Without loss of generality, we may
assume that these columns are **h_{1}, h_{2}, ..., h_{t}**. Then there exist scalars a_{1}, a_{2}, ..., a_{t} in F, not all zero, such that

a_{1}**h_{1}** + a_{2}**h_{2}** + ... + a_{t}**h_{t}** = 0.

Construct a vector c having a_{i} in position i, 1 <= i <= t, and 0's elsewhere. By construction,
this c is a non-zero vector in C since Hc^{t} = 0. But w(c) <= t < s. This is a contradiction,
since C has minimum distance at least s and hence every non-zero codeword in C has weight at
least s. We conclude that every set of s-1 or fewer columns of H is linearly independent.

It follows easily from the theorem that a linear code C with parity-check matrix H has minimum distance (exactly) d if and only if every set of d-1 columns of H are linearly independent, and some set of d columns are linearly dependent. Hence this theorem could be used to determine the minimum distance of a linear code, given a parity-check matrix.
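Equivalently, the minimum distance is the minimum weight of a non-zero vector in the kernel of H. For small codes this can be checked directly; a brute-force Python sketch (the function name is mine, illustrative):

```python
from itertools import product

def min_distance_from_H(H, q):
    """Minimum distance of the code with parity-check matrix H over GF(q),
    computed as the minimum weight of a non-zero kernel vector.
    Brute force over q^n vectors -- feasible only for small n."""
    n = len(H[0])
    best = None
    for x in product(range(q), repeat=n):
        if not any(x):
            continue
        if all(sum(h * xi for h, xi in zip(row, x)) % q == 0 for row in H):
            w = sum(xi != 0 for xi in x)
            if best is None or w < best:
                best = w
    return best

# Parity-check matrix of the binary length-3 repetition code: d = 3.
H = [[1, 1, 0],
     [0, 1, 1]]
print(min_distance_from_H(H, 2))  # 3
```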

It is also possible to use this theorem to construct single-error correcting codes (i.e., those
with a minimum distance of 3). To construct such a code, we need only construct a matrix H
such that no 2 or fewer columns are linearly dependent. The only way a single column can be
linearly dependent is if it is the zero column. Suppose two non-zero columns **h_{i}** and **h_{j}** are linearly dependent, i.e., there exist non-zero scalars a and b such that

a**h_{i}** + b**h_{j}** = 0.

This implies that

**h_{j}** = -b^{-1}a**h_{i}**,

meaning that **h_{j}** is a scalar multiple of **h_{i}**. Thus, H gives a code of minimum distance at least 3 precisely when its columns are non-zero and no column is a scalar multiple of another.
**Example**: Over the field F = GF(3), consider the matrix

H = [ 1 0 0 1 2 ]
    [ 0 2 0 0 1 ]
    [ 0 0 1 1 0 ]

The columns of H are non-zero and no column is a scalar multiple of any other column. Hence, H is the parity-check matrix for a (5,2)-code over F of distance at least 3.
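The column condition claimed for this matrix can be verified mechanically; a Python sketch (helper name mine, for illustration) that checks it over GF(q):

```python
def columns_ok(H, q):
    """Theorem 1 condition for distance >= 3: every column of H is non-zero
    and no column is a scalar multiple of another (over GF(q))."""
    cols = list(zip(*H))
    if any(not any(c) for c in cols):
        return False  # a zero column by itself is linearly dependent
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            # is cols[j] == a * cols[i] for some non-zero scalar a?
            if any(all((a * x - y) % q == 0 for x, y in zip(cols[i], cols[j]))
                   for a in range(1, q)):
                return False
    return True

H = [[1, 0, 0, 1, 2],
     [0, 2, 0, 0, 1],
     [0, 0, 1, 1, 0]]
print(columns_ok(H, 3))  # True
```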

When working with linear codes it is often desirable to be able to convert from the generator matrix to the parity-check matrix and vice-versa. This is easily done.

**Theorem 2**: If G = [I_{k} A] is the generator matrix (in standard form) for the (n,k)-code C, then
H = [-A^{t} I_{n-k}] is the parity check matrix for C.

*Proof*: We must show that H is a generator matrix for C^{perp}. Now GH^{t} = I_{k}(-A) + A I_{n-k} = -A + A = 0,
implying that the rows of H are orthogonal to the rows of G; thus span(H), the row space of
H, is contained in C^{perp}.

Consider any x in C^{perp} where x = (x_{1} x_{2} ... x_{n}), and let

y = x - (x_{k+1}**r_{1}** + x_{k+2}**r_{2}** + ... + x_{n}**r_{n-k}**),

where **r_{i}** is the ith row of H. Since x in C^{perp} and each **r_{i}** in C^{perp}, we have y in C^{perp}. Because of the I_{n-k} block, **r_{i}** is the only row of H with a non-zero entry in position k+i, so the last n-k coordinates of y are all 0. Now y is orthogonal to each row **g_{j}** of G; since G = [I_{k} A], the jth coordinate is the only one of the first k coordinates of **g_{j}** that is non-zero, and all other non-zero coordinates of **g_{j}** lie among the last n-k positions, where y vanishes. Hence 0 = y . **g_{j}** = y_{j} for 1 <= j <= k, so y = 0, i.e.,

x = x_{k+1}**r_{1}** + x_{k+2}**r_{2}** + ... + x_{n}**r_{n-k}**.
Hence, x in span(H) and we have C^{perp} contained in span(H). Thus, span(H) = C^{perp} and so H is a
generator matrix of C^{perp} .
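The conversion of Theorem 2 is purely mechanical; a Python sketch (function name mine, illustrative) that builds H = [-A^t I_{n-k}] from G = [I_k A] over GF(q):

```python
def parity_check_from_generator(G, q):
    """Theorem 2: given G = [I_k  A] in standard form over GF(q),
    return H = [-A^t  I_{n-k}]."""
    k, n = len(G), len(G[0])
    A = [row[k:] for row in G]  # the k x (n-k) block A
    H = []
    for i in range(n - k):
        row = [(-A[j][i]) % q for j in range(k)]           # row i of -A^t
        row += [1 if m == i else 0 for m in range(n - k)]  # row i of I_{n-k}
        H.append(row)
    return H

# A small binary example (matrix chosen for illustration only).
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
H = parity_check_from_generator(G, 2)
print(H)  # [[1, 0, 1, 0], [1, 1, 0, 1]]
# GH^t = 0: every row of G is orthogonal to every row of H.
assert all(sum(g * h for g, h in zip(gr, hr)) % 2 == 0 for gr in G for hr in H)
```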

**Example (cont.) **: To look at the code we have previously constructed, it would be
convenient to have the generator matrix. Since H is the generator matrix for C^{perp} , if we apply
the last theorem we can get the parity-check matrix for C^{perp} which is the generator matrix for
C. To this end we perform row operations on H to put it into standard form H'.

H' = [ 1 0 0 1 2 ]     A = [ 1 2 ]     G = [ 2 0 2 1 0 ]
     [ 0 1 0 0 2 ]         [ 0 2 ]         [ 1 1 0 0 1 ]
     [ 0 0 1 1 0 ]         [ 1 0 ]

We can now take all linear combinations (over GF(3)) of the rows of G to write out the 9 codewords of C. With their weights they are:

| Codeword | Weight |
|----------|--------|
| 00000    | 0      |
| 20210    | 3      |
| 11001    | 3      |
| 10120    | 3      |
| 22002    | 3      |
| 01211    | 4      |
| 21121    | 5      |
| 12212    | 5      |
| 02122    | 4      |

And we see that we have indeed generated a code of minimum distance 3.
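Listing the codewords of a small linear code is also easily automated; a Python sketch (helper name mine, illustrative) that enumerates all GF(3)-combinations of the rows of the generator matrix G found above:

```python
from itertools import product

def codewords(G, q):
    """All codewords generated by the rows of G over GF(q)."""
    n = len(G[0])
    return sorted({tuple(sum(a * row[i] for a, row in zip(coeffs, G)) % q
                         for i in range(n))
                   for coeffs in product(range(q), repeat=len(G))})

G = [[2, 0, 2, 1, 0],
     [1, 1, 0, 0, 1]]
words = codewords(G, 3)
print(len(words))  # 9
# minimum non-zero weight = minimum distance of a linear code
print(min(sum(x != 0 for x in w) for w in words if any(w)))  # 3
```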

It would have been even easier to generate a code as in the example but over GF(2) since then the only restriction on the columns of H is that they be distinct (i.e., the only scalar multiples of a column are the zero column and the column itself).

As a corollary to Theorem 1 we can derive a relationship between the parameters of a linear
code which is known as the *Singleton bound*.

**Corollary**: For any linear (n,k)-code with minimum distance d we have n - k >= d - 1.

*Proof:* Let H be the parity check matrix for the code. By Theorem 1, any d-1 columns of H
are linearly independent, so the column rank of H >= d - 1. But since the column rank = row
rank of H, and H has row rank = n - k, we obtain the desired inequality.

**Definition**: A *Hamming Code* of order r over GF(q) is an (n,k)-code where n = (q^{r}-1)/(q-1)
and k = n - r, with parity check matrix H_{r} an r by n matrix such that the columns of H_{r} are
non-zero and no two columns are scalar multiples of each other.
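Such a matrix H_r can be built explicitly by taking one representative from each scalar class of non-zero r-vectors, say the one whose first non-zero entry is 1; a Python sketch (function name mine, illustrative):

```python
from itertools import product

def hamming_parity_check(r, q):
    """An r x n parity-check matrix H_r for the Hamming code of order r over
    GF(q): one representative (first non-zero entry equal to 1) from each
    scalar class of non-zero r-vectors, n = (q^r - 1)/(q - 1) columns."""
    cols = [v for v in product(range(q), repeat=r)
            if any(v) and v[next(i for i, x in enumerate(v) if x)] == 1]
    return [[col[i] for col in cols] for i in range(r)]  # transpose to r x n

print(len(hamming_parity_check(3, 2)[0]))  # 7  = (2^3 - 1)/(2 - 1)
print(len(hamming_parity_check(3, 3)[0]))  # 13 = (3^3 - 1)/(3 - 1)
```

The column count matches n = (q^r - 1)/(q - 1) because there are q^r - 1 non-zero r-vectors and each scalar class contains q - 1 of them.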

Note that q^{r} - 1 is the number of non-zero r-vectors over GF(q) and that q - 1 is the number
of non-zero scalars, thus n is the maximum number of non-zero r-vectors no two of which are
scalar multiples of each other. It follows immediately from Theorem 1 that the Hamming
codes all have minimum distance exactly 3 and so are 1-error correcting codes. Since the
number of codewords in a Hamming code is q^{k}, a direct computation shows that the equation
in Proposition 1 holds, so:

**Theorem 3** : The Hamming codes of order r over GF(q) are perfect codes.

**Example**: The Hamming code of order r = 3 over GF(2) is defined by the parity-check
matrix

H_{3} = [ 1 0 0 1 0 1 1 ]
        [ 0 1 0 1 1 0 1 ]
        [ 0 0 1 0 1 1 1 ]

This is a (7,4)-code with distance 3. (Re-ordering the columns of H_{3} yields an equivalent code.)
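Since this code is 1-error correcting, a received word with at most one bit error can be decoded by syndrome decoding: the syndrome Hx^{t} of a word with a single error equals the column of H at the error position. A Python sketch (function name mine, illustrative) using the matrix above:

```python
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 1]]

def decode_one_error(word):
    """Correct up to one bit error in a received 7-tuple: a non-zero
    syndrome H x^t equals the column of H at the error position."""
    syndrome = tuple(sum(h * x for h, x in zip(row, word)) % 2 for row in H)
    if any(syndrome):
        pos = list(zip(*H)).index(syndrome)  # locate the matching column
        word = tuple(x ^ (i == pos) for i, x in enumerate(word))
    return tuple(word)

# (1,1,0,1,0,0,0) is a codeword; flip its last bit and decode.
print(decode_one_error((1, 1, 0, 1, 0, 0, 1)))  # (1, 1, 0, 1, 0, 0, 0)
```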

**Example**: The (13,10)-Hamming code of order 3 over GF(3) is defined by the parity-check
matrix

H_{3} = [ 1 0 0 1 0 1 1 2 0 1 2 1 1 ]
        [ 0 1 0 1 1 0 1 1 2 0 1 2 1 ]
        [ 0 0 1 0 1 1 1 0 1 2 1 1 2 ]