ECE 417, Lecture 6: Discrete Cosine Transform
Mark Hasegawa-Johnson
9/6/2019

Outline
DCT
KNN
How to draw the contour plots of a multivariate Gaussian pdf

Discrete Cosine Transform

Last time: PCA. Why it's useful: PCs are uncorrelated with one another, so you can keep just the top N (for N < D, the dimension of the original feature vector).

What is the PCA model of the set of all natural images?

A model of natural images:
1. Choose an object of a random color,
2. Make it a random size,
3. Position it at a random location in the image,
4. Repeat.

Result: PCA model of natural images = DFT!

Define the 2D DFT, X[k_1, k_2], as

X[k_1, k_2] = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} x[n_1, n_2] e^{-j 2\pi (k_1 n_1 / N_1 + k_2 n_2 / N_2)}

It turns out that the pixels, x[n_1, n_2], are highly correlated with one another (often exactly the same!). But on average, as # images → ∞, the DFT coefficients become uncorrelated with
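As a quick numerical check of the definition above, the following sketch evaluates the 2D DFT double sum directly on a tiny array and compares it against numpy's fast implementation (the array contents are arbitrary illustration data):

```python
import numpy as np

# Tiny "image" for illustration (the values are arbitrary).
x = np.arange(16.0).reshape(4, 4)
N1, N2 = x.shape

# 2D DFT computed directly from the double-sum definition.
X = np.zeros((N1, N2), dtype=complex)
for k1 in range(N1):
    for k2 in range(N2):
        for n1 in range(N1):
            for n2 in range(N2):
                X[k1, k2] += x[n1, n2] * np.exp(
                    -2j * np.pi * (k1 * n1 / N1 + k2 * n2 / N2))

# The fast implementation gives the same coefficients.
assert np.allclose(X, np.fft.fft2(x))
```

Note that X[0, 0] is just the sum of all pixels, consistent with the k_1 = k_2 = 0 basis vector measuring average intensity.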

one another (because object sizes are drawn at random).

2D DFT as a vector transform

Suppose we vectorize the image, for example in raster-scan order, so that

x = [x[0,0], x[0,1], …, x[0,N-1], x[1,0], …, x[N-1,N-1]]^T,

and suppose we invent some mapping from (k_1, k_2) to a single index i; for example, it could be in diagonal order: (0,0), (0,1), (1,0), (0,2), (1,1), (2,0), …. Then the features are X_i = v_i^H x, with basis vectors v_i whose elements are the complex exponentials e^{j 2\pi (k_1 n_1 / N_1 + k_2 n_2 / N_2)}.

The problem with the DFT

is that it's complex-valued! That makes it hard to do some types of statistical analysis and machine learning (some types of derivatives, for example, are not defined if the variable is complex-valued).

How to make the DFT real

The DFT of a real, symmetric sequence is real and symmetric. Most natural images are real-valued.
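The "2D DFT as a vector transform" idea can be sketched numerically: build the matrix W whose rows are the (conjugated) basis vectors, in raster-scan order for both pixels and frequencies, and confirm that W x reproduces the 2D FFT. The variable names and the choice of raster (rather than diagonal) frequency ordering are mine, for simplicity:

```python
import numpy as np

N = 4
x_img = np.random.RandomState(0).rand(N, N)
x = x_img.flatten()            # raster-scan vectorization: index m = n1*N + n2

# Build the DFT matrix W: row i = (k1, k2) in raster order,
# column m = (n1, n2) in raster order.
W = np.zeros((N * N, N * N), dtype=complex)
for k1 in range(N):
    for k2 in range(N):
        for n1 in range(N):
            for n2 in range(N):
                W[k1 * N + k2, n1 * N + n2] = np.exp(
                    -2j * np.pi * (k1 * n1 + k2 * n2) / N)

X = W @ x                      # each feature X_i is an inner product with one basis row
assert np.allclose(X.reshape(N, N), np.fft.fft2(x_img))
```

This makes the "vector transform" view concrete: the DFT is just a (complex) linear map applied to the vectorized image.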

Let's also make it symmetric: pretend that the observed image is just one quadrant of a larger, mirrored image.

Discrete Cosine Transform

Mirror the image, take its DFT, and combine each pair of complex exponentials using the identity

e^{j\theta} + e^{-j\theta} = 2 \cos(\theta).

The result is the 2D DCT:

X_{DCT}[k_1, k_2] = \sum_{n_1=0}^{N-1} \sum_{n_2=0}^{N-1} x[n_1, n_2] \cos\left(\frac{\pi k_1 (n_1 + \frac{1}{2})}{N}\right) \cos\left(\frac{\pi k_2 (n_2 + \frac{1}{2})}{N}\right)

2D DCT as a vector transform

Assume that you have some reasonable mapping from (k_1, k_2) to i, and from (n_1, n_2) to m. Then X_i = v_i^T x, where x is the vectorized image, and the elements of v_i are products of the cosine terms above.

Basis Images: 9th-order 2D DCT
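The mirror-then-DFT construction can be checked numerically. The sketch below (1D for clarity; variable names are mine) computes the DCT from the cosine-sum definition, then mirrors the signal, takes its 2N-point DFT, and verifies that, up to a half-sample phase shift, the DFT of the mirrored signal is real and equals twice the DCT:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
x = rng.random(N)

# DCT-II from the cosine-sum definition (no normalization factor).
k = np.arange(N)
n = np.arange(N)
C = np.cos(np.pi * np.outer(k, 2 * n + 1) / (2 * N))
X_dct = C @ x

# Mirror the signal: y = [x[0..N-1], x[N-1..0]] has even symmetry.
y = np.concatenate([x, x[::-1]])
Y = np.fft.fft(y)                      # 2N-point DFT of the mirrored signal

# Up to a half-sample phase shift, the DFT of the mirrored signal
# is real, and its first N values equal twice the DCT.
k2 = np.arange(2 * N)
Z = np.exp(-1j * np.pi * k2 / (2 * N)) * Y
assert np.allclose(Z.imag, 0, atol=1e-9)
assert np.allclose(Z[:N].real, 2 * X_dct)
```

This is exactly the sense in which the DCT is "half of the real, symmetric DFT of a real, mirrored image."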

The k1=0, k2=0 case represents the average intensity of all pixels in the image. The k1=1 or k2=1 basis vectors capture the brightness gradient from top to bottom, or from left to right, respectively. The k1=2 or k2=2 basis vectors capture the difference in pixel intensity between the center and the edges of the image.

Nearest neighbors: 9th-order 2D DCT

This image shows the four nearest neighbors of Image 0 (Arnold Schwarzenegger) and Image 47 (Jiang Zemin), calculated using a 9th-order 2D DCT.

Neighbors of Image 0 are dark on the right-hand side and in the lower-left corner. Neighbors of Image 47 are darker on the bottom than the top. Neither of these features captures person identity very well.

Basis Images: 36th-order 2D DCT

With a 36th-order DCT (up to k1=5, k2=5), we can get a bit more detail about the image.

Nearest neighbors: 36th-order 2D DCT

The 36th-order DCT is, at least, capturing the face orientation: most of the images considered similar are at least looking in the same direction.

Jiang Zemin seems to be correctly identified (2 of the 4 neighbors are the same person), but Arnold Schwarzenegger isn't (each of the 4 similar images shows a different person!).

PCA model of natural images vs. DCT

PCA is like DCT in some ways. In this example, one principal component might be measuring average brightness; another, the left-to-right

gradient; another, the contrast between the center and the edges. But PCA can also learn what's important to represent the sample covariance of the given data: for example, eyeglasses, short vs. long nose, narrow vs. wide chin.

Nearest neighbors: 9th-order PCA model of natural images

For these two test images, 9th-order PCA has managed to identify both people. Two of the four neighbors of Image 0 are Arnold Schwarzenegger. Three of the four neighbors of Image 47 are Jiang Zemin.

High-order PCA model of natural images might be just noise!

It is not always true that PCA outperforms DCT. Especially for higher-dimensional feature vectors, PCA might just learn random variation in the training dataset, which might not be useful for identifying person identity.

Summary

As # images → ∞, PCA of randomly generated images → DFT. DCT = half of the real, symmetric DFT of a real, mirrored image. As the order of the DCT grows, details of the image start to affect its nearest

neighbor calculations, allowing it to capture more about person identity. PCA can pick out some details with smaller feature vectors than DCT, because it models the particular problem under study (human faces) rather than a theoretical model of all natural images. With larger feature vectors, PCA tends to learn quirks of the given dataset, which are usually not useful for person identification. DCT is a bit more robust (maybe because it uses a fixed basis, rather than one learned from the data).

Outline
DCT

KNN
How to draw the contour plots of a multivariate Gaussian pdf

K-Nearest Neighbors (KNN) Classifier
1. To classify each test token, find the K training tokens that are closest.
2. Look up the reference labels (known true person IDs) of those K neighbors. Let them vote. If there is a winner, then use that person ID as the hypothesis for the test token. If there is no winner, then fall back to 1NN.
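The two steps above, including the fall-back-to-1NN tie-break, can be sketched as follows. This is a minimal numpy sketch: the function name, the Euclidean distance metric, and the toy data are my own choices, not from the lecture:

```python
import numpy as np
from collections import Counter

def knn_classify(test_x, train_X, train_labels, K):
    """Classify one test token by majority vote among its K nearest
    training tokens; fall back to the single nearest neighbor on a tie."""
    dists = np.linalg.norm(train_X - test_x, axis=1)
    order = np.argsort(dists)
    neighbors = [train_labels[i] for i in order[:K]]
    counts = Counter(neighbors).most_common()
    # A unique winner exists if the top count beats the runner-up.
    if len(counts) == 1 or counts[0][1] > counts[1][1]:
        return counts[0][0]
    return train_labels[order[0]]      # no winner: fall back to 1NN

# Toy example: two classes of 2D feature vectors.
train_X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
train_labels = [0, 0, 1, 1]
print(knn_classify(np.array([0.05, 0.0]), train_X, train_labels, K=3))  # → 0
```

In the lecture's setting, each row of `train_X` would be a DCT or PCA feature vector, and the labels would be person IDs.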

Confusion Matrix

                        Hypothesis
Reference        0        1        2        3
    0         C(0|0)   C(1|0)     …        …
    1            …        …        …        …
    2            …        …        …        …
    3            …        …        …        …

C(0|0) = # times that person 0 was classified correctly.
C(1|0) = # times that person 0 was classified as person 1.
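A confusion matrix in the C(hypothesis|reference) notation above can be accumulated with a few lines of numpy (the function name and the toy label sequences are mine):

```python
import numpy as np

def confusion_matrix(reference, hypothesis, n_classes):
    """C[h, r] = # times reference class r was classified as class h,
    i.e. C(h|r) in the notation of the slide."""
    C = np.zeros((n_classes, n_classes), dtype=int)
    for r, h in zip(reference, hypothesis):
        C[h, r] += 1
    return C

# Toy reference labels and classifier hypotheses.
ref = [0, 0, 0, 1, 1, 2, 3]
hyp = [0, 0, 1, 1, 1, 2, 0]
C = confusion_matrix(ref, hyp, 4)
print(C[0, 0])   # C(0|0): person 0 classified correctly (2 times)
print(C[1, 0])   # C(1|0): person 0 classified as person 1 (1 time)
```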

Accuracy, Recall, and Precision

Accuracy = (# tokens classified correctly) / (total # tokens) = Σ_k C(k|k) / Σ_h Σ_r C(h|r)
Recall(r) = C(r|r) / Σ_h C(h|r)  (fraction of reference-class-r tokens that were classified correctly)
Precision(h) = C(h|h) / Σ_r C(h|r)  (fraction of tokens hypothesized as class h that were correct)

Outline
DCT
KNN
How to draw the contour plots of a multivariate Gaussian pdf
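Using the C(h|r) confusion-matrix convention (rows = hypothesis, columns = reference), these three metrics reduce to one-liners. The example matrix below is made-up illustration data:

```python
import numpy as np

# C[h, r] = # times reference class r was hypothesized as class h.
C = np.array([[2, 0, 1, 0],
              [1, 2, 0, 0],
              [0, 0, 3, 0],
              [0, 1, 0, 2]])

accuracy = np.trace(C) / C.sum()        # correct / total
recall = np.diag(C) / C.sum(axis=0)     # per reference class r: C(r|r) / sum_h C(h|r)
precision = np.diag(C) / C.sum(axis=1)  # per hypothesis class h: C(h|h) / sum_r C(h|r)
print(accuracy)                         # 9/12 = 0.75
```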

The Multivariate Gaussian probability density function

If the D dimensions of x are jointly Gaussian, then we can write their joint probability density function (pdf) as

f(x) = \frac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} e^{-\frac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu)}

The exponent is sometimes called the Mahalanobis distance (with weight matrix \Sigma^{-1}) between x and \mu (named after Prasanta Chandra Mahalanobis, 1893-1972):

d_{\Sigma^{-1}}(x, \mu) = (x-\mu)^T \Sigma^{-1} (x-\mu)

Contour lines of a Gaussian pdf
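The pdf formula can be evaluated directly from the definition. A minimal sketch (the function name is mine; the mean and covariance match the worked example later in the lecture):

```python
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    """Multivariate Gaussian pdf, computed straight from the formula."""
    D = len(mu)
    diff = x - mu
    d_mahal = diff @ np.linalg.inv(Sigma) @ diff   # Mahalanobis distance
    norm = (2 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * d_mahal) / norm

mu = np.array([1.0, 1.0])
Sigma = np.array([[1.0, 1.0], [1.0, 4.0]])
# At x = mu the Mahalanobis distance is 0, so the pdf takes its
# maximum value 1 / (2*pi*sqrt(det(Sigma))) = 1 / (2*pi*sqrt(3)).
print(gaussian_pdf(mu, mu, Sigma))
```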

The contour lines of a Gaussian pdf are the lines of constant Mahalanobis distance between x and \mu. For example: f(x) takes its maximum value when d_{\Sigma^{-1}}(x, \mu) = 0, which happens when x = \mu; f(x) falls to e^{-1/2} times its maximum when d_{\Sigma^{-1}}(x, \mu) = 1, which happens on the ellipse one "standard deviation" away from \mu.

Inverse of a positive definite matrix

If \Sigma is positive definite, with eigendecomposition \Sigma = V \Lambda V^T, then its inverse is:

\Sigma^{-1} = V \Lambda^{-1} V^T

Proof: \Sigma \Sigma^{-1} = V \Lambda V^T V \Lambda^{-1} V^T = V \Lambda \Lambda^{-1} V^T = V V^T = I, since the eigenvectors are orthonormal (V^T V = I). So \Sigma^{-1} = V \Lambda^{-1} V^T.

Facts about ellipses

The formula d = (x-\mu)^T \Sigma^{-1} (x-\mu), where
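The eigendecomposition identity above is easy to verify numerically (the covariance matrix here matches the worked example):

```python
import numpy as np

Sigma = np.array([[1.0, 1.0], [1.0, 4.0]])     # positive definite
lam, V = np.linalg.eigh(Sigma)                  # Sigma = V @ diag(lam) @ V.T

Sigma_inv = V @ np.diag(1.0 / lam) @ V.T        # V Lambda^{-1} V^T
assert np.allclose(Sigma_inv, np.linalg.inv(Sigma))
assert np.allclose(Sigma @ Sigma_inv, np.eye(2))
```

Inverting the diagonal eigenvalue matrix is trivial, which is why this decomposition is so convenient for working with Gaussian pdfs.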

or equivalently

d = \sum_i \frac{(v_i^T (x-\mu))^2}{\lambda_i}, with \Sigma = V \Lambda V^T,

is the formula for an ellipsoid (in 2D, an ellipse). The principal axes are the eigenvectors v_1, v_2, …. The radius of the ellipse in the direction v_i is \sqrt{\lambda_i d}. If \lambda_1 = \lambda_2, then it's a circle.

Example

Suppose that X_1 and X_2 are linearly correlated Gaussians with means 1 and 1, respectively, and with variances 1 and 4, and covariance 1.

Remember the definitions of variance and covariance: \sigma_i^2 = E[(X_i - \mu_i)^2], \sigma_{12} = E[(X_1 - \mu_1)(X_2 - \mu_2)].

Example

We have that

\mu = [1, 1]^T,   \Sigma = [[1, 1], [1, 4]]

We get the eigenvalues from the determinant equation:

\det(\Sigma - \lambda I) = (1-\lambda)(4-\lambda) - 1 = \lambda^2 - 5\lambda + 3,

which equals zero for \lambda = (5 \pm \sqrt{13})/2, i.e. \lambda_1 \approx 4.30 and \lambda_2 \approx 0.70. We get the eigenvectors by solving (\Sigma - \lambda_i I) v_i = 0, which gives v_i proportional to [1, \lambda_i - 1]^T.

Example

So the principal axes of the ellipse are in the directions v_1 \propto [1, \lambda_1 - 1]^T and v_2 \propto [1, \lambda_2 - 1]^T.

Summary

In fact, it's useful to talk about x in this way: The first principal component, y_1 = v_1^T (x - \mu), is the part of x that's in the v_1 direction. It has a variance of \lambda_1. The second principal component, y_2 = v_2^T (x - \mu), is the part of x that's in the v_2 direction. It has a variance of \lambda_2. The principal components are uncorrelated with each other. If x is Gaussian, then y_1 and y_2 are independent Gaussian random variables.
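The worked example and the summary can both be checked numerically: the sketch below confirms the eigenvalues \lambda = (5 \pm \sqrt{13})/2 and the eigenvector direction [1, \lambda - 1]^T, then draws a large Gaussian sample and verifies that the principal components are (approximately) uncorrelated with variances equal to the eigenvalues. The sample size and tolerance are my own choices:

```python
import numpy as np

mu = np.array([1.0, 1.0])
Sigma = np.array([[1.0, 1.0], [1.0, 4.0]])
lam, V = np.linalg.eigh(Sigma)                  # eigenvalues in ascending order

# Roots of lambda^2 - 5*lambda + 3 = 0:
expected = np.array([(5 - np.sqrt(13)) / 2, (5 + np.sqrt(13)) / 2])
assert np.allclose(lam, expected)

# Each eigenvector is proportional to [1, lambda - 1]^T.
for i in range(2):
    v = np.array([1.0, lam[i] - 1.0])
    v /= np.linalg.norm(v)
    assert np.isclose(abs(V[:, i] @ v), 1.0)    # same direction, up to sign

# Principal components y_i = v_i^T (x - mu) of Gaussian samples are
# uncorrelated, with variances equal to the eigenvalues.
rng = np.random.default_rng(0)
xs = rng.multivariate_normal(mu, Sigma, size=100_000)
ys = (xs - mu) @ V
print(np.round(np.cov(ys.T), 2))                # approximately diag(lam)
```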