This image very faintly pulled out the vertical lines along the left-hand and right-hand sides of the image. The lines in the middle of the image all but disappeared, and this may be because the filter multiplied the value of the center pixel in each neighborhood by -4, thereby highlighting the edges of the features in the image.
This image pulled out the vertical lines in the image and made them really stand out.
This image essentially just returned the original image to me. I believe this is because the filter weights were so evenly spread out that the changes applied across each neighborhood of pixels were well balanced and minimal, so very little was actually being changed.
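As a rough illustration of the three effects above (the exact kernels I used aren't shown here, so these are guesses at filters that behave similarly), here is a small Python/NumPy sketch that applies a vertical-edge filter, a filter with a -4 center weight, and an evenly spread averaging filter to a synthetic striped image:

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale image with vertical stripes, standing in for the photo above.
image = np.zeros((64, 64))
image[:, ::8] = 255.0

# Hypothetical kernels chosen to mimic the three effects described above.
vertical_edge = np.array([[-1, 0, 1],
                          [-2, 0, 2],
                          [-1, 0, 1]], dtype=float)   # emphasizes vertical lines
center_minus4 = np.array([[0,  1, 0],
                          [1, -4, 1],
                          [0,  1, 0]], dtype=float)   # center pixel weighted by -4
even_spread   = np.full((3, 3), 1.0 / 9.0)            # balanced weights, nearly unchanged output

for name, kernel in [("vertical edge", vertical_edge),
                     ("center -4", center_minus4),
                     ("evenly spread", even_spread)]:
    filtered = ndimage.convolve(image, kernel)        # slide the kernel over every pixel
    filtered = np.clip(filtered, 0, 255)              # keep values in a displayable range
    print(f"{name}: min={filtered.min():.0f}, max={filtered.max():.0f}")
```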
We are essentially convolving these images. A convolution is a filter that passes over an image, processes it, and extracts features that show a commonality in the image. The way a convolution works is that it scans every pixel in the image and looks at its neighboring pixels. The value of each pixel is then multiplied by the equivalent weight in a filter to help identify features/commonalities in the image. This ability to extract commonalities in images is what makes a convolving filter so useful for computer vision; by allowing the computer to pick up on commonalities, the computer has more flexibility/leeway in terms of identifying features in images. With convolving filters, the computer is no longer just looking at the raw pixels of an image but is now able to identify the actual features that make up an object, so there is more flexibility in terms of correctly identifying objects no matter what form/perspective they show up in.
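To make the pixel-by-neighbor arithmetic concrete, here is a minimal sketch in plain NumPy (the function and variable names are my own, not from the exercise): for every pixel, the 3x3 neighborhood around it is multiplied element-wise by the filter weights and summed into the output.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution: for each pixel, multiply its 3x3
    neighborhood by the filter weights and sum the results."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))                      # edge pixels are skipped for simplicity
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighborhood = image[y - 1:y + 2, x - 1:x + 2]
            out[y - 1, x - 1] = np.sum(neighborhood * kernel)
    return out

# Tiny example: a vertical-edge filter applied to a 5x5 image with one vertical edge.
image = np.array([[0, 0, 255, 255, 255]] * 5, dtype=float)
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
print(convolve2d(image, kernel))                        # large values mark the vertical edge
```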
Essentially, after pooling my image with a 2x2 filter, the horizontal and vertical lines now appear to stand out more than in the original image. In this case it seems that the MaxPooling filter has been applied. MaxPooling takes blocks of pixels (in this case, a block that was 2 pixels by 2 pixels, or four total pixels) and keeps only the pixel with the largest value in each block. This allows the computer to pull out important features and get rid of extraneous information.
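Here is a minimal sketch of 2x2 max pooling in NumPy (assuming the image has even width and height): each non-overlapping 2x2 block is reduced to its single largest value, halving both dimensions.

```python
import numpy as np

def max_pool_2x2(image):
    """Reduce each non-overlapping 2x2 block to its maximum value."""
    h, w = image.shape
    pooled = np.zeros((h // 2, w // 2))
    for y in range(0, h - 1, 2):
        for x in range(0, w - 1, 2):
            pooled[y // 2, x // 2] = np.max(image[y:y + 2, x:x + 2])
    return pooled

image = np.array([[1, 3, 2, 0],
                  [4, 2, 1, 1],
                  [0, 0, 5, 6],
                  [1, 2, 7, 8]], dtype=float)
print(max_pool_2x2(image))   # [[4. 2.]
                             #  [2. 8.]]
```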
Using a CNN increased my accuracy on the MNIST dataset; the accuracy was already pretty high with the DNN (about 97%), but with the CNN my accuracy was a bit higher at nearly 100%.
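For context, here is a minimal sketch of the kind of CNN used here, assuming the standard tf.keras MNIST setup (the exact architecture from my notebook isn't reproduced, so the layer sizes are illustrative):

```python
import tensorflow as tf

# Load and normalize MNIST; Conv2D expects a channel dimension.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1) / 255.0
x_test = x_test.reshape(-1, 28, 28, 1) / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))
```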
You can see in the plot above that as the images are convolved, the computer begins to pull out common features in each number. For example, with the number seven, the horizontal line at the top of the image and the vertical line extending from it are pulled out and highlighted, which helps the computer mark this number as a seven. With each convolution, part of a defining feature of a number is pulled out, and then in the next round this feature is built on until a number can be identified from the highlighted features. As I changed the 32s to 64s, the accuracy and training time decreased, whereas when I changed the 32s to 16s, the accuracy and training time both increased. Furthermore, as I added more convolution layers, both the accuracy and training time increased.
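A sketch of how that filter-count experiment can be run, using a hypothetical build_model helper with the same illustrative architecture as the previous snippet (actual accuracies and times will vary by hardware and run):

```python
import time
import tensorflow as tf

def build_model(filters):
    """Hypothetical helper: same layer stack as above, but with a
    configurable number of filters (16, 32, or 64) in each Conv2D layer."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Conv2D(filters, (3, 3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1) / 255.0
x_test = x_test.reshape(-1, 28, 28, 1) / 255.0

for filters in (16, 32, 64):
    model = build_model(filters)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    start = time.time()
    model.fit(x_train, y_train, epochs=5, verbose=0)
    loss, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{filters} filters: test accuracy {acc:.4f}, "
          f"training time {time.time() - start:.1f}s")
```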