
Merge branch 'master' of github.com:Dekker1/ResearchMethods

Kelvin Davis 2018-05-25 17:17:11 +10:00
commit d57c73be04


@@ -38,10 +38,10 @@
 understand. In this report we compare the well-known machine learning
 methods Naive Bayes, Support Vector Machines, $k$-Nearest Neighbors, and
 Random Forest against the Neural Network Architectures LeNet,
-Convolutional Neural Networks, and Fully Convolutional Neural Networks.
-\todo{I don't like this big summation but I think it is the important
-information}
-Our comparison shows that \todo{...}
+Convolutional Neural Networks, and Fully Convolutional Neural Networks. Our
+comparison shows that, although the different neural network architectures
+have the highest accuracy, some other methods come close with only a
+fraction of the training time.
 \end{abstract}
@@ -158,13 +158,13 @@
 of randomness and the mean of these trees is used which avoids this problem.
 \subsection{Neural Network Architectures}
 \tab There are many well established architectures for Neural Networks depending on the task being performed.
 In this paper, the focus is placed on convolutional neural networks, which have been proven to effectively classify images \cite{NIPS2012_4824}.
 One of the pioneering works in the field, the LeNet \cite{726791} architecture, will be implemented to compare against two rudimentary networks with more depth.
 These networks have been constructed to improve on the LeNet architecture by extracting more features, condensing image information, and allowing for more parameters in the network.
 The difference between the two networks is their use of convolutional and dense layers.
 The convolutional neural network contains dense layers in the final stages of the network.
 The Fully Convolutional Network (FCN) contains only one dense layer, for the final binary classification step.
 In its place, the FCN adds an extra convolutional layer, resulting in an increased ability for the network to abstract the input data relative to the other two configurations.
 \\
 \todo{Insert image of LeNet from slides if time}
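The network definitions themselves are not part of this diff. As a rough illustration of the structural difference described in this hunk, here is a hypothetical Keras sketch of the two deeper networks; the layer counts, filter sizes, and the 64x64 RGB input shape are illustrative assumptions, not the authors' actual configuration:

```python
# Hypothetical sketch (not the authors' code) of the CNN/FCN contrast
# described above: the CNN ends in dense layers, while the FCN replaces
# them with an extra convolutional layer plus one final dense classifier.
from tensorflow.keras import layers, models

def build_cnn(input_shape=(64, 64, 3)):
    """Convolutional feature extractor followed by a dense stage."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),   # dense layers in the final stages
        layers.Dense(1, activation="sigmoid"),  # binary: Waldo / not Waldo
    ])

def build_fcn(input_shape=(64, 64, 3)):
    """Same front end, but an extra conv layer instead of the dense stage."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),  # the additional conv layer
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),     # single dense classification layer
    ])
```

The only structural difference is the final stage: the CNN flattens into dense layers, while the FCN spends its parameters on one more convolutional layer and keeps a single dense unit for the binary decision.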
@@ -238,8 +238,8 @@
 chosen to maintain training accuracy while minimizing training time.
 \subsection{Neural Network Testing}\label{nnTesting}
 \tab After training each network, a separate test set of images (and labels) was used to evaluate the models.
 The result of this testing was expressed primarily in the form of an accuracy (percentage).
 These results, along with those of the other methods presented in this paper, are given in Table \ref{tab:results}.
 % Kelvin Start
 \subsection{Benchmarking}\label{benchmarking}
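The benchmarking code is likewise outside this hunk. A minimal sketch of how per-model training time and test accuracy (the two columns of Table \ref{tab:results}) could be collected for the traditional classifiers, assuming flattened image vectors as input; the dataset below is a random stand-in and the hyperparameters are scikit-learn defaults, not the paper's settings:

```python
# Hypothetical benchmarking sketch (not the authors' harness): time the
# fit() call of each traditional classifier and score it on a held-out
# test set, mirroring the accuracy and training-time columns of the table.
import time

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def benchmark(model, X_train, y_train, X_test, y_test):
    start = time.perf_counter()
    model.fit(X_train, y_train)              # only training is timed
    elapsed = time.perf_counter() - start
    accuracy = model.score(X_test, y_test)   # fraction correct on unseen data
    return accuracy, elapsed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((200, 64 * 64))           # stand-in for flattened images
    y = rng.integers(0, 2, size=200)         # stand-in Waldo / not-Waldo labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = {
        "Support Vector Machine": SVC(),
        "K Nearest Neighbours": KNeighborsClassifier(),
        "Gaussian Naive Bayes": GaussianNB(),
        "Random Forest": RandomForestClassifier(n_estimators=100),
    }
    for name, model in models.items():
        acc, secs = benchmark(model, X_train, y_train, X_test, y_test)
        print(f"{name}: accuracy {acc:.2%}, training time {secs:.2f}s")
```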
@@ -289,8 +289,11 @@
 perform poorly in either precision or recall.
 \section{Results} \label{sec:results}
-\tab The time taken to train each of the neural networks and traditional approaches was measured and recorded alongside their accuracy (evaluated using a separate test dataset) in Table \ref{tab:results}.
+The time taken to train each of the neural networks and traditional
+approaches was measured and recorded alongside their accuracy (evaluated
+using a separate test dataset) in Table \ref{tab:results}.
 % Annealing image and caption
 \begin{table}[H]
 \centering
@@ -301,7 +304,7 @@
 \hline
 LeNet & 87.86\% & 65.67\\
 \hline
-CNN & 95.63\% & 119.31\\
+CNN & \textbf{95.63\%} & 119.31\\
 \hline
 FCN & 94.66\% & 113.94\\
 \hline
@@ -309,18 +312,35 @@
 \hline
 K Nearest Neighbours & 67.96\% & 0.22\\
 \hline
-Gaussian Naive Bayes & 85.44\% & 0.15\\
+Gaussian Naive Bayes & 85.44\% & \textbf{0.15}\\
 \hline
 Random Forest & 92.23\% & 0.92\\
 \hline
 \end{tabular}
 \captionsetup{width=0.70\textwidth}
-\caption{Comparison of the accuracy and training time of each neural network and traditional machine learning technique}
+\caption{Comparison of the accuracy and training time of each neural
+network and traditional machine learning technique}
 \label{tab:results}
 \end{table}
 \section{Conclusion} \label{sec:conclusion}
+Images from the ``Where's Waldo?'' puzzle books are ideal for testing
+image classification techniques. Their hidden objects and ``red
+herrings'' make them challenging to classify, but because they are drawings
+they remain intelligible to the human eye.
+In our experiments we show that, given unspecialized methods, Neural
+Networks perform best on this kind of image classification task. No matter
+which architecture is used, their accuracy is very high. It should be noted,
+however, that Random Forest performed surprisingly well, coming close to the
+accuracy of the better Neural Networks. Especially when training time is
+taken into account, it is the clear winner.
+It would be interesting to investigate several of these methods further.
+Considerable ground could be gained by using specialized variants of these
+classification algorithms.
 \clearpage % Ensures that the references are on a separate page
 \pagebreak
 \bibliographystyle{alpha}