
Merge branch 'master' of github.com:Dekker1/ResearchMethods

Kelvin Davis 2018-05-25 17:17:11 +10:00
commit d57c73be04


@@ -38,10 +38,10 @@
 understand. In this report we compare the well-known machine learning
 methods Naive Bayes, Support Vector Machines, $k$-Nearest Neighbors, and
 Random Forest against the neural network architectures LeNet,
-Convolutional Neural Networks, and Fully Convolutional Neural Networks.
-\todo{I don't like this big summation but I think it is the important
-information}
-Our comparison shows that \todo{...}
+Convolutional Neural Networks, and Fully Convolutional Neural Networks. Our
+comparison shows that, although the different neural network architectures
+have the highest accuracy, some other methods come close with only a
+fraction of the training time.
 \end{abstract}
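For concreteness, the comparison the abstract summarises could be run along the following lines. This is a minimal sketch assuming scikit-learn estimators and synthetic placeholder data; it is not the authors' code, and the dataset here merely stands in for the paper's image crops.

import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder data standing in for flattened image crops.
X, y = make_classification(n_samples=500, n_features=64, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = {
    "Gaussian Naive Bayes": GaussianNB(),
    "Support Vector Machine": SVC(),
    "k-Nearest Neighbours": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(random_state=0),
}

for name, clf in classifiers.items():
    start = time.perf_counter()
    clf.fit(X_train, y_train)             # training time, as reported in the results
    elapsed = time.perf_counter() - start
    accuracy = clf.score(X_test, y_test)  # accuracy on a held-out test split
    print(f"{name}: {accuracy:.2%} accuracy, {elapsed:.2f}s to train")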
@@ -289,7 +289,10 @@
 perform poorly in either precision or recall.
 \section{Results} \label{sec:results}
-\tab The time taken to train each of the neural networks and traditional approaches was measured and recorded alongside their accuracy (evaluated using a separate test dataset) in Table \ref{tab:results}.
+The time taken to train each of the neural networks and traditional
+approaches was measured and recorded alongside their accuracy (evaluated
+using a separate test dataset) in Table \ref{tab:results}; a sketch of
+this measurement loop follows the table.
 % Annealing image and caption
 \begin{table}[H]
@@ -301,7 +304,7 @@
 \hline
 LeNet & 87.86\% & 65.67\\
 \hline
-CNN & 95.63\% & 119.31\\
+CNN & \textbf{95.63\%} & 119.31\\
 \hline
 FCN & 94.66\% & 113.94\\
 \hline
@@ -309,18 +312,35 @@
 \hline
 K Nearest Neighbours & 67.96\% & 0.22\\
 \hline
-Gaussian Naive Bayes & 85.44\% & 0.15\\
+Gaussian Naive Bayes & 85.44\% & \textbf{0.15}\\
 \hline
 Random Forest & 92.23\% & 0.92\\
 \hline
 \end{tabular}
 \captionsetup{width=0.70\textwidth}
-\caption{Comparison of the accuracy and training time of each neural network and traditional machine learning technique}
+\caption{Comparison of the accuracy and training time of each neural
+network and traditional machine learning technique}
 \label{tab:results}
 \end{table}
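The excerpt does not spell out how the neural-network rows of Table \ref{tab:results} were measured. The sketch below shows one plausible way to collect the same numbers, assuming a Keras LeNet-style model; the layer sizes, epoch count, and data are illustrative placeholders, not the authors' configuration.

import time

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data shaped like small image crops (not the Waldo dataset).
X_train = np.random.rand(512, 32, 32, 3).astype("float32")
y_train = np.random.randint(0, 2, 512)
X_test = np.random.rand(128, 32, 32, 3).astype("float32")
y_test = np.random.randint(0, 2, 128)

# A LeNet-style stack: two conv/pool blocks followed by dense layers.
model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(6, 5, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 5, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

start = time.perf_counter()
model.fit(X_train, y_train, epochs=5, verbose=0)  # training time only
train_time = time.perf_counter() - start
_, accuracy = model.evaluate(X_test, y_test, verbose=0)  # held-out accuracy
print(f"LeNet-style CNN: {accuracy:.2%} accuracy, {train_time:.2f}s to train")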
 \section{Conclusion} \label{sec:conclusion}
+Images from the ``Where's Waldo?'' puzzle books are ideal for testing
+image classification techniques. Their wealth of hidden objects and ``red
+herrings'' makes them challenging to classify, but because they are
+drawings they remain easy for the human eye to interpret.
+
+In our experiments we show that, among unspecialized methods, neural
+networks perform best on this kind of image classification task: no
+matter which architecture is used, their accuracy is very high. It should
+be noted, though, that Random Forest performed surprisingly well, coming
+close to the accuracy of the better neural networks. When training time
+is taken into account, it is the clear winner.
+
+It would be interesting to investigate several of these methods further.
+Considerable ground might be gained by using specialized variants of
+these classification algorithms.
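As one hypothetical example of such a specialized variant, the Random Forest result could be revisited with a hyperparameter search. The grid below is purely illustrative and not an experiment from the paper.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder data again; the point is the tuning loop, not the numbers.
X, y = make_classification(n_samples=500, n_features=64, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200],
                "max_depth": [None, 10, 20]},  # hypothetical grid
    cv=3,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))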
 \clearpage % Ensures that the references are on a separate page
 \pagebreak
 \bibliographystyle{alpha}