Merge branch 'master' of github.com:Dekker1/ResearchMethods
commit d57c73be04

@@ -38,10 +38,10 @@
understand. In this report we compare the well-known machine learning
methods Naive Bayes, Support Vector Machines, $k$-Nearest Neighbors, and
Random Forest against the neural network architectures LeNet,
-Convolutional Neural Networks, and Fully Convolutional Neural Networks.
-\todo{I don't like this big summation but I think it is the important
-information}
-Our comparison shows that \todo{...}
+Convolutional Neural Networks, and Fully Convolutional Neural Networks. Our
+comparison shows that, although the different neural network architectures
+have the highest accuracy, some other methods come close with only a
+fraction of the training time.

\end{abstract}

@@ -289,7 +289,10 @@
perform poorly in either precision or recall.

\section{Results} \label{sec:results}
-\tab The time taken to train each of the neural networks and traditional approaches was measured and recorded alongside their accuracy (evaluated using a separate test dataset) in Table \ref{tab:results}.
+
+The time taken to train each of the neural networks and traditional
+approaches was measured and recorded alongside their accuracy (evaluated
+using a separate test dataset) in Table \ref{tab:results}.
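As an illustrative aside (not part of this patch and not the code used for
the experiments), the hedged sketch below shows one way such a measurement
can be taken: time only the training call and evaluate accuracy on a
held-out test set. The scikit-learn classifier and the synthetic stand-in
data are assumptions made for illustration.

\begin{verbatim}
# Hedged sketch: time the training of one classifier and measure its
# accuracy on a separate test set. Synthetic data stands in for the
# image features used in the paper.
from time import perf_counter

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=64, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(random_state=0)
start = perf_counter()
clf.fit(X_train, y_train)          # only the training step is timed
train_time = perf_counter() - start

accuracy = accuracy_score(y_test, clf.predict(X_test))
print(f"accuracy: {accuracy:.2%}, training time: {train_time:.2f}s")
\end{verbatim}

The same pattern would apply to the neural networks, with the timed region
wrapped around their training loop rather than a single fit call.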

% Annealing image and caption
\begin{table}[H]
@@ -301,7 +304,7 @@
\hline
LeNet & 87.86\% & 65.67\\
\hline
-CNN & 95.63\% & 119.31\\
+CNN & \textbf{95.63\%} & 119.31\\
\hline
FCN & 94.66\% & 113.94\\
\hline
@@ -309,18 +312,35 @@
\hline
K Nearest Neighbours & 67.96\% & 0.22\\
\hline
-Gaussian Naive Bayes & 85.44\% & 0.15\\
+Gaussian Naive Bayes & 85.44\% & \textbf{0.15}\\
\hline
Random Forest & 92.23\% & 0.92\\
\hline
\end{tabular}
\captionsetup{width=0.70\textwidth}
-\caption{Comparison of the accuracy and training time of each neural network and traditional machine learning technique}
+\caption{Comparison of the accuracy and training time of each neural
+network and traditional machine learning technique}
\label{tab:results}
\end{table}

\section{Conclusion} \label{sec:conclusion}

+Images from the ``Where's Waldo?'' puzzle books are ideal for testing
+image classification techniques. Their tendency toward hidden objects and
+``red herrings'' makes them challenging to classify, but because they are
+drawings they remain accessible to the human eye.
+
+In our experiments we show that, among unspecialized methods, neural
+networks perform best on this kind of image classification task: no matter
+which architecture is used, their accuracy is very high. Note, however,
+that Random Forest performed surprisingly well, coming close to the
+performance of the better neural networks. When training time is taken
+into account, it is the clear winner.
+
+It would be interesting to investigate several of these methods further.
+Considerable ground could be gained by using specialized variants of these
+classification algorithms.
+
\clearpage % Ensures that the references are on a separate page
\pagebreak
\bibliographystyle{alpha}