Minor report additions
This commit is contained in:
parent 640002813d
commit 6786f9173c
@@ -137,3 +137,12 @@ month={Nov},}
pages={2825--2830},
year={2011}
}
+@misc{bilogur_2017,
+title={Where's Waldo | Kaggle},
+url={https://www.kaggle.com/residentmario/wheres-waldo},
+journal={Countries of the World | Kaggle},
+publisher={Aleksey Bilogur},
+author={Bilogur, Aleksey},
+year={2017},
+month={Oct}
+}
@@ -279,11 +279,11 @@
\hline
\textbf{Method} & \textbf{Test Accuracy} & \textbf{Training Time (s)}\\
\hline
-LeNet & 87.86\% & 65.67\\
+LeNet & 89.81\% & 58.13\\
\hline
-CNN & \textbf{95.63\%} & 119.31\\
+CNN & \textbf{95.63\%} & 113.81\\
\hline
-FCN & 94.66\% & 113.94\\
+FCN & 94.66\% & 117.69\\
\hline
Support Vector Machine & 83.50\% & 5.90\\
\hline
@@ -300,29 +300,36 @@
\label{tab:results}
\end{table}

-We can see by the results that Deep Neural Networks outperform our benchmark
+We can see by in these results that Deep Neural Networks outperform our benchmark
classification models, although the time required to train these networks is
significantly greater.

\section{Conclusion} \label{sec:conclusion}

Image from the ``Where's Waldo?'' puzzle books are ideal images to test
image classification techniques. Their tendency for hidden objects and ``red
herrings'' make them challenging to classify, but because they are drawings
they remain tangible for the human eye.

In our experiments we show that, given unspecialized methods, Neural
Networks perform best on this kind of image classification task. No matter
which architecture their accuracy is very high. One has to note though that
random forest performed surprisingly well, coming close to the performance
of the better Neural Networks. Especially when training time is taking into
account it is the clear winner.

It would be interesting to investigate various of these methods further.
There might be quite a lot of ground that could be gained by using
specialized variants of these clustering algorithms.

-\clearpage % Ensures that the references are on a seperate page
+
+Discussion of the results:
+- Was this what we expected to see?
+- What was surprising?
+- If you take learning time into account, are NN still as good?
+- We also did say we would have these other measures, so we should at least try to include them. Then the question is also what do they show.
+
+\clearpage % Ensures that the references are on a separate page
\pagebreak
\bibliographystyle{alpha}
\bibliography{references}