Minor report additions

Silver-T 2018-05-25 18:23:56 +10:00
parent 640002813d
commit 6786f9173c
2 changed files with 35 additions and 19 deletions


@@ -137,3 +137,12 @@ month={Nov},}
 pages={2825--2830},
 year={2011}
 }
+@misc{bilogur_2017,
+title={Where's Waldo | Kaggle},
+url={https://www.kaggle.com/residentmario/wheres-waldo},
+journal={Kaggle},
+publisher={Aleksey Bilogur},
+author={Bilogur, Aleksey},
+year={2017},
+month={Oct}
+}


@@ -279,11 +279,11 @@
 \hline
 \textbf{Method} & \textbf{Test Accuracy} & \textbf{Training Time (s)}\\
 \hline
-LeNet & 87.86\% & 65.67\\
+LeNet & 89.81\% & 58.13\\
 \hline
-CNN & \textbf{95.63\%} & 119.31\\
+CNN & \textbf{95.63\%} & 113.81\\
 \hline
-FCN & 94.66\% & 113.94\\
+FCN & 94.66\% & 117.69\\
 \hline
 Support Vector Machine & 83.50\% & 5.90\\
 \hline
@@ -300,7 +300,7 @@
 \label{tab:results}
 \end{table}
-We can see by the results that Deep Neural Networks outperform our benchmark
+We can see in these results that Deep Neural Networks outperform our benchmark
 classification models, although the time required to train these networks is
 significantly greater.
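
The results table above pairs each model's test accuracy with its training time, and the surrounding text argues that the deep networks win on accuracy but not on cost. As a point of reference, here is a minimal sketch of how a row like the Support Vector Machine benchmark could be measured with scikit-learn (which the report already cites); the data shapes, variable names, and random placeholder inputs are assumptions, since the report's experiment code is not part of this commit.

import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data: flattened image patches (X) and binary labels (y).
# The real dataset loading is not shown in this commit, so random
# arrays stand in purely to make the sketch runnable.
rng = np.random.default_rng(0)
X = rng.random((400, 28 * 28))
y = rng.integers(0, 2, size=400)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf")            # benchmark classifier from the table
start = time.time()
clf.fit(X_train, y_train)          # wall-clock training time, as in the table
train_time = time.time() - start

accuracy = clf.score(X_test, y_test)   # test-accuracy column
print(f"SVM: accuracy={accuracy:.2%}, training time={train_time:.2f}s")

The same fit/score/timing pattern would apply to the LeNet, CNN, and FCN rows, with the classifier swapped for the corresponding network.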
@@ -322,7 +322,14 @@
 There might be quite a lot of ground that could be gained by using
 specialized variants of these clustering algorithms.
-\clearpage % Ensures that the references are on a seperate page
+Discussion of the results:
+- Was this what we expected to see?
+- What was surprising?
+- If you take learning time into account, are NN still as good?
+- We also did say we would have these other measures, so we should at least try to include them. Then the question is also what do they show.
+\clearpage % Ensures that the references are on a separate page
 \pagebreak
 \bibliographystyle{alpha}
 \bibliography{references}
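
One of the discussion notes added in this commit points out that the report promised measures beyond plain accuracy. Below is a minimal sketch of the kind of additional metrics (precision, recall, F1, and a confusion matrix) that scikit-learn can report, assuming y_test and y_pred come from one of the trained models; the small label arrays here are illustrative only.

from sklearn.metrics import (classification_report, confusion_matrix,
                             precision_recall_fscore_support)

# Illustrative labels; in the report these would be the held-out test
# labels and the predictions of one of the models from the table.
y_test = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, y_pred, average="binary")
print(f"precision={precision:.2f}, recall={recall:.2f}, F1={f1:.2f}")

# A confusion matrix and per-class summary give a fuller picture than
# the single accuracy figure in the results table.
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))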