From 30fbc81e049cc932ef4587537ce0e206ad3fb040 Mon Sep 17 00:00:00 2001
From: Silver-T
Date: Fri, 25 May 2018 19:00:56 +1000
Subject: [PATCH] Reports edits

---
 mini_proj/report/waldo.tex | 41 +++++++++++++++++++-------------------
 1 file changed, 21 insertions(+), 20 deletions(-)

diff --git a/mini_proj/report/waldo.tex b/mini_proj/report/waldo.tex
index ad8c301..2e52779 100644
--- a/mini_proj/report/waldo.tex
+++ b/mini_proj/report/waldo.tex
@@ -302,25 +302,34 @@
   Random Forest & 92.23\% & 0.92\\
   \hline
   \end{tabular}
-  \captionsetup{width=0.70\textwidth}
+  \captionsetup{width=0.80\textwidth}
   \caption{Comparison of the accuracy and training time of each neural network and traditional machine learning technique}
   \label{tab:results}
 \end{table}
-  We can see by the results that Deep Neural Networks outperform our benchmark
-  classification models, although the time required to train these networks is
-  significantly greater.
-
-  % models that learn relationships between pixels outperform those that don't
-
+  \par
+  These results show that Deep Neural Networks outperform our benchmark classification models in terms of the accuracy they achieve.
+  However, the time required to train these networks is significantly greater.
+  An additional consideration is the extra layer of abstraction present in the FCN but not in the CNN.
+  This suggests that the FCN could achieve higher accuracy given more training time (epochs).
+  \\
+  % models that learn relationships between pixels outperform those that don't
+  \todo{
+    Discussion of the results:
+    - Was this what we expected to see?
+    - What was surprising?
+    - If you take learning time into account, are NN still as good?
+    - We also did say we would have these other measures, so we should at least try to include them. Then the question is also what do they show.
+  }
+  \par
   Of the benchmark classifiers we see the best performance with Random
-  Forests and the worst performance with K Nearest Neighbours. As supported
-  by the rest of the results, this comes down to a models ability to learn
-  the hidden relationships between the pixels. This is made more apparent by
-  performance of the Neural Networks.
+  Forests and the worst performance with K Nearest Neighbours.
+  The low training time of the Random Forests could be because the task is one of binary classification, and traversing binary decision trees is efficient.
+  In terms of accuracy, this ranking is supported by the rest of the results and comes down to a model's ability to learn hidden relationships between pixels.
+  This is made more apparent by the performance of the Neural Networks.
 
-  \section{Conclusion} \label{sec:conclusion}
+  \section{Conclusion} \label{sec:conclusion}
 
   Images from the ``Where's Waldo?'' puzzle books are ideal images to test
   image classification techniques. Their tendency for hidden objects and ``red
@@ -337,14 +346,6 @@
   It would be interesting to investigate several of these methods further.
   There might be quite a lot of ground that could be gained by using
   specialized variants of these clustering algorithms.
-
-
-  Discussion of the results:
-  - Was this what we expected to see?
-  - What was surprising?
-  - If you take learning time into account, are NN still as good?
-  - We also did say we would have these other measures, so we should at least try to include them. Then the question is also what do they show.
-
 \clearpage % Ensures that the references are on a separate page
 \pagebreak
 \bibliographystyle{alpha}