Report changes

This commit is contained in:
Silver-T 2018-05-25 19:18:08 +10:00
parent 30fbc81e04
commit bf193539f7

@@ -314,20 +314,13 @@
An additional consideration is the extra layer of abstraction present in the FCN and not the CNN.
This may indicate that the FCN could achieve better accuracy, given more training time (epochs).
\\
% models that learn relationships between pixels outperform those that don't
\todo{
Discussion of the results:
- Was this what we expected to see?
- What was surprising?
- If you take learning time into account, are NN still as good?
- We also did say we would have these other measures, so we should at least try to include them. Then the question is also what do they show.
}
\par
Of the benchmark classifiers we see the best performance with Random
Forests and the worst with K Nearest Neighbours. This is consistent with the models' abilities to learn hidden relationships within the data (and K Nearest Neighbours' lack thereof).
The accuracy of the random forest approach was unexpected, as neural networks have previously been successful at image classification tasks.
However, this may be due to the random forest method's ability to avoid overfitting.
The low training time of the classical methods could be due to their requiring only a single pass over the data to train the model.
Neural networks require more passes the more they abstract the data (e.g. through convolutions).
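The training-cost contrast above can be sketched with a small timing comparison. This is a hypothetical illustration using scikit-learn on synthetic stand-in data, not the report's actual experiment: the data, model hyperparameters, and epoch count are all assumptions chosen only to show that the classical fits complete quickly while the multi-epoch network costs more per pass.

```python
# Hedged sketch: compare training time of classical classifiers
# (single fit over the data) against a small multi-epoch neural network.
# Synthetic data and hyperparameters are illustrative assumptions,
# not the report's benchmark setup.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))            # stand-in for flattened pixels
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # binary labels

models = [
    ("Random Forest", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("K Nearest Neighbours", KNeighborsClassifier(n_neighbors=5)),
    ("MLP, 20 epochs", MLPClassifier(max_iter=20, random_state=0)),
]

for name, clf in models:
    t0 = time.perf_counter()
    clf.fit(X, y)                          # classical models: one fit pass;
    elapsed = time.perf_counter() - t0     # the MLP iterates over the data
    print(f"{name}: {elapsed:.3f}s, train accuracy {clf.score(X, y):.2f}")
```

Note that K Nearest Neighbours is a lazy learner, so its `fit` call is nearly free; its cost is deferred to prediction time, which is one reason a training-time comparison alone can flatter it.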
\section{Conclusion} \label{sec:conclusion}