Report changes
parent 30fbc81e04
commit bf193539f7
@@ -314,20 +314,13 @@
 An additional consideration is the extra layer of abstraction present in the FCN and not the CNN.
 This may indicate that the FCN can achieve better accuracies, given more training time (epochs).
 \\
-% models that learn relationships between pixels outperform those that don't
-\todo{
-Discussion of the results:
-- Was this what we expected to see?
-- What was surprising?
-- If you take learning time into account, are NN still as good?
-- We also did say we would have these other measures, so we should at least try to include them. Then the question is also what do they show.
-}
 \par
 Of the benchmark classifiers we see the best performance with Random
-Forests and the worst performance with K Nearest Neighbours.
-The low training time of the random forests could be due to the task being one of binary classification, and the traversal of binary trees being efficient resulting in low training time.
-In terms of the models' accuracies, this is supported by the rest of the results and comes down to a model's ability to learn hidden relationships between pixels.
-This is made more apparent by performance of the Neural Networks.
+Forests and the worst performance with K Nearest Neighbours. This is consistent with the models' abilities to learn hidden relationships within the data (and K Nearest Neighbours' lack thereof).
+The accuracy of the random forest approach was unexpected, as neural networks have had success in image classification tasks previously.
+However, this may be due to the random forest method's ability to avoid overfitting.
+The low training time of the classical methods could be due to their requiring only one pass over the data to train the model.
+Neural networks require more passes the more they abstract the data (e.g. through convolutions).
 
 \section{Conclusion} \label{sec:conclusion}
 
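As a complement to the discussion above, here is a minimal sketch of how the classical benchmarks mentioned in the diff (Random Forests and K Nearest Neighbours) could be fitted, timed, and scored with scikit-learn. The dataset, feature shapes, and hyperparameters below are illustrative assumptions only and do not reflect the report's actual experimental setup.

import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Illustrative stand-in for the report's image data: flattened pixel
# vectors with binary labels (shapes and values are assumptions).
rng = np.random.default_rng(0)
X = rng.random((2000, 28 * 28))
y = rng.integers(0, 2, size=2000)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

benchmarks = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "K Nearest Neighbours": KNeighborsClassifier(n_neighbors=5),
}

for name, model in benchmarks.items():
    start = time.perf_counter()
    model.fit(X_train, y_train)  # classical models are fitted in a single pass over the data
    train_time = time.perf_counter() - start
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy={acc:.3f}, training time={train_time:.2f}s")

For the neural networks discussed in the diff, training time additionally scales with the number of epochs, since each epoch is a further pass over the data.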