
report writing, new model file

Silver-T 2018-05-25 16:43:40 +10:00
parent f9c5ffefea
commit 5c5f0d0860
4 changed files with 38 additions and 18 deletions

View File

@@ -79,7 +79,7 @@ def gen_data(w_path, n_w_path):
imgs_lbl.append(0)
print('Completed: {0}/{1} non-Waldo images'.format(nw+1, total_nw))
-if nw > 50*w:
+if nw > 10*w:
print("Non_Waldo files restricted")
break
else:
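
For context, a minimal sketch of the capping logic this hunk tunes; the function name and list arguments are hypothetical, inferred from the counters nw and w in the hunk:

# Hypothetical sketch: cap non-Waldo samples at a fixed multiple of the
# Waldo count to limit class imbalance (this commit tightens 50x to 10x).
def cap_negatives(waldo_files, non_waldo_files, ratio=10):
    limit = ratio * len(waldo_files)
    if len(non_waldo_files) > limit:
        print("Non_Waldo files restricted")
        return non_waldo_files[:limit]
    return non_waldo_files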

Binary file not shown.

View File

@@ -3,6 +3,8 @@
\usepackage{graphicx} % Used to insert images into the paper
\usepackage{float}
\usepackage{caption}
+\interfootnotelinepenalty=10000 % Stops footnotes overflowing onto the next page
+\usepackage[justification=centering]{caption} % Used for captions
\captionsetup[figure]{font=small} % Makes captions small
\newcommand\tab[1][0.5cm]{\hspace*{#1}} % Defines a new command to use 'tab' in text
@@ -99,16 +101,6 @@
architectures, as this approach is currently the most widely used for image
classification.
-\todo{
-\\A couple of papers that may be useful (if needed):
-- LeNet: http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf
-- AlexNet: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks
-- General comparison of LeNet and AlexNet:
-"On the Performance of GoogLeNet and AlexNet Applied to Sketches", Pedro Ballester and Ricardo Matsumura Araujo
-- Deep NN Architecture:
-https://www-sciencedirect-com.ezproxy.lib.monash.edu.au/science/article/pii/S0925231216315533
-}
\subsection{Classical Machine Learning Methods}
The following paragraphs will give only brief descriptions of the different
@@ -173,7 +165,7 @@
The Fully Convolutional Network (FCN) contains only one dense layer, used for the final binary classification step.
In its place, the FCN includes an extra convolutional layer, giving the network a greater ability to abstract the input data than the other two configurations.
\\
-\textbf{Insert image of LeNet from slides}
+\todo{Insert image of LeNet from slides if time}
\section{Method} \label{sec:method}
\tab
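
A minimal sketch of the FCN-style head described in the hunk above, assuming the channels-first 64x64 inputs used elsewhere in this repo; layer widths are illustrative assumptions, not the report's exact configuration:

# Hypothetical sketch of an FCN-style binary classifier in Keras.
from keras.layers import Input, Conv2D, GlobalAveragePooling2D, Dense
from keras.models import Model

def FCN_sketch():
    inputs = Input(shape=(3, 64, 64))  # channels-first, as in the repo's models
    x = Conv2D(32, (3, 3), activation='relu', data_format='channels_first')(inputs)
    x = Conv2D(64, (3, 3), activation='relu', data_format='channels_first')(x)  # the "extra" conv layer
    x = GlobalAveragePooling2D(data_format='channels_first')(x)
    classif = Dense(1, activation='sigmoid')(x)  # the single dense layer
    return Model(inputs=inputs, outputs=classif)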
@@ -217,8 +209,7 @@
\subsection{Neural Network Testing}\label{nnTesting}
\tab After training each network, a separate test set of images (and labels) was used to evaluate the models.
The result of this testing was expressed primarily as an accuracy (percentage).
-These results as well as the other methods presented in this paper are given in Figure \textbf{[insert ref to results here]} of the Results section.
-\textbf{***********}
+These results, along with those of the other methods presented in this paper, are given in Table \ref{tab:results}.
% Kelvin Start
\subsection{Benchmarking}\label{benchmarking}
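
A minimal sketch of the evaluation step described in the Neural Network Testing hunk above, assuming a compiled Keras model and held-out arrays x_test/y_test (names are assumptions):

# Hypothetical evaluation sketch: accuracy on a held-out test set.
# model.evaluate returns [loss, accuracy] when compiled with metrics=['accuracy'].
loss, acc = model.evaluate(x_test, y_test, verbose=0)
print('Test accuracy: {:.2%}'.format(acc))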
@@ -270,6 +261,35 @@
% Kelvin End
\section{Results} \label{sec:results}
+\tab The time taken to train each of the neural networks and traditional approaches was measured and recorded alongside their accuracy (evaluated using a separate test dataset) in Table \ref{tab:results}.
+% Results table and caption
+\begin{table}[H]
+\centering
+\renewcommand{\arraystretch}{1.5} % Adds some space to the table
+\begin{tabular}{|c|c|c|}
+\hline
+\textbf{Method} & \textbf{Test Accuracy} & \textbf{Training Time (s)}\\
+\hline
+LeNet & 87.86\% & 65.67\\
+\hline
+CNN & 95.63\% & 119.31\\
+\hline
+FCN & 94.66\% & 113.94\\
+\hline
+Support Vector Machine & 83.50\% & 5.90\\
+\hline
+K Nearest Neighbours & 67.96\% & 0.22\\
+\hline
+Gaussian Naive Bayes & 85.44\% & 0.15\\
+\hline
+Random Forest & 92.23\% & 0.92\\
+\hline
+\end{tabular}
+\captionsetup{width=0.70\textwidth}
+\caption{Comparison of the accuracy and training time of each neural network and traditional machine learning technique}
+\label{tab:results}
+\end{table}
\section{Conclusion} \label{sec:conclusion}
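
For reference, a minimal sketch of how the classical baselines in the table above might be trained, timed, and scored with scikit-learn; the stand-in data is an assumption, not the report's dataset:

import time
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

# Stand-in data; the report would use flattened image features instead.
rng = np.random.RandomState(0)
X_train, y_train = rng.rand(200, 64), rng.randint(0, 2, 200)
X_test, y_test = rng.rand(50, 64), rng.randint(0, 2, 50)

models = {
    'Support Vector Machine': SVC(),
    'K Nearest Neighbours': KNeighborsClassifier(),
    'Gaussian Naive Bayes': GaussianNB(),
    'Random Forest': RandomForestClassifier(),
}
for name, clf in models.items():
    start = time.time()
    clf.fit(X_train, y_train)
    elapsed = time.time() - start
    acc = clf.score(X_test, y_test)  # mean accuracy on the held-out set
    print('{}: {:.2%} accuracy, {:.2f}s training'.format(name, acc, elapsed))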

View File

@@ -49,7 +49,7 @@ def CNN():
## Define the model start and end
model = Model(inputs=inputs, outputs=classif)
# Optimizer recommended Adadelta values (lr=0.01)
-model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy', f1])
+model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy'])
return model
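
The f1 metric dropped here refers to a custom helper; a minimal sketch of what such a batchwise F1 metric might look like in Keras, offered as an assumption rather than the repo's actual code:

from keras import backend as K

# Hypothetical batchwise F1 metric for binary classification.
def f1(y_true, y_pred):
    y_pred = K.round(y_pred)
    tp = K.sum(y_true * y_pred)
    precision = tp / (K.sum(y_pred) + K.epsilon())
    recall = tp / (K.sum(y_true) + K.epsilon())
    return 2 * precision * recall / (precision + recall + K.epsilon())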
@@ -79,7 +79,7 @@ def FCN():
## Define the model start and end
model = Model(inputs=inputs, outputs=classif)
# Optimizer recommended Adadelta values (lr=0.01)
-model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy', f1])
+model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy'])
return model
@@ -107,7 +107,7 @@ def LeNet():
## Define the model start and end
model = Model(inputs=inputs, outputs=classif)
-model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy', f1])
+model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy'])
return model
@@ -115,7 +115,7 @@ def LeNet():
AlexNet architecture
'''
def AlexNet():
-inputs = Input(shape=(3, 64, 64))
+#inputs = Input(shape=(3, 64, 64))
return model
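
The AlexNet function is still a stub (it returns model without defining one). A minimal sketch of how the commented-out input could be wired into a compilable model; the layer stack is an illustrative assumption, not the final architecture:

# Hypothetical completion of the AlexNet stub so it returns a valid model.
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from keras.models import Model
from keras.optimizers import Adam

def AlexNet_sketch():
    inputs = Input(shape=(3, 64, 64))
    x = Conv2D(96, (3, 3), activation='relu', data_format='channels_first')(inputs)
    x = MaxPooling2D(pool_size=(2, 2), data_format='channels_first')(x)
    x = Flatten()(x)
    x = Dense(256, activation='relu')(x)
    classif = Dense(1, activation='sigmoid')(x)
    model = Model(inputs=inputs, outputs=classif)
    model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy'])
    return model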