Evaluating a CNN Model - The Basics

David Landup

Evaluating models can be streamlined through a couple of simple methods that yield stats you can reference later. If you've ever read a research paper, you've seen a model's accuracy, weighted accuracy, recall (sensitivity), specificity, or precision. Keep in mind how misleading these numbers can be. In a later project, when we learn to classify breast cancer as malignant or benign, we'll deal with all of these metrics extensively and see how illusory they can be even in combination, not just as individual metrics.
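
In Keras, the most direct of these methods is evaluate(), which runs a dataset through the trained model and returns the loss alongside whatever metrics were passed at compile time. Here's a minimal sketch, assuming a compiled model named model, a held-out test set X_test/y_test, and accuracy as the only compiled metric (none of these are defined in this lesson):

# `model`, `X_test` and `y_test` are assumed to exist from earlier steps.
# evaluate() returns the loss first, then each compiled metric - here we
# assume accuracy was the only one.
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
print(f"Test loss: {test_loss:.4f} - Test accuracy: {test_acc:.4f}")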

While these are handy ways of assessing the promise of a model, just getting an accuracy figure or a classification report isn't enough. It's a great start, though!
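
Producing a classification report only takes a few lines with scikit-learn. A sketch under the same assumptions (a trained classifier model and a test set X_test/y_test with integer class labels):

import numpy as np
from sklearn.metrics import classification_report

# Predicted class probabilities -> most probable class per sample
y_pred = np.argmax(model.predict(X_test), axis=1)

# Per-class precision, recall, F1-score and support in one table.
# If y_test is one-hot encoded, convert it first: np.argmax(y_test, axis=1)
print(classification_report(y_test, y_pred))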

Learning Curves

We'll want to visualize the progress of learning through time first. It helps to see the road we took to the goal, not just the goal:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Wrap the History object in a DataFrame for convenient plotting
model_history = pd.DataFrame(history_basek.history)
model_history['epoch'] = history_basek.epoch

fig, ax = plt.subplots(2, figsize=(14,8))
num_epochs = model_history.shape[0]

# Accuracy curves - training vs. validation
ax[0].plot(np.arange(0, num_epochs), model_history["accuracy"], label="Training Accuracy", lw=3)
ax[0].plot(np.arange(0, num_epochs), model_history["val_accuracy"], label="Validation Accuracy", lw=3)

# Loss curves - training vs. validation
ax[1].plot(np.arange(0, num_epochs), model_history["loss"], label="Training Loss", lw=3)
ax[1].plot(np.arange(0, num_epochs), model_history["val_loss"], label="Validation Loss", lw=3)

ax[0].legend()
ax[1].legend()

plt.tight_layout()
plt.show()
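
With the plot in hand, the gap between the training and validation curves is the main thing to watch: training accuracy that keeps climbing while validation accuracy stalls or drops (and validation loss rises) is the classic sign of overfitting.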