Evaluating a CNN Model - The Basics
Evaluating models can be streamlined through a couple of simple methods that yield statistics you can reference later. If you've ever read a research paper, you've heard of a model's accuracy, weighted accuracy, recall (sensitivity), specificity, or precision. Keep in mind how misleading these numbers can be. In a later project, when we learn to classify breast cancer as malignant or benign, we'll work with all of these metrics extensively and see how illusory they can be, even taken together, not just as individual metrics.
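As a quick refresher, all of these metrics fall out of a confusion matrix. Here's a minimal sketch using Scikit-Learn's `confusion_matrix()` on some made-up binary labels (the `y_true`/`y_pred` arrays are purely illustrative):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical ground truth and predictions, just for illustration
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# ravel() flattens the 2x2 matrix into (tn, fp, fn, tp)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy    = (tp + tn) / (tp + tn + fp + fn)
recall      = tp / (tp + fn)   # a.k.a. sensitivity
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
```

Notice how all four numbers come from the same four counts; that's part of why a single metric in isolation can paint a rosier picture than the model deserves.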
While a classification report is a handy way of assessing the promise of a model, it isn't enough on its own. It's a great start, though!
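For reference, Scikit-Learn bundles precision, recall, and related statistics into one summary via `classification_report()`. A minimal sketch, again with illustrative placeholder labels:

```python
from sklearn.metrics import classification_report

# Hypothetical ground truth and predictions, just for illustration
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Produces a per-class table of precision, recall, f1-score, and support
report = classification_report(y_true, y_pred,
                               target_names=["benign", "malignant"])
print(report)
```

One call, and you get a per-class breakdown instead of a single headline number.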
First, we'll want to visualize the progress of learning over time. It helps to see the road we took to the goal, not just the goal:
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Wrap the training history in a DataFrame for easy access
model_history = pd.DataFrame(history_basek.history)
model_history['epoch'] = history_basek.epoch

# Number of rows = number of epochs trained
num_epochs = model_history.shape[0]

fig, ax = plt.subplots(2, figsize=(14, 8))

# Accuracy curves on the first subplot
ax[0].plot(np.arange(0, num_epochs), model_history["accuracy"], label="Training Accuracy", lw=3)
ax[0].plot(np.arange(0, num_epochs), model_history["val_accuracy"], label="Validation Accuracy", lw=3)
ax[0].legend()

# Loss curves on the second subplot
ax[1].plot(np.arange(0, num_epochs), model_history["loss"], label="Training Loss", lw=3)
ax[1].plot(np.arange(0, num_epochs), model_history["val_loss"], label="Validation Loss", lw=3)
ax[1].legend()

plt.tight_layout()
plt.show()
```