AI Training Dashboards
The number of samples the model sees and learns from during training.
The number of held-out samples (ones the model has never seen before) on which the model’s performance is evaluated.
The amount of time taken to train the model.
The number of passes through the entire dataset the model made during training.
The number of individual values in the model’s prediction process whose “trained” values the model had to learn. (More parameters mean a more “complex” model, which requires more training than a model with fewer parameters.)
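As a rough illustration of how parameters add up, here is a sketch for a small fully connected network; the layer sizes are hypothetical, and the formula assumes plain dense layers with one bias per output.

```python
# Hypothetical layer sizes: input, hidden, output.
# Each dense layer has (inputs + 1) * outputs parameters
# (a weight per input, plus one bias, for every output).
layer_sizes = [784, 128, 10]

params = sum((n_in + 1) * n_out
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(params)  # (784+1)*128 + (128+1)*10 = 101770
```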
The number of layers within the model architecture.
A metric for gauging a model’s ability to rank true positives above false positives. (A value of 0.5 means the model does no better than chance; a value of 1.0 means it separates the classes perfectly, with no false positives.)
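One common metric of this kind is ROC AUC, which can be read as the probability that a randomly chosen positive example scores higher than a randomly chosen negative one. A minimal sketch, using made-up labels and scores:

```python
def roc_auc(labels, scores):
    """Pairwise ROC AUC: fraction of positive/negative pairs where the
    positive example scores higher (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labels (1 = positive) and model scores.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(roc_auc(labels, scores))  # 8 of 9 pairs ranked correctly: ~0.889
```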
The fraction of positive cases that the model correctly identified. (e.g. the fraction of all true spam in a spam-detection problem that the model found.)
The fraction of the model’s positive predictions that were correct. (e.g. the fraction of all spam predictions the model made that were actually spam.)
The fraction of all cases the model correctly predicted.
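The three fractions above can be computed directly from confusion-matrix counts. A minimal sketch, using made-up counts from a hypothetical spam-detection run:

```python
# Hypothetical counts: true positives, false positives,
# false negatives, true negatives.
tp, fp, fn, tn = 40, 10, 5, 45

recall = tp / (tp + fn)        # fraction of all true spam the model found
precision = tp / (tp + fp)     # fraction of spam predictions that were actually spam
accuracy = (tp + tn) / (tp + fp + fn + tn)  # fraction of all cases predicted correctly

print(recall, precision, accuracy)  # ~0.889, 0.8, 0.85
```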
The measure of how “wrong” the model’s predictions are. It is used by the model to correct itself during training.
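A minimal sketch of one such measure, mean squared error, which scores how far predictions are from the targets; the values below are made up for illustration.

```python
# Hypothetical model outputs and the correct answers.
preds   = [0.9, 0.2, 0.6]
targets = [1.0, 0.0, 1.0]

# Mean squared error: average of the squared prediction errors.
# Training adjusts the model's parameters to push this value down.
mse = sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)
print(mse)  # ~0.07
```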