
Machine Learning Model Training: What It Is and Why It’s Important

David Weedmark | 2021-11-02 | 8 min read


Training a machine learning (ML) model is the process of feeding a machine learning algorithm training data from which it can learn. Trained ML models can benefit businesses in numerous ways: quickly processing huge volumes of data, identifying patterns, finding anomalies, or testing correlations that would be difficult for a human to do unaided.

What is Model Training?

Model training is at the heart of the data science development lifecycle, where the data science team works to fit the best weights and biases to an algorithm to minimize the loss function over the prediction range. The loss function quantifies how far the model's predictions fall from the target values, and it defines what the training process optimizes. A data science team may use different types of loss functions depending on the project objectives, the type of data used and the type of algorithm.
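To make this concrete, here is a minimal sketch in Python (using NumPy and an illustrative, made-up dataset) of how a loss function scores a simple linear model. Training amounts to searching for the weight and bias that minimize this score.

```python
import numpy as np

# Illustrative training data: one feature x and a target y (made-up values).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.0, 6.2, 7.9])

def mse_loss(weight, bias):
    """Mean squared error of a simple linear model y_hat = weight * x + bias."""
    y_hat = weight * x + bias
    return np.mean((y - y_hat) ** 2)

# Training searches for the weight and bias that minimize the loss.
print(mse_loss(weight=1.0, bias=0.0))  # poor fit  -> higher loss
print(mse_loss(weight=2.0, bias=0.0))  # better fit -> lower loss
```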

When a supervised learning technique is used, model training creates a mathematical representation of the relationship between the data features and a target label. In unsupervised learning, it creates a mathematical representation of the relationships among the data features themselves.

Importance of Model Training

Model training is the primary step in machine learning, resulting in a working model that can then be validated, tested and deployed. The model's performance during training will largely determine how well it works when it is eventually put into an application for end users.

Both the quality of the training data and the choice of algorithm are central to the model training phase. In most cases, the available data is split into separate sets for training and then for validation and testing.

The selection of the algorithm is primarily determined by the end-use case. However, there are always additional factors that need to be considered, such as algorithm and model complexity, performance, interpretability, compute resource requirements, and speed. Balancing these requirements can make selecting an algorithm an involved and complicated process.


How To Train a Machine Learning Model

Training a model requires a systematic, repeatable process that makes the best use of your available training data and your data science team's time. Before you begin the training phase, you need to determine your problem statement, access your data set and clean the data that will be presented to the model.

In addition to this, you need to determine which algorithms you will use and what parameters (hyperparameters) they will run with. With all of this done, you can split your dataset into a training set and a testing set, then prepare your model algorithms for training.

Split the Dataset

Your initial training data is a limited resource that needs to be allocated carefully. Some of it can be used to train your model, and some of it can be used to test your model – but you can't use the same data for both steps. You can't properly test a model unless you have given it a new data set that it hasn't encountered before. Splitting the training data into two or more sets lets you train and then validate the model using a single source of data, which shows whether the model is overfit, meaning that it performs well with the training data but poorly with the test data.
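As a minimal sketch (assuming Python and scikit-learn, with a bundled example dataset standing in for your own data), a simple hold-out split looks like this:

```python
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

# A bundled example dataset; in practice this would be your own cleaned data.
X, y = load_diabetes(return_X_y=True)

# Hold out 20% of the data for testing; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)
```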

A common way of splitting the training data is to use cross-validation. In 10-fold cross-validation, for example, the data is split into ten sets, allowing you to train and test the model ten times. To do this:

  1. Split the data into ten equal parts or folds.
  2. Designate one fold as the hold-out fold.
  3. Train the model on the other nine folds.
  4. Test the model on the hold-out fold.

Repeat this process ten times, each time selecting a different fold to be the hold-out fold. The average performance across the ten hold-out folds is your performance estimate, called the cross-validated score.
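In practice, you rarely code these folds by hand. Here is a minimal sketch (assuming scikit-learn, with ridge regression and the same example dataset as placeholders) of scoring a model with 10-fold cross-validation:

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

# Train and test a ridge regression model across ten folds.
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=10, scoring="r2")

# The average of the ten hold-out scores is the cross-validated score.
print(scores.mean(), scores.std())
```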

Select Algorithms to Test

In machine learning, there are thousands of algorithms to choose from, and there is no sure way to determine which will be the best for any specific model. In most cases, you will likely try dozens, if not hundreds, of algorithms in order to find the one that results in an accurate working model. Selecting candidate algorithms will often depend on:

  • Size of the training data.
  • Accuracy and interpretability of the required output.
  • Speed of training required, which often trades off against accuracy.
  • Linearity of the training data.
  • Number of features in the data set.
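One common way to narrow the field is to cross-validate several candidates with default settings and compare their scores. A minimal sketch (assuming scikit-learn, with illustrative candidates and the same example dataset) might look like this:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

# A few candidate algorithms with default hyperparameters.
candidates = {
    "ridge": Ridge(),
    "lasso": Lasso(),
    "random_forest": RandomForestRegressor(random_state=42),
}

# Cross-validate each candidate and compare the average scores.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")
```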

Tune the Hyperparameters

Hyperparameters are the high-level attributes set by the data science team before the model is assembled and trained. While a model can learn many of its parameters from the training data, it cannot learn its own hyperparameters.

As an example, if you are using a regression algorithm, the model can determine the regression coefficients itself by analyzing the data. However, it cannot dictate the strength of the penalty it should use to regularize an overabundance of variables. As another example, a model using the random forest technique can determine where decision trees will be split, but the number of trees to be used needs to be tuned beforehand.
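As a minimal sketch (assuming scikit-learn, with illustrative values), the hyperparameters from these two examples are set when the model object is created, before it sees any data:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# The regression coefficients are learned from the data, but the strength of
# the regularization penalty (alpha) is a hyperparameter set beforehand.
ridge = Ridge(alpha=0.5)

# A random forest learns where to split each tree, but the number of trees
# (n_estimators) is a hyperparameter chosen up front.
forest = RandomForestRegressor(n_estimators=200, random_state=42)
```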

Fit and Tune Models

Now that the data is prepared and the model’s hyperparameters have been determined, it’s time to start training the models. The process is essentially to loop through the different algorithms using each set of hyperparameter values you’ve decided to explore. To do this:

  1. Split the data.
  2. Select an algorithm.
  3. Tune the hyperparameter values.
  4. Train the model.
  5. Select another algorithm and repeat steps 3 and 4.

Next, select another set of hyperparameter values you want to try for the same algorithm, cross-validate it again and calculate the new score. Once you have tried each hyperparameter value, you can repeat these same steps for additional algorithms.
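Tools such as scikit-learn's GridSearchCV automate this loop over hyperparameter values. Here is a minimal sketch (with illustrative grids and the same example dataset; in practice you would fit on the training split only) that cross-validates each candidate algorithm over its grid and reports the best version of each:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = load_diabetes(return_X_y=True)

# One hyperparameter grid per candidate algorithm.
searches = {
    "ridge": GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=5),
    "random_forest": GridSearchCV(
        RandomForestRegressor(random_state=42),
        {"n_estimators": [100, 300], "max_depth": [None, 5]},
        cv=5,
    ),
}

# Fit each search; the best version of each algorithm advances to final testing.
for name, search in searches.items():
    search.fit(X, y)
    print(name, search.best_params_, round(search.best_score_, 3))
```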

Think of these trials as track and field heats. Each algorithm has demonstrated what it can do with the different hyperparameter values. Now you can select the best version from each algorithm and send them on to the final competition.

Choose the Best Model

Now it’s time to test the best versions of each algorithm to determine which gives you the best model overall.

  1. Run each finalist model with the test data.
  2. Make predictions on the test data.
  3. Identify the ground truth values of the target variable in the test data.
  4. Calculate performance metrics by comparing your predictions with the ground truth.

Once the testing is done, you can compare the finalists' performance to determine which are the better models. The overall winner should have performed well (if not the best) in training as well as in testing. It should also perform well on your other performance metrics (like speed and empirical loss), and – ultimately – it should adequately solve or answer the question posed in your problem statement.
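Here is a minimal sketch (assuming scikit-learn, with two hypothetical finalists and the same example dataset) of scoring finalists against held-out test data:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Hypothetical finalists: the best version of each algorithm after tuning.
finalists = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=42),
}

# Compare each finalist's predictions against the ground truth (y_test).
for name, model in finalists.items():
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print(name, r2_score(y_test, preds), mean_squared_error(y_test, preds))
```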

Systematic Approach to Model Training

Using a systematic and repeatable model training process is of paramount importance for any organization planning to build successful machine learning models at scale. Central to this is having all of your resources, tools, libraries and documentation in a single enterprise platform that fosters collaboration instead of hindering it.


David Weedmark is a published author who has worked as a project manager, software developer and network security consultant.
