Creating a machine learning model and deploying it to production takes effort. We previously discussed the various ways to deploy models in production. However, deployment is not the end; it is just the beginning, and the real challenges start here. We have no control over the data the model sees in the live environment. The data might change, and we need to be ready to detect that change and upgrade our model before it becomes obsolete. In this piece, we will discuss some ways to monitor model performance on an ongoing basis.
A machine learning model is built on a set of input training data with various attributes. So, the most important check is whether the data distribution the model was trained on still holds for the actual data in the real-world environment. A mismatch that develops between the two is known as concept drift. The change in data might be sudden, or it might happen gradually over time. So, it is essential to identify the change patterns and fix the model before performance suffers.
Once the model is deployed in a production environment, we need to take the following steps to keep our models healthy and useful for their end users.
Before deploying a machine learning model in production, devise the performance evaluation metrics that should be monitored over time, and determine how frequently the model should be refreshed. There is no formal, universal strategy for estimating the required changes. For example, in a time series problem, the data might change over time. In the case of social media, the data is dynamic and can change at any moment. In the consumer goods industry, customer-facing data changes over a period that might be monthly or tied to promotional activities. In enterprise business, it can be quarterly or whenever financial results are released. So, we need to devise a model performance estimation strategy that takes into account the enterprise and industry for which the model is developed, along with the nature of its input features and data.
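One way to make such a strategy concrete is to write it down as a small configuration object that the monitoring jobs read. The sketch below is purely illustrative: the class name, fields, and cadence values are assumptions chosen to mirror the scenarios above, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MonitoringPlan:
    """Hypothetical per-model monitoring strategy."""
    metrics: tuple            # which summary metrics to track over time
    check_every_days: int     # how often monitoring jobs run
    refresh_every_days: int   # scheduled model refresh cadence

# Illustrative cadences for the industries mentioned above (assumed values):
social_media   = MonitoringPlan(("accuracy", "target_drift"), check_every_days=1,  refresh_every_days=7)
consumer_goods = MonitoringPlan(("accuracy", "feature_drift"), check_every_days=7,  refresh_every_days=30)
enterprise     = MonitoringPlan(("accuracy",),                 check_every_days=30, refresh_every_days=90)
```

The point is not the specific numbers but that the cadence is an explicit, reviewable decision per model rather than an afterthought.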
The first step after deploying a machine learning model is to monitor its performance. A shift in the distribution of predictions can serve as a summary metric for monitoring performance over time. If the predictions shift, that may be a sign of model performance degradation. This shift in predictions is also termed target drift.
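One common way to quantify such a shift is the Population Stability Index (PSI), which compares the distribution of prediction scores at deployment time against current scores. Below is a minimal, self-contained sketch; the binning scheme and the conventional alert thresholds (PSI below 0.1 is stable, above 0.25 warrants investigation) are illustrative assumptions.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples.
    Bin edges are derived from the expected (deployment-time) scores."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(scores):
        counts = [0] * bins
        for s in scores:
            counts[sum(s > e for e in edges)] += 1  # bin index for this score
        # small floor avoids log(0) / division by zero for empty bins
        return [max(c / len(scores), 1e-6) for c in counts]

    e_frac, a_frac = fractions(expected), fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_frac, a_frac))

baseline = [i / 100 for i in range(100)]          # scores at deployment time
drifted  = [min(1.0, s + 0.3) for s in baseline]  # current scores, shifted upward

print(psi(baseline, baseline) < 0.1)   # True: no target drift
print(psi(baseline, drifted) > 0.25)   # True: significant target drift
```

A scheduled job computing PSI on each day's predictions gives an early, label-free warning, since it needs only the scores, not the ground truth.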
We should also monitor the models for feature drift. A change in the data distribution of an input feature is a sign of feature drift that will affect model performance. There might be cases where thousands of features are used; if monitoring all of them sounds like a daunting task, we can monitor a few critical features whose change in distribution would skew the model results most severely.
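For a numeric feature, one standard drift check is the two-sample Kolmogorov-Smirnov statistic: the largest gap between the empirical CDFs of the training-time sample and the live sample. The pure-Python sketch below is illustrative; in practice a library routine and a principled threshold would be used.

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    gap between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # fraction of the sample that is <= x
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in sorted(set(a + b)))

train_feature = [i / 50 for i in range(50)]            # distribution at training time
live_similar  = [i / 50 + 0.01 for i in range(50)]     # nearly identical live data
live_drifted  = [i / 50 + 0.5 for i in range(50)]      # live data shifted upward

print(ks_statistic(train_feature, live_similar) < 0.05)  # True: no feature drift
print(ks_statistic(train_feature, live_drifted) > 0.4)   # True: clear feature drift
```

Running this per critical feature, and alerting when the statistic exceeds an agreed threshold, turns "watch the important features" into a concrete daily check.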
We can also monitor for:
- An increasing percentage of missing values over time
- Changes in the levels of categorical attributes, such as new, previously unseen categories
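Both of these checks are simple to automate. The sketch below is a minimal illustration (the column data and function names are hypothetical), treating `None` as a missing value and comparing the live category set against the training-time set.

```python
def missing_rate(column):
    """Fraction of missing (None) values in a column."""
    return sum(v is None for v in column) / len(column)

def unseen_levels(train_column, live_column):
    """Categorical levels present in live data but absent at training time."""
    return set(live_column) - set(train_column)

# Hypothetical example: a 'city' feature observed at training time vs. in production.
train_city = ["NY", "LA", "SF", "NY"]
live_city  = ["NY", "LA", None, "Austin"]

print(missing_rate(live_city))                                   # 0.25
print(unseen_levels(train_city, [v for v in live_city if v is not None]))  # {'Austin'}
```

Tracking these two numbers over time and alerting on jumps catches many upstream data-pipeline problems before they show up as accuracy loss.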
If we observe degraded model performance, or reach the scheduled model refresh time, then it is time to revisit the model design. Simply retraining the model on fresh data is easy. However, we might also need to think of additional features that could improve the model's performance. Similarly, while correcting a degraded model, we should analyze the root cause of the problem and find an appropriate solution. For example, if the degradation is due to drift in a particular feature, we need to dive deep into that feature and restructure it so that it becomes more robust and sustainable over time.
The final step is to rebuild the model with the new or modified set of features and model parameters. On paper, this is the most straightforward step of all: the only requirement is to find an optimal model that yields the best accuracy, generalizes well under some data drift, and does not become a bottleneck for IT resources. In practice, this one-line requirement can take days or months to achieve.
To conclude, model monitoring is a continuous process. Devising the right strategy for it is crucial so that we monitor the right elements. Restructuring and rebuilding a model to beat the previous champion model is essential for a machine learning or data science program's success.
A machine learning project is not a one-time implementation activity. Providing robust results every day that drive the business forward is what defines the benefits of machine learning and data science.