Week 11 - Resilience Patterns: Stateless Serving Function and Continued Model Evaluation; Reproducibility Patterns: Transform

Lecture recording (Nov 19, 2024) here.

Lab recording (Nov 21, 2024) here.

Introduction

This week we study two resilience patterns: the stateless serving function pattern and the continued model evaluation pattern. The stateless serving function pattern exports the machine learning model as a stateless function so that it can be shared by multiple clients in a scalable way; this matters because production machine learning systems must synchronously handle thousands to millions of prediction requests per second. The continued model evaluation pattern detects when a deployed model is no longer fit for purpose by continually monitoring its predictions and evaluating its performance; this matters because the performance of deployed models degrades over time due to data drift, concept drift, or other changes to the pipelines that feed data to the model.

We will also look at the transform design pattern, a reproducibility pattern. It ensures that data transformations in data processing systems can be consistently reproduced, providing data reliability, traceability, and ease of debugging.

Videos

The Stateless Serving Function Pattern Machine Learning Design Patterns | Google Executive | Investor | Meet the Author (What is your favourite design pattern 7:37-10:25)
The Continued Model Evaluation Pattern Machine Learning Design Patterns | Dr Ebin Deni Raj (Design Patterns for Resilient Serving- Continuous Model Evaluation 1:12:50-1:32:30)
Machine Learning Design Patterns (16:10-26:17)
Pipeline Design Patterns What are some common data pipeline design patterns? (Extract, Transform and Load)

Assignment(s)

Assignment 5 - Multimodal Input: An Autonomous Driving System
Assignment 6 - Investigation of Design Patterns for Machine Learning

The Stateless Serving Function Design Pattern

The Rationale

The rationale behind the stateless serving function design pattern is to enable scalable and efficient handling of incoming requests in a distributed computing environment. In this pattern, each request is treated independently, and the server functions do not maintain any state or store any context about previous requests. Instead, they focus solely on processing the current request and generating a response.

When we implement the stateless design pattern, we create classes and objects that do not retain state changes: each call uses the object in its original, unmodified form. In this context, state refers to the values of an object's variables at a particular moment in time; because a stateless object carries no such values between calls, there is no history of states to manage. Using this design pattern, a production ML system can synchronously handle millions of prediction requests per second.

Stateless vs Stateful

For a discussion on stateful vs stateless, see RedHat: Stateful vs stateless.
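
To make the distinction concrete, here is a minimal C++ sketch (our own illustration with hypothetical class names, not taken from the article above). The stateful server's response depends on call history; the stateless server's response depends only on the current request:

  #include <iostream>
  #include <string>

  // Stateful: the result of serve() depends on how often it was called before.
  class StatefulServer {
      int requestCount = 0;  // retained state
  public:
      std::string serve(const std::string& request) {
          ++requestCount;
          return request + " (request #" + std::to_string(requestCount) + ")";
      }
  };

  // Stateless: the result depends only on the current request, so any number
  // of identical instances (or replicas on other machines) behave the same.
  class StatelessServer {
  public:
      std::string serve(const std::string& request) const {
          return "echo: " + request;
      }
  };

  int main() {
      StatefulServer stateful;
      StatelessServer stateless;
      std::cout << stateful.serve("ping") << "\n";   // request #1
      std::cout << stateful.serve("ping") << "\n";   // same input, different output
      std::cout << stateless.serve("ping") << "\n";  // always the same output
      std::cout << stateless.serve("ping") << "\n";
  }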

The UML

Here is the UML diagram for the stateless serving function pattern:

  +-----------------+
  |   Application   |
  +-----------------+
  | + serve(request)|
  +-----------------+
          |
          V
  +--------------------+
  |  RequestProcessor  |
  +--------------------+
  | + process(request) |
  +--------------------+
          |
          V
  +-------------------+
  |     Controller    |
  +-------------------+
  | + handle(request) |
  +-------------------+
          |
          V
  +------------------+
  |    Business      |
  +------------------+
  | + doSomething()  |
  +------------------+  


Here are the components of the stateless serving function design pattern:
  1. Application: The Application class represents the server application. It has a serve() function that takes a Request object and passes it to the RequestProcessor.
  2. RequestProcessor: The RequestProcessor class handles the processing of the request and invokes the Controller.
  3. Controller: The Controller class handles the request and delegates the business logic to the Business class.
  4. Business: The Business class contains the actual business logic that needs to be performed.
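
Before opening the linked files, the following compact C++ sketch shows one way the four components can be wired together. It follows the class and method names in the UML, but it is an illustrative skeleton, not the full linked implementation:

  #include <iostream>
  #include <string>

  class Business {
  public:
      void doSomething(const std::string& request) {
          std::cout << "Business logic for: " << request << "\n";
      }
  };

  class Controller {
      Business business;
  public:
      void handle(const std::string& request) { business.doSomething(request); }
  };

  class RequestProcessor {
      Controller controller;
  public:
      void process(const std::string& request) { controller.handle(request); }
  };

  class Application {
      RequestProcessor processor;
  public:
      void serve(const std::string& request) { processor.process(request); }
  };

  int main() {
      Application app;
      app.serve("GET /predict");  // no state is retained between calls
      app.serve("GET /health");
  }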

Code Example - Stateless Serving Function Pattern

This example is a code representation of the above UML:
C++: Stateless.cpp.
C#: Stateless.cs.
Java: Stateless.java.
Python: Stateless.py.

Common Usage

The following are some common usages of the stateless serving function pattern:

  1. Microservices architecture: Stateless serving functions are often used in microservices-based architectures. Each microservice can be implemented as a stateless function, focusing on a specific functionality or business logic. This enables independent scaling and deployment of microservices, making it easier to manage and update individual components of a complex system.
  2. RESTful APIs: Stateless serving functions are well-suited for building RESTful APIs. Each API endpoint can be implemented as a separate function, handling incoming requests independently. This allows for horizontal scaling and efficient resource utilization, especially when the API experiences varying levels of traffic. Stateless functions are also compatible with popular API gateway services and can be easily integrated into API management platforms.
  3. Web applications: Stateless serving functions can power server-side logic in web applications. They can handle incoming requests, process data, and generate responses without the need for maintaining server-side sessions or shared state. This enables efficient scaling and distribution of web application workloads while maintaining responsiveness and minimizing resource usage.
  4. Event-driven systems: Stateless functions are frequently used in event-driven architectures and systems that process real-time events or messages. They can be triggered by events such as messages from message queues, updates in data streams, or events from external systems. Stateless functions process these events independently, allowing for scalability and parallel processing of events.
  5. Data processing pipelines: Stateless serving functions are suitable for building data processing pipelines, especially in scenarios involving batch processing or stream processing. Each function can perform a specific data transformation or analysis task, allowing for efficient parallel processing of data. By chaining multiple functions together, complex data processing pipelines can be constructed and scaled as needed.
  6. Serverless computing: Stateless serving functions are a fundamental building block of serverless computing platforms. In serverless architectures, functions are executed in response to specific events or triggers, and resources are allocated dynamically. By utilizing stateless functions, developers can focus on writing the business logic without worrying about infrastructure management or scalability.
  7. Machine learning inference: Stateless serving functions are used for serving machine learning models in production. When a prediction request comes in, the function loads the required model, performs the inference, and returns the prediction. This allows for efficient scaling and parallel execution of predictions, ensuring that the system can handle high volumes of requests. A minimal sketch of this usage follows the list.
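
Here is a minimal sketch of usage 7. It assumes a toy stand-in for the model (a hard-coded linear scorer) rather than a model loaded from disk; the point is only that each prediction call is a pure function of its input:

  #include <cstddef>
  #include <iostream>
  #include <vector>

  // Hypothetical stand-in for an exported model: a pure function of its input.
  // In a real system this would wrap a model loaded from a file or registry.
  double predict(const std::vector<double>& features) {
      const std::vector<double> weights = {0.5, -0.25, 1.0};  // assumed weights
      double score = 0.0;
      for (std::size_t i = 0; i < features.size() && i < weights.size(); ++i)
          score += weights[i] * features[i];
      return score;
  }

  int main() {
      // Each request is handled independently; nothing is shared or remembered
      // between calls, so many instances can serve predictions in parallel.
      std::cout << predict({1.0, 2.0, 3.0}) << "\n";
      std::cout << predict({0.0, 1.0, 0.5}) << "\n";
  }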

Code Problem - Server Handler

In this example, a server application handles multiple types of requests using a stateless serving function design pattern. The Request class represents a request made to the server and provides a method to retrieve the request details. The Business class simulates processing the request by performing some business logic based on the request. In this example, it simply prints the request. The Controller class handles the request and delegates it to the Business class for processing. The RequestProcessor class maintains a mapping of endpoints to their respective controllers. It extracts the endpoint from the request and finds the corresponding controller to handle the request. The Application class serves as the entry point of the program. It allows registering controllers for specific endpoints and processes incoming requests.

In the main() function, we create an instance of the Application class and register two controllers for different endpoints. We then create multiple requests and pass them to the serve() method of the application, which delegates the processing to the appropriate controller based on the request's endpoint. When you run this program, you will see the requests being processed by the respective controllers based on their endpoints. If an invalid endpoint is provided, an error message will be displayed.
Request.h,
Business.h,
Controller.h,
RequestProcessor.h,
RequestProcessor.cpp,
Application.h,
Server.cpp.

Code Problem - Socket Based Server

Sample code for a simple socket-based server for handling requests is given below. This version uses Winsock for socket programming on Windows. Note that you'll need to link against the Ws2_32.lib library.
MLModel.h,
StatelessServingFunction.h,
RequestHandler.h,
RequestHandler.cpp,
Server.h,
Server.cpp,
ServerMain.cpp.
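
For orientation, here is a heavily trimmed Winsock sketch of the accept-and-respond core of such a server (error handling mostly omitted, port 8080 assumed); the linked files add the ML model, the request handler classes, and proper cleanup:

  #include <winsock2.h>
  #include <cstring>
  #pragma comment(lib, "Ws2_32.lib")  // link against Ws2_32.lib

  int main() {
      WSADATA wsaData;
      if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0) return 1;

      SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
      sockaddr_in addr = {};
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = INADDR_ANY;
      addr.sin_port = htons(8080);  // assumed port
      bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
      listen(listener, SOMAXCONN);

      for (;;) {
          SOCKET client = accept(listener, nullptr, nullptr);
          if (client == INVALID_SOCKET) break;
          char buffer[1024] = {};
          if (recv(client, buffer, sizeof(buffer) - 1, 0) > 0) {
              // Stateless handling: the reply depends only on the request bytes.
              const char* reply = "prediction: 0.5\n";
              send(client, reply, static_cast<int>(std::strlen(reply)), 0);
          }
          closesocket(client);  // nothing about this client is remembered
      }
      closesocket(listener);
      WSACleanup();
  }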

The Continued Model Evaluation Design Pattern

The Rationale

The rationale for the continued model evaluation pattern is to ensure that machine learning models perform effectively and reliably over time. This pattern involves regularly evaluating and monitoring models after they have been deployed in a production environment. The goal is to assess model performance, detect potential issues or drift, and take appropriate actions to maintain or improve model accuracy and reliability.
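
Here is a minimal sketch of the idea, assuming that accuracy on freshly labeled feedback data is the monitored metric and that a fixed, hypothetical threshold triggers retraining:

  #include <iostream>
  #include <vector>

  struct Example { std::vector<double> features; int label; };

  // Hypothetical deployed model: predicts 1 if the first feature is positive.
  int predict(const Example& e) { return e.features[0] > 0.0 ? 1 : 0; }

  // Periodically score the deployed model on recently labeled data.
  double evaluateAccuracy(const std::vector<Example>& recent) {
      int correct = 0;
      for (const auto& e : recent)
          if (predict(e) == e.label) ++correct;
      return recent.empty() ? 1.0 : static_cast<double>(correct) / recent.size();
  }

  int main() {
      std::vector<Example> recent = {
          {{0.8}, 1}, {{-0.2}, 0}, {{0.1}, 0}, {{-0.5}, 1}  // labels have drifted
      };
      const double threshold = 0.9;  // assumed acceptable accuracy
      double accuracy = evaluateAccuracy(recent);
      std::cout << "accuracy: " << accuracy << "\n";
      if (accuracy < threshold)
          std::cout << "below threshold: trigger retraining or rollback\n";
  }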

The UML

Here is the UML diagram for the continued model evaluation pattern:

  +-------------------+
  |   ModelEvaluator  |
  +-------------------+
  | - model: Model    |
  +-------------------+
  | + evaluate(data)  |
  | + updateModel()   |
  +-------------------+
         /\
         |
         |
         |
         |
         |
  +-------------------+
  |       Model       |
  +-------------------+
  | - parameters      |
  +-------------------+
  | + predict(data)   |
  +-------------------+


Here are the components of the continued model evaluation design pattern:
  1. Model: The Model class represents the predictive model with its parameters and a predict() method that performs the actual prediction based on input data.
  2. ModelEvaluator: The ModelEvaluator class is responsible for evaluating the model using the evaluate() method and updating the model using the updateModel() method.

Code Example - Continued Evaluation Pattern

In the provided code, the Model class is implemented with a constructor to initialize model parameters and a predict() method to perform the prediction. The ModelEvaluator class holds an instance of the Model class and provides the evaluate() method to evaluate data using the model. It returns the prediction result based on the model's prediction method. The updateModel() method is used to update the model with new data or retrain the model. In the main() function, you can create a ModelEvaluator object, load the data, evaluate the model using the evaluate() method, and update the model using the updateModel() method. Remember to replace Data with the appropriate data type used in your implementation and adjust the methods and parameters according to your specific requirements.
C++: ContinuedEval.cpp.
C#: ContinuedEval.cs.
Java: ContinuedEval.java.
Python: ContinuedEval.py.

Common Usage

The following are some common usages of the continued model evaluation pattern:

  1. Model performance monitoring: Continuously evaluating model performance is crucial to ensure that it meets the desired accuracy and quality standards. Monitoring metrics such as accuracy, precision, recall, F1 score, or area under the curve (AUC) helps identify if the model's performance is degrading or if it is not meeting the desired thresholds. This pattern allows organizations to track the performance of their models and take appropriate actions when performance falls below acceptable levels.
  2. Data drift detection: Data used for training models can change over time, leading to data drift. By comparing the distribution of incoming data with the distribution of the training data, organizations can detect if the model is encountering significantly different data. This helps in identifying potential issues and taking corrective measures, such as retraining the model with updated data or adapting the model to the new data distribution. A simple sketch of this check follows the list.
  3. Concept drift detection: Concept drift refers to the scenario where the relationships between features and target variables change over time. Continuously evaluating models allows organizations to detect concept drift and take necessary actions to address it. This can involve retraining the model with updated data, adjusting model parameters, or even deploying an entirely new model that better captures the changed relationships.
  4. Bias and fairness analysis: Evaluating models in production helps uncover biases or fairness issues that may arise in real-world scenarios. By analyzing model predictions across different demographic groups or sensitive attributes, organizations can identify potential biases and take steps to mitigate them. This can involve adjusting training data, feature engineering, or implementing fairness-aware algorithms.
  5. Error analysis and improvement: Analyzing the errors made by the model can provide insights into its weaknesses and areas for improvement. By continuously evaluating the model, organizations can identify specific types of errors and patterns, allowing them to refine the model, enhance feature engineering, or apply ensemble techniques to improve overall performance.
  6. A/B testing and model version control: The continued evaluation pattern facilitates A/B testing of different model versions. By evaluating multiple models in parallel and comparing their performance, organizations can make data-driven decisions about which model version to deploy in production. It also helps track the performance of different versions over time and provides insights for model version control.
  7. Compliance and governance: Continued model evaluation is vital for maintaining compliance with regulatory requirements and governance standards. Organizations can assess whether models are behaving ethically, avoiding discriminatory or biased outcomes. Evaluating models helps ensure transparency, accountability, and adherence to legal and ethical frameworks.
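
The following sketch illustrates the drift check from usage 2 in the simplest possible form: it compares the mean of one incoming feature against the mean recorded at training time and flags drift when the shift exceeds an assumed tolerance. A real system would compare full distributions, for example with a Kolmogorov-Smirnov test:

  #include <cmath>
  #include <iostream>
  #include <numeric>
  #include <vector>

  double mean(const std::vector<double>& xs) {
      return xs.empty() ? 0.0
                        : std::accumulate(xs.begin(), xs.end(), 0.0) / xs.size();
  }

  int main() {
      // Summary statistic captured when the model was trained.
      const double trainingMean = 5.0;
      const double tolerance = 1.0;  // assumed acceptable shift

      // Feature values observed in production since the last check.
      std::vector<double> incoming = {7.2, 6.8, 7.5, 6.9, 7.1};

      double shift = std::abs(mean(incoming) - trainingMean);
      std::cout << "mean shift: " << shift << "\n";
      if (shift > tolerance)
          std::cout << "data drift detected: consider retraining on recent data\n";
  }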

Code Problem - Model Modification

In this example, we have a Data class that represents the input data, containing a vector of features. The Model class represents the model used for prediction, which consists of a vector of weights. The ModelEvaluator class is responsible for evaluating the model using the provided data and updating the model with new weights. In the main() function, we create a ModelEvaluator object with initial weights, load the input data, and evaluate the model using the evaluate() method. The prediction result is then printed to the console. Next, we update the model with new weights using the updateModel() method and evaluate the model again with the updated weights. The updated prediction result is printed to the console.

You can customize the example by modifying the number of features, adding additional evaluation logic, or adjusting the model's prediction mechanism to suit your specific requirements.
Data.h,
Model.h,
Model.cpp,
ModelEvaluator.h,
ModelEvaluator.cpp,
ModelMod.cpp.
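
The following is a condensed, single-file sketch of what the linked files implement; the exact class layout and member names in the linked code may differ:

  #include <iostream>
  #include <utility>
  #include <vector>

  struct Data { std::vector<double> features; };

  class Model {
      std::vector<double> weights;
  public:
      explicit Model(std::vector<double> w) : weights(std::move(w)) {}
      void setWeights(std::vector<double> w) { weights = std::move(w); }
      double predict(const Data& d) const {
          double sum = 0.0;  // weighted sum of the input features
          for (std::size_t i = 0; i < weights.size() && i < d.features.size(); ++i)
              sum += weights[i] * d.features[i];
          return sum;
      }
  };

  class ModelEvaluator {
      Model model;
  public:
      explicit ModelEvaluator(std::vector<double> w) : model(std::move(w)) {}
      double evaluate(const Data& d) const { return model.predict(d); }
      void updateModel(std::vector<double> w) { model.setWeights(std::move(w)); }
  };

  int main() {
      ModelEvaluator evaluator({0.5, 1.5});
      Data data{{2.0, 4.0}};
      std::cout << "prediction: " << evaluator.evaluate(data) << "\n";          // 7
      evaluator.updateModel({1.0, 1.0});
      std::cout << "updated prediction: " << evaluator.evaluate(data) << "\n";  // 6
  }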

Code Problem - Linear Regression Model Modification

This example is similar to the previous one. The code linked below creates a simple linear regression model and continuously evaluates and updates it based on a stream of data. Additionally, multithreading is used to simulate concurrent evaluation and updating; a condensed sketch of that concurrency follows the file list.
LinearRegressionModel.h,
DataStreamGenerator.h,
ModelEvaluator.h,
ModelUpdater.h,
LinearRegressionModMain.cpp.
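
Here is a minimal sketch of the concurrency aspect, assuming a single-parameter linear model guarded by a mutex; the linked files separate these responsibilities into dedicated evaluator and updater classes:

  #include <chrono>
  #include <iostream>
  #include <mutex>
  #include <thread>

  class LinearModel {
      double slope = 1.0;
      mutable std::mutex m;  // guards concurrent evaluate/update access
  public:
      double predict(double x) const {
          std::lock_guard<std::mutex> lock(m);
          return slope * x;
      }
      void update(double newSlope) {
          std::lock_guard<std::mutex> lock(m);
          slope = newSlope;
      }
  };

  int main() {
      LinearModel model;

      // Evaluator thread: repeatedly scores an incoming data point.
      std::thread evaluator([&model] {
          for (int i = 0; i < 5; ++i) {
              std::cout << "prediction for 2.0: " << model.predict(2.0) << "\n";
              std::this_thread::sleep_for(std::chrono::milliseconds(50));
          }
      });

      // Updater thread: simulates retraining by nudging the parameter.
      std::thread updater([&model] {
          for (int i = 1; i <= 5; ++i) {
              model.update(1.0 + 0.1 * i);
              std::this_thread::sleep_for(std::chrono::milliseconds(50));
          }
      });

      evaluator.join();
      updater.join();
  }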

The Transform Design Pattern

The transform design pattern for machine learning focuses on ensuring the reproducibility and consistency of data transformations, which are critical for training, testing, and deploying models. This pattern emphasizes deterministic transformations, version control, and environmental consistency to maintain data integrity and facilitate debugging.

The Rationale

The problem is that the inputs to a machine learning model are not the features that the machine learning model uses in its computations. In a text classification model, for example, the inputs are the raw text documents and the features are the numerical embedding representations of this text. When we train a machine learning model, we train it with features that are extracted from the raw inputs. The solution is to explicitly capture the transformations applied to convert the model inputs into features.
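
Here is a toy sketch of that solution for a single numeric input. The transformation from raw input to model feature is captured in one named function (the choice of log(1 + x) is an arbitrary assumption), so training and serving cannot silently apply different transformations:

  #include <cmath>
  #include <iostream>

  // The captured transform: converts a raw input into the feature the model
  // actually consumes. Training and serving must both call this same function.
  double transform(double rawInput) {
      return std::log1p(rawInput);  // assumed feature: log(1 + x)
  }

  double predict(double feature) {
      return 2.0 * feature;  // hypothetical trained model
  }

  int main() {
      double rawInput = 9.0;
      // At serving time we apply the recorded transform, never an ad-hoc copy.
      double feature = transform(rawInput);
      std::cout << "prediction: " << predict(feature) << "\n";
  }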

The UML

The following is a very basic UML diagram of the transform design pattern.

  +-----------------------------------------------+
  |              Raw Data Source                  |
  |  - Collect data from various sources          |
  +-----------------------------------------------+
                     |
                     v
  +-----------------------------------------------+
  |           Data Ingestion Layer                |
  |  - Ingest and store raw data                  |
  |  - Ensure immutability                        |
  +-----------------------------------------------+
                     |
                     v
  +-----------------------------------------------+
  |       Data Transformation Engine              |
  |  - Apply transformations to data              |
  |  - Ensure transformations are deterministic   |
  |  - Version control for transformation logic   |
  +-----------------------------------------------+
                     |
                     v
  +-----------------------------------------------+
  |           Feature Store Layer                 |
  |  - Store transformed features                 |
  |  - Ensure features are versioned and immutable|
  +-----------------------------------------------+
                     |
                     v
  +-----------------------------------------------+
  |      Machine Learning Model Training          |
  |  - Use versioned features for training        |
  |  - Ensure reproducibility of training process |
  +-----------------------------------------------+
                     |
                     v
  +-----------------------------------------------+
  |       Model Serving and Deployment            |
  |  - Deploy trained models                      |
  |  - Use versioned features for prediction      |
  +-----------------------------------------------+
                     |
                     v
  +-----------------------------------------------+
  |      Monitoring and Feedback Loop             |
  |  - Monitor model performance                  |
  |  - Collect feedback for continuous improvement|
  +-----------------------------------------------+

Here are the components of the transform design pattern:
  1. Raw Data Source: Collects data from various sources.
  2. Data Ingestion Layer: Ingests and stores raw data, and ensures its immutability.
  3. Data Transformation Engine: Applies deterministic transformations to the data, with version control for the transformation logic.
  4. Feature Store Layer: Stores the transformed features and ensures they are versioned and immutable.
  5. Machine Learning Model Training: Uses versioned features for training and ensures the training process is reproducible.
  6. Model Serving and Deployment: Deploys trained models and uses the same versioned features for prediction.
  7. Monitoring and Feedback Loop: Monitors model performance and collects feedback for continuous improvement.

Code Example - Transform

Below is a simple example of using the transform design pattern for a machine learning workflow. This example demonstrates data normalization, which is a common transformation step. We'll use a basic class to handle the normalization of a dataset.
C++: Transform.cpp.
C#: Transform.cs.
Java: Normalizer.java, Transform.java.
Python: Transform.py.

Common Usage

The transform design pattern is widely used in machine learning for various purposes. Here are some common usages:

  1. Data Preprocessing
    Normalization/Standardization: Scaling data to a standard range or normal distribution to improve model performance.
    Encoding Categorical Variables: Converting categorical data into numerical format using techniques like one-hot encoding or label encoding.
    Feature Engineering: Creating new features from existing data to enhance model performance.
    Imputation: Filling in missing values using strategies like mean, median, mode, or predictive models.
  2. Data Augmentation
    Image Augmentation: Applying transformations like rotation, scaling, cropping, and flipping to increase the diversity of the training dataset.
    Text Augmentation: Techniques like synonym replacement, random insertion, and back translation to generate more diverse textual data.
  3. Dimensionality Reduction
    Principal Component Analysis (PCA): Reducing the dimensionality of the data while preserving as much variance as possible.
    t-Distributed Stochastic Neighbor Embedding (t-SNE): Reducing high-dimensional data into two or three dimensions for visualization purposes.
  4. Feature Extraction
    Text Vectorization: Converting text into numerical vectors using methods like TF-IDF, Word2Vec, or BERT embeddings.
    Image Feature Extraction: Using techniques like convolutional neural networks (CNNs) to extract features from images.
  5. Data Transformation Pipelines
    Pipeline Construction: Combining multiple preprocessing and transformation steps into a single pipeline to ensure consistency and reproducibility (a sketch follows this list).
    Cross-Validation: Using pipelines to ensure that data transformation steps are correctly applied during cross-validation to prevent data leakage.
  6. Model Interpretation
    Shapley Values: Transforming model predictions into contributions of each feature for interpretability.
    LIME (Local Interpretable Model-agnostic Explanations): Generating interpretable explanations for model predictions by transforming data into a locally linear representation.
  7. Model Deployment
    Inference Pipeline: Applying the same transformations used during training to new data during inference to maintain consistency.
    Online Feature Transformation: Real-time transformation of data in production systems to ensure incoming data matches the format expected by the model.
  8. Time Series Analysis
    Lag Features: Creating lagged versions of time series data for predictive modeling.
    Rolling Window Features: Calculating rolling statistics (e.g., mean, standard deviation) over a fixed window of past data points.
  9. Anomaly Detection
    Feature Scaling: Transforming features to detect anomalies more effectively.
    Dimensionality Reduction: Reducing feature space to identify outliers more easily.
  10. Synthetic Data Generation
    SMOTE (Synthetic Minority Over-sampling Technique): Generating synthetic samples to balance class distribution in imbalanced datasets.
    GANs (Generative Adversarial Networks): Creating synthetic data that mimics the distribution of the original dataset.
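
The following sketch illustrates pipeline construction from usage 5: transformation steps are composed into one ordered pipeline object so that the same sequence runs during training, cross-validation, and serving. The individual steps here are hypothetical stand-ins:

  #include <functional>
  #include <iostream>
  #include <vector>

  // A pipeline is an ordered list of transformations applied to every record.
  class TransformPipeline {
      std::vector<std::function<double(double)>> steps;
  public:
      void addStep(std::function<double(double)> step) {
          steps.push_back(std::move(step));
      }
      double apply(double value) const {
          for (const auto& step : steps) value = step(value);
          return value;
      }
  };

  int main() {
      TransformPipeline pipeline;
      pipeline.addStep([](double x) { return x - 5.0; });  // hypothetical centering
      pipeline.addStep([](double x) { return x / 2.0; });  // hypothetical scaling

      // The same pipeline object is reused at training and inference time,
      // which prevents transformation code from silently diverging.
      std::vector<double> raws = {3.0, 7.0, 11.0};
      for (double raw : raws)
          std::cout << raw << " -> " << pipeline.apply(raw) << "\n";
  }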

Code Problem - Data Normalization

The following code uses the transform design pattern for data normalization. The loadData function simulates the loading of numerical data: MultiTransform.cpp.
The Normalizer class performs data normalization.
The fit() method calculates the means and standard deviations of the features.
The transform() method normalizes a single feature vector using the stored means and standard deviations.
The fitTransform() method fits the normalizer to the data and then transforms it.
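
As a rough guide to the linked file, here is a minimal sketch of how such a Normalizer might look, assuming z-score normalization; member names and details may differ from MultiTransform.cpp:

  #include <cmath>
  #include <iostream>
  #include <vector>

  class Normalizer {
      std::vector<double> means, stdDevs;
  public:
      // fit(): compute per-feature means and standard deviations.
      void fit(const std::vector<std::vector<double>>& data) {
          std::size_t nFeatures = data.empty() ? 0 : data[0].size();
          means.assign(nFeatures, 0.0);
          stdDevs.assign(nFeatures, 0.0);
          for (const auto& row : data)
              for (std::size_t j = 0; j < nFeatures; ++j) means[j] += row[j];
          for (auto& m : means) m /= data.size();
          for (const auto& row : data)
              for (std::size_t j = 0; j < nFeatures; ++j)
                  stdDevs[j] += (row[j] - means[j]) * (row[j] - means[j]);
          for (auto& s : stdDevs) s = std::sqrt(s / data.size());
      }
      // transform(): normalize one feature vector with the stored statistics.
      std::vector<double> transform(const std::vector<double>& row) const {
          std::vector<double> out(row.size());
          for (std::size_t j = 0; j < row.size(); ++j)
              out[j] = stdDevs[j] > 0.0 ? (row[j] - means[j]) / stdDevs[j] : 0.0;
          return out;
      }
      // fitTransform(): fit to the data, then transform every row.
      std::vector<std::vector<double>> fitTransform(
              const std::vector<std::vector<double>>& data) {
          fit(data);
          std::vector<std::vector<double>> out;
          for (const auto& row : data) out.push_back(transform(row));
          return out;
      }
  };

  int main() {
      std::vector<std::vector<double>> data = {{1.0, 10.0}, {2.0, 20.0}, {3.0, 30.0}};
      Normalizer normalizer;
      for (const auto& row : normalizer.fitTransform(data)) {
          for (double v : row) std::cout << v << " ";
          std::cout << "\n";
      }
  }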