Legacy
Legacy deployment is the traditional approach to putting artificial intelligence models into production. It is typically used when pre-trained models must be served from dedicated servers or local systems. The legacy workflow consists of the following stages:
Development:
During development, the model is trained and optimized on training data.
This phase covers selecting a model architecture, tuning hyperparameters, applying training techniques, and evaluating model performance.
Preparation for Deployment:
After training, the model must be converted to an executable format and prepared for deployment.
This involves exporting the model to a standard format such as TensorFlow SavedModel or ONNX, and configuring it for execution in the target environment.
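As a rough illustration of the preparation step, the sketch below packages serialized model weights together with metadata describing the export format and version, so the serving side knows how to load the artifact. This is a hypothetical, stdlib-only stand-in: an actual conversion would use framework tooling such as `tf.saved_model.save` or `torch.onnx.export`.

```python
import io
import json
import tarfile
import tempfile

def package_model(weights: bytes, fmt: str, version: str, out_path: str) -> str:
    """Bundle serialized weights plus metadata into a deployable archive."""
    metadata = {"format": fmt, "version": version}
    with tarfile.open(out_path, "w:gz") as tar:
        for name, payload in [
            ("model.bin", weights),
            ("metadata.json", json.dumps(metadata).encode()),
        ]:
            info = tarfile.TarInfo(name=name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))
    return out_path

# Example: package dummy weights labeled as an ONNX export.
archive = package_model(
    b"\x00" * 16, fmt="onnx", version="1.0",
    out_path=tempfile.mktemp(suffix=".tar.gz"),
)
with tarfile.open(archive) as tar:
    names = sorted(tar.getnames())
print(names)  # ['metadata.json', 'model.bin']
```

Shipping metadata alongside the weights is what lets the production runtime pick the right loader without guessing at the file format.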
Deployment:
In this stage, the prepared model is deployed to servers or local systems.
This includes installing and configuring the runtime needed for execution, transferring the model artifacts to the production environment, and running the model so it can respond to incoming requests.
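The serving side of this stage can be sketched as a small inference endpoint. The example below is a minimal, stdlib-only sketch: the "model" is a hypothetical linear scorer with fixed weights standing in for a loaded SavedModel/ONNX artifact, and a real legacy deployment would typically use a dedicated serving system such as TensorFlow Serving or ONNX Runtime instead.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a loaded model: a linear scorer with fixed weights.
WEIGHTS = [0.5, -0.25]

def predict(features):
    return sum(w * x for w, x in zip(WEIGHTS, features))

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON request body, run the model, return a JSON response.
        length = int(self.headers["Content-Length"])
        features = json.loads(self.rfile.read(length))["features"]
        body = json.dumps({"score": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Run the server on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Example request against the running model server.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}",
    data=json.dumps({"features": [4.0, 8.0]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)
server.shutdown()
print(result)  # {'score': 0.0}
```

The key point is the separation: the model is loaded once at startup, and each request only runs inference, which is what makes this pattern viable on long-running servers.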
Maintenance and Monitoring:
After deployment, the model must be maintained and its behavior monitored.
This includes tracking prediction performance and resource consumption, troubleshooting failures, and applying updates to improve performance.
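One common monitoring pattern is to wrap the model so every prediction call records latency and error counts. The sketch below is a hypothetical, minimal example of that idea; production systems would usually export such metrics to a dedicated monitoring stack rather than keep them in memory.

```python
import time
from collections import deque

class MonitoredModel:
    """Wrap a model callable and record per-call latency and errors."""

    def __init__(self, model_fn, window=100):
        self.model_fn = model_fn
        self.latencies = deque(maxlen=window)  # rolling window, seconds
        self.errors = 0

    def predict(self, x):
        start = time.perf_counter()
        try:
            return self.model_fn(x)
        except Exception:
            self.errors += 1
            raise
        finally:
            self.latencies.append(time.perf_counter() - start)

    def stats(self):
        avg = sum(self.latencies) / len(self.latencies) if self.latencies else 0.0
        return {
            "calls": len(self.latencies),
            "avg_latency_s": avg,
            "errors": self.errors,
        }

# Example: wrap a toy model and inspect its runtime statistics.
monitored = MonitoredModel(lambda x: x * 2)
for value in range(5):
    monitored.predict(value)
print(monitored.stats()["calls"])  # 5
```

Because the wrapper counts failures even when it re-raises the exception, operators can alert on rising error rates without changing how callers handle errors.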
In summary, legacy deployment runs pre-trained models on servers or local systems so that they can serve predictions as part of production systems.