Release Date: 10.10.2023
1. ENHANCEMENTS
- Preserving inference pod replica counts: We have introduced the capability to preserve the number of inference pods that were imperatively scaled up after the initial training request. This ensures the system can continue to handle the same inference load as before. This enhancement covers two cases:
- Retraining: When retraining is requested for a previously trained MLP model with the same model ID, the system now automatically recreates the same number of inference pods once retraining completes successfully.
- Revive: When an inference request is made to an MLP model that was terminated due to a timeout, the system now automatically recreates the same number of inference pods after the training process completes successfully (the training data must be unchanged for this to apply).
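The behavior above can be sketched as a small registry that remembers the last imperatively set replica count per model ID and restores it when training completes. This is an illustrative assumption only; the class and method names (`ReplicaRegistry`, `record_scale`, `on_training_complete`) are hypothetical and not the product's actual implementation.

```python
class ReplicaRegistry:
    """Remembers how many inference pods each model ID was scaled to.

    Hypothetical sketch: names and structure are illustrative, not the
    product's real internals.
    """

    def __init__(self):
        self._replicas = {}  # model_id -> last known replica count

    def record_scale(self, model_id, replicas):
        # Called whenever inference pods are imperatively scaled up.
        self._replicas[model_id] = replicas

    def on_training_complete(self, model_id, default=1):
        # After a retraining or revive succeeds, return the preserved
        # replica count instead of falling back to the default.
        return self._replicas.get(model_id, default)


registry = ReplicaRegistry()
registry.record_scale("model-123", 4)  # operator scales inference to 4 pods
# ...model expires on timeout, then retraining with the same ID completes...
print(registry.on_training_complete("model-123"))  # restores 4 pods
```

Both the retraining and revive paths would consult the same record, which is why the replica count survives a timeout-triggered termination.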
2. IMPROVEMENTS
- Bugfix: Status of published models created before v1.68
- Previously, when a retraining request was made without any changes for a model created before version 1.68 that had expired due to timeout, the published model's status remained 'Pending' even after training completed successfully. This release resolves the issue, ensuring that the status correctly reflects training completion for pre-v1.68 models.