Version 1.68
  • 20 Oct 2023


Release Date: 10.10.2023

1. ENHANCEMENTS

  • Preserving inference pod replica count: We have introduced the capability to manage inference pods that are scaled up imperatively after the initial training request. The preserved replica count ensures the system can continue to handle the same inference load as before. This enhancement includes:
    • Retraining: When a previously trained MLP model is retrained under the same model ID, the system now automatically recreates the same number of inference pods once retraining completes successfully.
    • Revive: When an inference request is made to an MLP model that was terminated due to a timeout, the system now automatically recreates the same number of inference pods after training completes successfully (the training data must be unchanged for this process).
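The behavior above can be sketched as a registry that records each model's inference replica count and restores it once retraining or a revive succeeds. This is a minimal illustrative sketch; the class and method names (ReplicaRegistry, record_scale, restore_after_training) are assumptions, not the platform's actual internals.

```python
# Hypothetical sketch of replica-count preservation across retraining.
# All names here are illustrative, not the platform's real API.

class ReplicaRegistry:
    """Remembers how many inference pods each model was scaled to."""

    def __init__(self, default_replicas: int = 1):
        self.default_replicas = default_replicas
        self._replicas: dict[str, int] = {}

    def record_scale(self, model_id: str, replicas: int) -> None:
        # Called when a model's inference pods are scaled up
        # imperatively after the initial training request.
        self._replicas[model_id] = replicas

    def restore_after_training(self, model_id: str) -> int:
        # Called once retraining (or a revive) completes successfully:
        # return the previously recorded replica count, falling back
        # to the default for models that were never scaled manually.
        return self._replicas.get(model_id, self.default_replicas)


registry = ReplicaRegistry()
registry.record_scale("mlp-42", 3)   # user scales up to 3 inference pods
# ... model times out, is revived, and retraining succeeds ...
print(registry.restore_after_training("mlp-42"))       # restored count: 3
print(registry.restore_after_training("mlp-unknown"))  # default count: 1
```

The same lookup serves both the retraining and revive paths, since both key off an unchanged model ID.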

2. IMPROVEMENTS

  • Bugfix: Status of published models created before v1.68
    • Previously, when a retraining request was made without any changes for a model created before version 1.68 that had expired due to a timeout, the published model's status remained 'Pending' even after training completed successfully. This release resolves the issue so the status accurately reflects training completion for pre-v1.68 models.
