Sometimes, you may see the Maivin reporting an "Invalid" model status on the demo web application GUI. This article describes the situations when this may occur and how to troubleshoot it.
There are three situations when a Maivin will report an Invalid model status:
- New Maivins without a model loaded will report an invalid model until the first model has been loaded (see the Maivin with eIQ Portal guide for how to load the first model).
- While a valid model is loading, the model status will report Invalid until loading completes. This may take several minutes, so please be patient.
- If the model is not an RTM model, such as a TFLite, ONNX, or Keras model.
If you have loaded an RTM model onto the Maivin and the model status is still "Invalid" after several minutes, please confirm the model is an RTM model. Load the model into Model Tool, view the model properties (View -> Model Properties or press CTRL-Enter), and verify the model format is DeepViewRT for Python.
We have included two models on this page that should work if you do not have a valid model available.
Model Status is Reported on the Results Page
The model status is reported on the Results page in the JSON element "model_status".
For example, the following Maivin is reporting a model status of "Invalid".
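As a minimal sketch of checking the status programmatically, the snippet below parses a results payload and tests the "model_status" element. The payload shown here is a hypothetical example; the actual fields in your Maivin's results JSON may differ by firmware version.

```python
import json

def model_is_valid(payload: str) -> bool:
    """Return True when the results payload reports a valid model."""
    results = json.loads(payload)
    return results.get("model_status") == "Valid"

# Hypothetical results payload from a Maivin reporting an invalid model.
results_json = '{"model_status": "Invalid"}'

print(model_is_valid(results_json))  # → False
```

A monitoring script could poll the Results page and use a check like this to wait for the model to finish loading before running inference tests.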
Classification models in RTM format can still be loaded on the Maivin for benchmarking, validation, and similar tasks. Note, however, that the detection application will load these models and report them as "Valid" even though they are classification models.
Future development may include functionality to run TFLite, ONNX, and Keras models on the Maivin.