I'm experiencing very slow model loading times (~5 minutes) with an EfficientNet architecture I converted from PyTorch (PyTorch -> ONNX -> TensorFlow, via onnx-tf).
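For context, the conversion step looks roughly like this (file names are placeholders for the models shared in the forum topic):

```python
import onnx
from onnx_tf.backend import prepare

# Load the ONNX model exported from PyTorch and convert it
# to a TensorFlow SavedModel directory
onnx_model = onnx.load("efficientnet.onnx")
tf_rep = prepare(onnx_model)
tf_rep.export_graph("efficientnet_savedmodel")
```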
The problem only appears when I load the model in `tf.compat.v1` mode or with the C API, which is my final target; in standard TF v2 mode in Python it loads fast (< 1 s). After loading, inference seems equally fast and correct in all cases. I'm using `tf.compat.v1` only for debugging, since it seems to behave similarly to the C API.
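A minimal sketch of how I compare the two loading paths (the directory name is a placeholder):

```python
import time
import tensorflow as tf

MODEL_DIR = "efficientnet_savedmodel"  # placeholder path

# Standard TF v2 loading: fast (< 1 s) for me
t0 = time.time()
model = tf.saved_model.load(MODEL_DIR)
print(f"tf.saved_model.load took {time.time() - t0:.1f}s")

# tf.compat.v1 loading: ~5 minutes with the dynamic-batch export
t0 = time.time()
with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    tf.compat.v1.saved_model.load(sess, ["serve"], MODEL_DIR)
print(f"tf.compat.v1.saved_model.load took {time.time() - t0:.1f}s")
```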
I've noticed that the issue disappears when I export the model from PyTorch with a fixed batch size of 1; however, I would prefer to keep a dynamic batch size (see the export sketch below).
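The only difference between the two exports is the `dynamic_axes` argument; here is a sketch with a stand-in torchvision EfficientNet (my real checkpoint and input shape may differ):

```python
import torch
import torchvision

# Stand-in model; the real one is my own EfficientNet checkpoint
model = torchvision.models.efficientnet_b0(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

# Dynamic batch dimension: this export loads slowly via compat v1 / C API
torch.onnx.export(
    model, dummy, "efficientnet_dynamic.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# Fixed batch size of 1: this export loads fast everywhere
torch.onnx.export(
    model, dummy, "efficientnet_fixed.onnx",
    input_names=["input"], output_names=["output"],
)
```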
I created a topic on the TensorFlow forum with access to the models and details to reproduce the issue.
I'm looking for ideas on what the issue could be, and on whether the resulting SavedModels can be modified so that loading through the C API is as fast as in TF v2.
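One workaround I've considered (untested, and assuming the serving signature has a single input named `input`, which I'm not sure about) is to re-save the already-converted SavedModel with the batch dimension pinned, to check whether the slow load is tied to the unknown batch dimension without re-running the whole PyTorch -> ONNX pipeline:

```python
import tensorflow as tf

loaded = tf.saved_model.load("efficientnet_savedmodel")  # placeholder path
infer = loaded.signatures["serving_default"]

# Re-trace the signature with a fully specified shape (batch pinned to 1);
# the NCHW shape below is an assumption carried over from the ONNX export
@tf.function(input_signature=[tf.TensorSpec([1, 3, 224, 224], tf.float32)])
def serve(x):
    return infer(input=x)  # "input" is the assumed signature argument name

tf.saved_model.save(
    loaded, "efficientnet_fixed_savedmodel",
    signatures={"serving_default": serve},
)
```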
submitted by /u/pablo_alonso