Hello everyone, I am currently trying to migrate my code to KerasHub, since the KerasCV documentation appears to be gone now. In the past I was using the EfficientNetLiteB0Backbone, but it seems it has not been ported to KerasHub. Is there a chance to see this model in KerasHub as well? Since I am not seeing it yet, I am also wondering: is there nowadays perhaps a better model to use? I need something small with good classification results for a mobile phone, and I think EfficientNetLite did that job best in KerasCV.

And in case it does get moved to KerasHub, it would also be great to get pretrained weights for it. Thanks for your consideration.

Comment From: edge7

Hi, I am not part of the Keras team, so this is my personal opinion.

I understand your issue, though. Even while KerasHub is still a work in progress, the KerasCV documentation should still be around. I ran into a similar issue with another model recently. EfficientNetLiteB0 is not ported yet, but it probably will be. You can still use KerasCV for the time being; look here, in particular at the tests, and you might be able to load it correctly even without official documentation.

Comment From: sachinprasadhs

Hi, we have an EfficientNet Lite model for edge devices here: https://www.kaggle.com/models/keras/efficientnet/keras/efficientnet_lite0_ra_imagenet, which has been ported from timm. For more details about the model, check https://huggingface.co/timm/efficientnet_lite0.ra_in1k

Comment From: cjohn001

Hello, thanks for the directions. For the moment this code works for me; I am primarily missing pretrained weights and the docs. If it gets ported, all is fine, and for the moment I can live without the docs. Some kind of versioning in the docs would be great.

efnetlite_backbone = keras_cv.models.EfficientNetLiteB0Backbone(
    include_rescaling=True,
    input_shape=(HEIGHT, WIDTH, 3),
)
efnetlite_classifier = keras_cv.models.ImageClassifier(
    efnetlite_backbone,
    num_classes=len(class_mapping),
    activation='sigmoid',
)
efnetlite_classifier.compile(
    loss=keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
    optimizer=keras.optimizers.SGD(momentum=0.9),
    metrics=["accuracy"],
)
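As a side note on the compile call above: label_smoothing=0.1 blends the one-hot targets toward a uniform distribution before the cross-entropy is computed, i.e. y * (1 - 0.1) + 0.1 / num_classes. A minimal NumPy sketch of that transformation (illustrative only, not KerasCV internals):

```python
import numpy as np

def smooth_labels(y_onehot, eps=0.1):
    """Blend one-hot targets toward a uniform distribution."""
    num_classes = y_onehot.shape[-1]
    return y_onehot * (1.0 - eps) + eps / num_classes

y = np.array([[0.0, 1.0, 0.0, 0.0]])  # one-hot target, 4 classes
print(smooth_labels(y))  # -> [[0.025 0.925 0.025 0.025]]
```

The smoothed rows still sum to 1, so they remain valid targets for CategoricalCrossentropy.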

Comment From: cjohn001

@sachinprasadhs I am still trying to figure out how to train the EfficientNetLiteB0Backbone to an accuracy similar to what was stated in the model papers. Is the ImageNet training script for the model you referenced available somewhere? It would be a great help to see how the hyperparameters have to be set for training. Thanks for your help!
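For reference, the original EfficientNet paper (Tan & Le, 2019) reports training with RMSProp (decay 0.9, momentum 0.9), weight decay 1e-5, and an initial learning rate of 0.256 decayed by 0.97 every 2.4 epochs; the timm Lite checkpoints use their own "ra" recipe, so treat these numbers only as a starting point, not the referenced model's exact settings. The paper's staircase decay schedule is simply:

```python
def efficientnet_lr(epoch, base_lr=0.256, decay_rate=0.97, decay_epochs=2.4):
    """Staircase LR schedule from the EfficientNet paper:
    multiply the learning rate by 0.97 every 2.4 epochs."""
    return base_lr * decay_rate ** (epoch // decay_epochs)

print(efficientnet_lr(0))   # 0.256 (no decay yet)
print(efficientnet_lr(24))  # 10 decay steps: 0.256 * 0.97**10
```

In Keras this maps naturally onto keras.optimizers.schedules.ExponentialDecay with staircase=True, expressed in steps rather than epochs.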

Comment From: sachinprasadhs

@pkgoogle, PTAL

Comment From: pkgoogle

Hi @cjohn001,

Can you try something like this after installing keras_hub?

import keras_hub

# original timm b0
model = keras_hub.models.ImageClassifier.from_preset("hf://timm/efficientnet_b0.ra_in1k")
# lite variant
model = keras_hub.models.ImageClassifier.from_preset("hf://timm/efficientnet_lite0.ra_in1k")

This will load the same preset weights as timm. You can find all the available variants here.

Comment From: cjohn001

@pkgoogle sorry for the late reply and thanks for the directions. In the meantime I was able to load the weights from here:

https://github.com/sebastian-sz/efficientnet-lite-keras/releases/tag/v1.0

Unfortunately, it seems I cannot use keras_hub yet. I want to deploy my models in a mobile application, which I assume requires the tensorflow-model-optimization toolkit if I want to use sparsity- and cluster-preserving quantization-aware training. The tensorflow-model-optimization toolkit unfortunately forces me to stay on Keras 2 :(

Is there perhaps some replacement optimization functionality already integrated into Keras 3 that I could use instead? I would very much like to switch to Keras 3 as soon as possible, since I think I could then also use timm models directly with Keras 3, as described here: https://lightning.ai/sitammeur/studios/keras3-with-pytorch-workflow?section=featured. However, Keras 3 is not an option if I cannot prepare the models for deployment. I am quite new to the entire ecosystem, so it would be great if you could point me in a direction. Thanks for your help!
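For context on the sparsity part: the core idea behind tfmot's magnitude-based pruning is to zero out the fraction of weights with the smallest absolute value until a target sparsity is reached (tfmot additionally ramps the sparsity up over training steps). A hedged NumPy sketch of just that core idea, not the tfmot API itself:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the
    smallest absolute value (magnitude pruning)."""
    k = int(np.floor(sparsity * weights.size))
    if k == 0:
        return weights.copy()
    # threshold = k-th smallest magnitude; keep strictly larger ones
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.1, -0.8], [0.05, 1.2]])
print(magnitude_prune(w, sparsity=0.5))
# -> the two smallest-magnitude entries (0.1 and 0.05) become 0
```

The resulting zero pattern is what the "sparsity-preserving" variants of QAT then keep fixed while fine-tuning.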

Comment From: pkgoogle

Hi @cjohn001, if you absolutely need that workflow, feel free to continue with it. We usually recommend getting your workflow/system/project working before optimizing, in which case you have multiple options (including continuing with this workflow).

Depending on your use case:

  • mediapipe. If your use case fits the tasks listed here, this is probably the most user-friendly way to get started.
  • LiteRT. If you already have a .tflite model and need to run it on device, but need more control/flexibility than mediapipe allows, you can use this library directly.
  • AI-Edge-Torch. If you need to convert a PyTorch model to a .tflite file to be used with either mediapipe or LiteRT, this is the library you would use. (You can accomplish some of the same optimizations here as well.)

Does that answer your question?

Comment From: cjohn001

Hello @pkgoogle, thanks for the directions. I have already looked into the options you mentioned. The thought behind my current framework choice was that Keras 3 gives me full control over the entire build and deployment process and also lets me build custom solutions from the building blocks the framework provides. Switching back to Keras 2 does not seem like a good option to me, and mediapipe looks like something nice for quick results but not something that supports a full-fledged development lifecycle.

However, if I understand you correctly, you are saying that with Keras 3 there is currently no decent way to optimize models for deployment as tflite with state-of-the-art techniques like the tensorflow-model-optimization toolkit? I am currently most interested in pruning and pruning-preserving quantization-aware training, as I want to ship my models with the app binary. Is there something on the Keras 3 roadmap in this regard? I believe not having these capabilities will drastically reduce the usability of the framework. With embedded applications in mind, would you rather recommend switching to another framework like PyTorch if more control over the build process is required? Thanks for your feedback.
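For what it's worth, the mechanism underneath quantization-aware training is simulating integer arithmetic in the float forward pass via quantize-dequantize ("fake quantization"). A minimal NumPy sketch of symmetric per-tensor int8 fake quantization (illustrative only, not the tfmot or LiteRT implementation):

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Symmetric per-tensor quantize-dequantize, as simulated
    in the forward pass during quantization-aware training."""
    qmax = 2 ** (num_bits - 1) - 1       # 127 for int8
    scale = np.max(np.abs(x)) / qmax     # map max magnitude to 127
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale                     # dequantize back to float

x = np.array([0.5, -1.0, 0.25, 0.0])
xq = fake_quantize(x)
print(np.max(np.abs(x - xq)))  # rounding error, bounded by ~scale/2
```

During QAT the network learns weights that stay accurate despite this rounding, which is why the exported int8 .tflite model loses much less accuracy than naive post-training quantization.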