TensorFlow Training Models

Our current code supports these models:

  • SSD MobileNet V2 320x320
  • SSD MobileNet V1 FPN 640x640
  • SSD MobileNet V2 FPNLite 320x320
  • SSD MobileNet V2 FPNLite 640x640

Are there other models you’d like us to support? We want to limit support to models that perform well with the FTC SDK. In our own testing, SSD MobileNet V2 320x320 was the fastest.


    You can find out more information about the supported models here. These models all come from the TensorFlow model zoo; Section 7.1.2 of the FTC-ML manual has more information on this. Another nice thing about using these models is that, if you wish, you can utilize the 81 built-in labels that can be found here. FTC-ML will also support training a model based on another model you previously made; however, uploading other models is not currently supported.



Thank you, Liz. We believe EfficientNet could be a promising candidate for lower latency on current FTC hardware, where you cannot expect any GPU acceleration :frowning: , and perhaps for shorter training time as well.

Sorry for going off the original topic again, but we recognized you’re the original author of the FTC TFOD package :smiley: Thank you for bringing such an important component to our SDK. We’ve always been trying to explore the potential of TFLite, especially this season now that TF 2.0+ is finally supported. In our experiments bringing cutting-edge models like YOLO to TFLite deployment, we went through some struggles and learned that TFOD is based on the vision task library (not complaining, more like feedback):

  1. There are some restrictions on the input/output tensor signatures of the ObjectDetector in TFLite. To adapt cutting-edge models, we have to struggle with manipulating the input and output layers of the graph to feed a valid model into TFOD, which is still a very high bar for FTC teams. One could argue that at this level of customization you might as well deploy the model in the OpenCV DNN module, but we also love the conveniences of the multi-box tracker and the threading wrapper provided by TFOD. We hope the adaptation layer of the TFLite task library could become more flexible toward various model output layers.

  2. Custom ops support: because we have to rework some model layers, some of the customized models evolved to a point where TFLite ops could not fully cover the graph, so a selected set of full ops from the original TensorFlow (not Lite) may be needed. Custom ops are currently supported in the initialization phase of the Java TFLite Interpreter, but not in ObjectDetector, which instantiates the TFLite runtime in JNI. We have submitted a request to Google in the hope that ObjectDetector will support this soon.
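To make point 1 concrete, here is a small sketch of the output-signature restriction. The TFLite Task Library ObjectDetector expects the four-tensor SSD postprocess convention (locations, classes, scores, detection count); the function below only checks shapes and is purely illustrative, not part of any real library:

```python
# Illustrative sketch: check whether a model's output tensor shapes match the
# four-tensor SSD postprocess layout that the TFLite Task Library ObjectDetector
# expects: locations [1, N, 4], classes [1, N], scores [1, N], count [1].
# N is the model's max number of detections.

def matches_detector_signature(output_shapes):
    """output_shapes: list of shape tuples, in model output order."""
    if len(output_shapes) != 4:
        return False
    locations, classes, scores, count = output_shapes
    if len(locations) != 3 or locations[0] != 1 or locations[2] != 4:
        return False
    n = locations[1]
    return classes == (1, n) and scores == (1, n) and count == (1,)

# A stock SSD MobileNet export passes; a raw YOLO head, which emits a single
# combined tensor, does not, which is why the graph surgery described above is needed.
ssd_like = [(1, 10, 4), (1, 10), (1, 10), (1,)]
yolo_like = [(1, 25200, 85)]
print(matches_detector_signature(ssd_like))   # True
print(matches_detector_signature(yolo_like))  # False
```

A model that fails this kind of check has to have its head rewritten (or a postprocess op appended) before TFOD will accept it.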
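And for point 2, a toy sketch of why Select TF ops come into play. The builtin set and op names below are a tiny illustrative sample, not the real TFLite op registry:

```python
# Illustrative sketch: split the op names found in a converted graph into ops
# covered by TFLite builtins and ops that would need the Flex delegate
# (i.e. tf.lite.OpsSet.SELECT_TF_OPS at conversion time).
# TFLITE_BUILTINS here is a made-up sample, not the actual registry.

TFLITE_BUILTINS = {"CONV_2D", "DEPTHWISE_CONV_2D", "RESHAPE",
                   "LOGISTIC", "CONCATENATION"}

def partition_ops(graph_ops):
    builtin = [op for op in graph_ops if op in TFLITE_BUILTINS]
    flex = [op for op in graph_ops if op not in TFLITE_BUILTINS]
    return builtin, flex

# A reworked detection head often pulls in ops outside the builtin set:
builtin, flex = partition_ops(["CONV_2D", "RESHAPE", "NonMaxSuppressionV5"])
print(flex)  # ['NonMaxSuppressionV5']
```

When `flex` is non-empty, the model must be converted with both `TFLITE_BUILTINS` and `SELECT_TF_OPS`, and the runtime has to load the Flex delegate. The Java Interpreter lets you register that at initialization, but ObjectDetector exposes no such hook in its JNI path, which is exactly the gap described above.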