Hey Kyle.
A number of setbacks forced our hand to drop support for TensorFlow within the core FTC SDK. The straw that broke the camel’s back, however, was a breaking change in TensorFlow itself that was not compatible with our SDK support infrastructure. Our current (9.x) infrastructure is not compatible with newer TensorFlow toolchains, so in order to support teams making models with current tools we would have to make significant changes that we’re not able to make at this time. TensorFlow is a “research project” for Google anyway; I think we were “lucky” to be able to use it for as long as we did.
The TensorFlow Object Detection APIs have significant issues, chiefly the amount of time it takes to train a model coupled with the difficulty of creating models that are relatively immune to slight lighting differences at venues. I personally prefer moving to appliance-based machine learning: for example, you can train an object detection model on a HuskyLens in about 20 seconds, versus 6 hours with TensorFlow and the ML Toolchain, and you can even retrain in the exact lighting conditions at the venue in another 20 seconds during calibration/inspection. OpenCV is also a fantastic tool for those who “just want results,” though we do not directly support OpenCV in Blocks (we support it indirectly through Java-based myBlocks).
As we move forward we’ll continually evaluate the device landscape, looking for better ways to incorporate machine learning, AI, and similar technologies. For right now, though, it doesn’t make sense for us to pour resources into technologies that are not directly helping teams (sure, they’re learning a lot about ML and AI, but we need them to learn more than “it’s really difficult to use effectively”).