Problems with this year's "Concept Tensor Flow Object Detection Webcam" example?

I feel bad posting such a basic problem here, but it is related to the use of the ftc-ml beta.

We did a quick run-through of the whole ftc-ml tool, and everything worked for us: uploading videos, creating datasets and models, etc. We purposely did not put a lot of work into this, so we were not too surprised when we could not detect our object. However, as a sanity check, I wanted to make sure that everything else was working.

That’s when I discovered that our ConceptTensorFlowObjectDetectionWebcam example from last year (detecting rings) was still working, but this year’s (for detecting cubes, balls, ducks, and markers) barely worked at all. There was no indication of errors, but it very rarely detected any of these objects correctly. I tried two different webcams with the same result.

Again, sorry for posting such a basic question, but what could I be missing here?


There are reports that the model for Freight Frenzy object detection is less accurate than in prior years, so it is not entirely surprising that this would be your result.

An interesting side effect of this project might be that teams who find themselves with extra training time could try training against certain game objects to see if they can get better results. And if they do, they could post their models for other teams to use.


There are a lot of variables here. And I mean a LOT. One of them is background variability. We have heard that this year’s models are highly dependent on the Freight objects being on the gray tile background; if you were trying to detect them on something other than a gray tile, try again on a gray tile. It appears that the model may have picked up on the fact that all the game pieces were always on a gray tile, which is an artifact of how we chose to optimize our time when taking our videos: we assumed everyone would be on a standard FTC field when using the stock model.


Thanks for the feedback!

On our end, we discovered that placing the objects on the floor mats, and keeping the mats as most of the background, greatly improves detection accuracy for this year’s objects. Again, we have a much easier time detecting rings against a variety of backgrounds, so this was surprising.

Also, I’ve simply been having trouble finding any reports of people’s experience with this year’s object detection. I’ve been checking the FTC forums and reddit threads but not seeing much. Where have you been seeing these reports?

Thanks again.

Sorry, just seeing this response now. That makes a LOT of sense, thank you!