Summary of our experience

We have reached the point of having an excellent model that is working well, so we wanted to post our experiences and say thanks, again, for an awesome tool.

Our goal was to train on one object, our 3D printed shipping element, with one label: ShipElem. Our only use case was to recognize the shipping element at a fixed distance from the robot so that a decision could be made about where to deliver preloaded freight during the autonomous period.

Our first model worked very well, with recognition confidence during OpMode testing virtually always above 85% and usually above 90%. At that time, the shipping element was a red 3D printed 3.25" cube with a small raised nodule on top, ducks on two sides, and team numbers on the other two.
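For anyone curious how we watch those numbers, here is a simplified sketch of the kind of recognition check an OpMode can do, based on the SDK's ConceptTensorFlowObjectDetectionWebcam sample (the helper name and the exact threshold are just illustrative, not our actual OpMode):

    import java.util.List;
    import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
    import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

    // Illustrative helper (inside the OpMode class): scan the latest TFOD
    // recognitions and report whether ShipElem was seen with at least the
    // given confidence, e.g. 0.85f.
    private boolean shipElemSeen(TFObjectDetector tfod, float minConfidence) {
        List<Recognition> recognitions = tfod.getUpdatedRecognitions();
        if (recognitions == null) {
            return false; // no new frame since the last call
        }
        for (Recognition r : recognitions) {
            if ("ShipElem".equals(r.getLabel()) && r.getConfidence() >= minConfidence) {
                telemetry.addData("ShipElem confidence", "%.2f", r.getConfidence());
                return true;
            }
        }
        return false;
    }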

During training of the first model, we used 289 training frames and 72 testing frames; 10% of each group were negative frames (no shipping element present). We trained for 1000 steps, which took 21 minutes and 34 seconds. During video creation we varied lighting and orientation significantly, but not distance from the camera. We took videos only on our FTC field, from all four starting locations for this year's game.

The team then experimented by taking new videos of the same shipping element but with far fewer training and testing frames. They also forgot to vary the lighting, and they decided to vary the distance from the camera. They used 103 training frames and 26 testing frames, with 10% of each being negative frames. 350 steps were requested, but 400 were completed. Training took 12 minutes and 33 seconds.

This second model, with quite a bit less data, performed poorly: it recognized very little.

The team then had to redesign the shipping element due to an oversight in how they would do capping during the match. The new 3D printed shipping element is a red cuboid, 3.25" W x 3.25" D x 5.00" H. Two sides have a duck inscribed in them, and the other two have the team number.

For the final model, the team used the new shipping element but, as with the first successful model, varied lighting and orientation significantly while keeping the distance from the camera constant. They again took videos only on our FTC field, from all four starting locations for this year's game. Notably, they trained only on the orientation in which the shipping element was standing upright (so it was 5" tall). They used 662 training frames, of which 85 were negative, and 73 testing frames, of which 11 were negative. They trained for 2100 steps, which took 36 minutes and 5 seconds.

This model has been working extremely well, with recognition confidence greater than 90% most of the time. At the end of this post I added several training graphs and a couple of pictures of the shipping element. When testing in our OpMode, recognition worked well both on our field and on a white floor. Most amazingly, the model regularly recognized the shipping element when it was tipped over on its side (so 5" wide), even though none of the training or testing data contained the element in that orientation.

The team tried to guess which aspects of the training data the model picked up on most. Their main theory is that the constant distance from the camera helped a lot, along with the bright, uniform color (no patterns) and the varied lighting. Just guessing, though!

Other notable facts: we use OnBot Java and a Logitech C310 camera. Our OpMode does not use magnification; in other words, our TFObjectDetector setZoom call looks like this (1.0 for the first parameter instead of the 2.5 found in the sample webcam code):

tfod.setZoom(1.0, 16.0/9.0);
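For context, here is roughly where that call sits in our setup, adapted from the SDK's ConceptTensorFlowObjectDetectionWebcam sample. This is a sketch, not our exact code: the model file path is a placeholder (FTC-ML models are loaded from a .tflite file on the Robot Controller), and the other parameter values simply follow the sample.

    import com.qualcomm.robotcore.hardware.HardwareMap;
    import org.firstinspires.ftc.robotcore.external.ClassFactory;
    import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
    import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

    private static final String TFOD_MODEL_FILE =
            "/sdcard/FIRST/tflitemodels/ShipElem.tflite"; // placeholder path
    private static final String[] LABELS = { "ShipElem" };
    private TFObjectDetector tfod;

    // Sketch of TFOD initialization, following the sample webcam OpMode
    // (assumes Vuforia has already been initialized).
    private void initTfod(HardwareMap hardwareMap, VuforiaLocalizer vuforia) {
        int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
        TFObjectDetector.Parameters tfodParameters =
                new TFObjectDetector.Parameters(tfodMonitorViewId);
        tfodParameters.minResultConfidence = 0.8f;
        tfodParameters.isModelTensorFlow2 = true;
        tfodParameters.inputSize = 320;
        tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
        tfod.loadModelFromFile(TFOD_MODEL_FILE, LABELS);
        tfod.activate();
        tfod.setZoom(1.0, 16.0/9.0); // no magnification (the sample uses 2.5)
    }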

We also realized that we should have been using the Play and Pause controls of the “Tracking with OpenCV” functionality more often. When we began pausing the playback to adjust bounding boxes and then continuing the playback, the number of bounding box fixes went down dramatically. This is documented in the manual, but we didn’t pick up on the controls until the last model, and wow, it saved hours of time.

The only outstanding bug we hit again was the truncation of our label when creating the first bounding box: we typed in “ShipElem” but it saved as “ShipE”. That has been reported elsewhere, and the workaround is to just patch up the label on the first frame, so it wasn’t a huge deal.

We hope these details help. We are so thankful for the work that went into this beta program, so we wanted to provide as much feedback as possible. This experience energized the team, and they are excited to talk about it during judging sessions this year. Thanks soooooo much!

Confidential Robotics - Team 13243, Eagan, Minnesota
