FTC ML Recognition Model

Hello, our team has gotten TensorFlow to recognize the prop at a specific distance, but if the prop isn't at that distance, it doesn't get recognized. Should we train a third model, have the robot move around to look for the prop, or move the camera? We have some Roadrunner code that could move the robot into position, but our odometry isn't perfect.

Greetings!

I took a peek into your FTC-ML workspace so that I could provide some first-hand advice. First, I want to say you've done a pretty great job with the Red/Blue Team Prop videos and model. Your 60-minute model is incredibly well trained (3,000 steps is a "golden number" that you should keep to), and aside from a few understandable minor mistakes it's pretty flawless. Here are a few notes:

  1. You have unfortunately not labeled all of the objects in some of the frames you're using, so in some of the evaluation frames the model can be seen struggling during training. For an example of what I mean, take a look at "Evium-99 Red Blue Team Prop" frames 426 and 427, where the blue prop is not labeled. Frame 427 was used in evaluation, and even though the model still correctly detected the object, an unlabeled frame like this confuses training: if an object isn't labeled but the model detects it anyway, the model is told it's doing something wrong. Here's a quick snapshot of frame 427:

[snapshot of frame 427 showing the unlabeled blue prop]

  2. When you pan back and forth, you probably only need to pan +/- 45 degrees unless you actually expect to view the prop from the side or the back. In CENTERSTAGE, you can certainly expect to control the angle of the prop within that tolerance.

  3. Your objects are REALLY similar. The lighting in your video is great, and I think that is what's helping distinguish the RED and BLUE objects. Remember that TensorFlow doesn't necessarily look for properties of objects, like color (except when there's a specific pattern). What TensorFlow is likely doing here is looking at the color contrast between the two (blue is probably lower contrast and red higher contrast in your videos, so that's probably what's being used to tell them apart). In CENTERSTAGE there's no reason to distinguish between red and blue, so I would personally recommend giving the red and blue props the same label and combining the patterns (see the code sketch after this list). You'll get a much better generally trained object, and you'll survive more lighting conditions.

  4. Your model is trained well - the model metrics look great, which means the model really understood the training data (so you did a great job creating it). However, I bet that when you took the video you were looking at the objects from a higher elevation than the robot will see them. You panned left/right but not up/down, so at a lower angle to the floor the objects may look different to the model. Make sure your camera is at the same height as the camera on your robot, so that you see the objects from the same vantage point your robot will. I would personally bet this is the biggest factor in your training. If you were to lift your robot up a couple of feet, I bet it would recognize the objects really well.
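
If you do combine the labels as suggested in note 3, your OpMode only needs to care about where the prop is, not which color it is. Here's a rough sketch using the SDK's TfodProcessor; the model file name, the single "Prop" label, and the confidence threshold are placeholders you'd replace with whatever your trained model actually uses:

```java
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.vision.VisionPortal;
import org.firstinspires.ftc.vision.tfod.TfodProcessor;

@TeleOp(name = "Single Label Prop Test")
public class SingleLabelPropTest extends LinearOpMode {
    @Override
    public void runOpMode() {
        // Placeholder file/label names - use whatever your ftc-ml model exports.
        TfodProcessor tfod = new TfodProcessor.Builder()
                .setModelFileName("/sdcard/FIRST/tflitemodels/team_prop.tflite")
                .setModelLabels(new String[] {"Prop"})
                .build();
        tfod.setMinResultConfidence(0.70f);  // assumed threshold; tune on the field

        VisionPortal portal = new VisionPortal.Builder()
                .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
                .addProcessor(tfod)
                .build();

        waitForStart();
        while (opModeIsActive()) {
            // With one combined label, all that matters is *where* the prop appears.
            for (Recognition r : tfod.getRecognitions()) {
                double centerX = (r.getLeft() + r.getRight()) / 2.0;
                telemetry.addData("Prop x-center", "%.0f px (conf %.2f)", centerX, r.getConfidence());
            }
            telemetry.update();
        }
        portal.close();
    }
}
```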

These are a few first impressions while looking through your models. If you have any further comments or questions, please let me know!

-Danny

Thank you, Danny Diaz, for all of your responses and recommendations for our software issues this season!

We have decided to use distance sensors to detect our team prop and score our pixels during autonomous. The ML recognition model proved to be time-consuming and difficult to work with as a rookie team without enough practice time available.

The distance sensors are more consistent for our team when looking for the team prop. We use Roadrunner, relative to our starting position, to accurately move the robot to specific positions, then use the distance sensors and some basic logic to determine where the team prop is located, and finally score with Roadrunner.
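
In case this helps other rookie teams, here is a minimal sketch of the kind of sensor logic we mean. The hardware-map names ("leftDistance", "centerDistance"), the 20 cm threshold, and the SpikeMark enum are illustrative placeholders, not our exact code:

```java
import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.hardware.DistanceSensor;
import org.firstinspires.ftc.robotcore.external.navigation.DistanceUnit;

@Autonomous(name = "Prop Detect With Distance Sensors")
public class PropDetectWithDistanceSensors extends LinearOpMode {

    // Possible team prop locations on the spike marks.
    enum SpikeMark { LEFT, CENTER, RIGHT }

    @Override
    public void runOpMode() {
        // Placeholder hardware-map names; match your robot configuration.
        DistanceSensor leftDistance   = hardwareMap.get(DistanceSensor.class, "leftDistance");
        DistanceSensor centerDistance = hardwareMap.get(DistanceSensor.class, "centerDistance");

        waitForStart();

        // (Drive to the sensing position with Roadrunner first, then read the sensors.)
        double thresholdCm = 20.0;  // assumed: anything closer than this counts as "prop seen"
        SpikeMark propLocation;

        if (leftDistance.getDistance(DistanceUnit.CM) < thresholdCm) {
            propLocation = SpikeMark.LEFT;
        } else if (centerDistance.getDistance(DistanceUnit.CM) < thresholdCm) {
            propLocation = SpikeMark.CENTER;
        } else {
            // Two sensors are enough: if neither sees the prop, it must be on the right.
            propLocation = SpikeMark.RIGHT;
        }

        telemetry.addData("Prop location", propLocation);
        telemetry.update();

        // ...then follow the matching Roadrunner trajectory and score.
    }
}
```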

Thank you again for all of your help! I'm sure we will be reaching out for more help this season!
Evium 99; #23765
