Team 11848 Spare Parts Robotics has created 2 Team Props for CenterStage, one blue and one red. We only created one ML model for our blue team prop and it works fine (always detected on any spike mark). However, our red team prop (which is the exact same shape and size as the blue one) is not getting detected. We've created multiple models for the red prop and none of them work. Either nothing is detected, or random parts of the playing field are detected as the red prop. We're using a Logitech C920 Pro camera on our robot. Any suggestions on why the red prop wouldn't be detected? Thanks!
Are you labeling the team props with two different labels, hoping the model can distinguish between the red and blue one?
We made two different models. One model only has the blue. The other model only has the red. We learned from reading through the topics that TensorFlow doesn't detect color, but rather contrast. We were testing different scenarios just to see if we could prove that contrast is causing some type of issue. Here's what we tried.
- We ran the "red" model to see if we could detect our red team prop. It never detects it. We 3D printed a new red team prop in a darker shade of red, and it still was never detected. (We made about 6 "red" models and none of them worked.)
- We ran the “red” model to see if we could detect our blue team prop. The blue prop was only detected in the middle spike position when using the red model.
- We used the "blue" model to see if it could detect our red team prop. The "blue" model never detected our red or our darker red team prop.
- We used the "blue" model and tried standing the green, yellow, purple, and white pixels in front of our red team prop (one at a time). When the green, yellow, or purple pixel was in front of our red team prop, the "blue" model detected it. When the white pixel was in front of our red team prop, the "blue" model didn't detect it.
- We used the "red" model and tried standing the green, yellow, purple, and white pixels in front of our red team prop (one at a time). None of the pixels in front of our red team prop were recognized by the "red" model.
With all of these tests, it seems like we proved that the color contrast is an issue. We're not sure how to overcome this.
Correct, TensorFlow is more for shape and contour detection. We have been able to have one model where both the red and blue props share the same label.
Did you create one model with both the red and blue prop in it? Can we also ask what material your props are made out of?
I also want to add that we tried using the FTC white pixel tensorflow model, and we weren’t able to detect that either.
Yes, one model, one label. The props are 3D printed.
We used a small number of images, so it only recognizes the props on the gray tiles.
Were you using it right? You cannot use the default model to detect the pixel from the robot starting position. Be sure to read this thoroughly.
https://ftc-docs.firstinspires.org/en/latest/programming_resources/vision/tensorflow_cs_2023/tensorflow-cs-2023.html
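If you do go the custom-model route, loading it looks roughly like this in Java (just a sketch - the model file name, label, and webcam name below are placeholders for whatever you exported from ftc-ml and set up in your robot configuration):

```java
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.vision.VisionPortal;
import org.firstinspires.ftc.vision.tfod.TfodProcessor;

@TeleOp(name = "CustomTfodSketch")
public class CustomTfodSketch extends LinearOpMode {
    @Override
    public void runOpMode() {
        // Use a custom ftc-ml model instead of the default CenterStage.tflite,
        // which only knows the white Pixel at close range.
        TfodProcessor tfod = new TfodProcessor.Builder()
                // Placeholder file name and label - use whatever you exported from ftc-ml.
                .setModelFileName("/sdcard/FIRST/tflitemodels/TeamProp.tflite")
                .setModelLabels(new String[] { "TeamProp" })
                .build();

        // Feed the processor from the webcam configured on the robot.
        VisionPortal portal = new VisionPortal.Builder()
                .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
                .addProcessor(tfod)
                .build();

        waitForStart();
        // ... read tfod.getRecognitions() in your loop here ...
        portal.close();
    }
}
```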
-Danny
We really want to use our red team prop, so we plugged in a Logitech 720 webcam and we are able to detect our red team prop. We think there must be something in the settings of the Logitech C920 webcam that we need to update. The Logitech 720 webcam doesn't have a wide enough field of view and we can't zoom out any further.
The Webcam controls can be used to change settings on your webcam for each program. These controls change the webcam settings, and those changes are reflected in the images provided to TensorFlow. The C920 is the most configurable camera, but that means out of the box it has the most settings to tweak - and unless you TRAIN the model using video from the C920, you might need to adjust the camera settings on the C920 to match the camera you used to shoot the training video. This is why using the same camera to take the video as you're using on the robot is so important.
This is a link to information on Webcam controls. Blocks has the same Webcam controls, even though the tutorials are Java based.
https://ftc-docs.firstinspires.org/en/latest/programming_resources/vision/webcam_controls/index.html
Pay special attention to examples where the effects of camera controls are seen in TFOD, such as:
https://ftc-docs.firstinspires.org/en/latest/programming_resources/vision/webcam_controls/gain/index.html
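A rough sketch of what that looks like in Java (the helper class and the exposure/gain values here are made up - the point is switching the C920 to manual exposure and gain, then tuning the values until the live image matches the video you trained with):

```java
import java.util.concurrent.TimeUnit;
import org.firstinspires.ftc.robotcore.external.hardware.camera.controls.ExposureControl;
import org.firstinspires.ftc.robotcore.external.hardware.camera.controls.GainControl;
import org.firstinspires.ftc.vision.VisionPortal;

public class WebcamSettingsSketch {
    /**
     * Force manual exposure and gain on the webcam behind this VisionPortal.
     * The exposure/gain arguments are placeholders: tune them until the live
     * preview matches the lighting of the video used to train the model.
     */
    public static void applyManualSettings(VisionPortal portal, long exposureMs, int gainValue) {
        // Camera controls only respond once the camera is actually streaming.
        while (portal.getCameraState() != VisionPortal.CameraState.STREAMING) {
            Thread.yield();
        }

        ExposureControl exposure = portal.getCameraControl(ExposureControl.class);
        exposure.setMode(ExposureControl.Mode.Manual);       // stop the C920 from auto-adjusting
        exposure.setExposure(exposureMs, TimeUnit.MILLISECONDS);

        GainControl gain = portal.getCameraControl(GainControl.class);
        gain.setGain(gainValue);
    }
}
```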
-Danny
I decided to take a look at your videos and models in your FTC-ML workspace; since you've asked for help, I took that as permission to do so.
Your blue model ("1st time training the model") was trained with videos that are great examples of what to do for a model such as yours. Different poses, different angles, different distances, etc. Since your object is essentially a sphere, you don't need a whole lot, but what you have is plenty. Your blue model trained very well, the model statistics look great, and the training image samples show good recognition.
Your red models have a lot of problems.
- The "Red Team Prop on the Left" model shows a lot of training inconsistencies - the model seems to be able to detect the object in fairly specific cases, but has great difficulty outside those specific cases. The loss metrics show that the model still needs quite a bit of training. When you created your detection boxes, you left little room between the object and the bounding box, and that makes it very difficult to differentiate the background from the object. In many images used for training, the bounding box "cuts inside" the object, and that's throwing the model training way off. You can see in this image that the model can't quite figure out what to do in some instances.
- The "2nd training round for redprop" model also has not been trained properly. You can look at the metrics and see for yourself. On your blue model, you can see the Training Metric "Loss/total_loss" graph gets under 0.2 (you really want this value to get near 0.1, but for the blue model it's perfectly fine). However, in the "2nd training round for redprop" model it never goes under 10. This means the model is having a super hard time training; something is preventing it from training properly. I think this is another case of "your bounding box is too close/tight to the object in your labeling" - you need some amount of background around the object in the label so that the training algorithm can differentiate the background from the object within the label. Otherwise, the model will think EVERYTHING within the label is part of the object, and the model will have a hard time training.
- Your most recent model was trained using a video that had ~165 images that might as well have been the exact same image (with some very, very subtle low-light noise). There was no way that was going to turn out well.
So what are my recommendations?
- Go back to your original video "Red Team Prop Left" and relabel the video. Be sure to exclude images that are blurry. Here is an example of a GREAT label from your blue video, and here's the usual way you're labeling objects in your RED video: you can see how the red one has virtually no background between the object and the label, and even has portions of the object (on the right) cut off. It may seem small, but small things make a huge difference.
- When training, I recommend 3,000 steps. It seems like overkill, but it's not - the training algorithms seem very tweaked for 3,000 steps (as long as my labels are good, my models generally come out fairly well trained).
I bet you'll have a GREAT red model if you fix your labeling on that video and use that video for training. I also really think you should reconsider what @ChaosBuster recommended, which is to use one label for both red and blue (it doesn't matter what the label is, and since you have to relabel the red anyway, just relabel everything as "LeftTeamProp" and train a model using those two videos). You'll get one model that can recognize both red and blue spheres as "LeftTeamProp".
Now, this advice comes with caveats - but if you do what I've recommended, I can help you through any issues that might come up.
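Just to illustrate the one-label idea: since red and blue would share a label, your OpMode only cares about where the recognition is, not which color it is. A rough sketch (the image width, thresholds, and the fallback when nothing is seen are placeholder values you'd tune to your own camera and starting position):

```java
import java.util.List;
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.vision.tfod.TfodProcessor;

public class SpikeMarkSketch {
    /**
     * Pick a spike mark from the latest TFOD recognitions. Red and blue props
     * share one label, so only the horizontal position of the detection matters.
     * Assumes a 640-pixel-wide stream split into thirds - adjust for your setup.
     */
    public static String findSpikePosition(TfodProcessor tfod) {
        List<Recognition> recognitions = tfod.getRecognitions();
        if (recognitions.isEmpty()) {
            return "RIGHT"; // placeholder assumption: prop out of view means the far spike mark
        }

        Recognition prop = recognitions.get(0);
        double centerX = (prop.getLeft() + prop.getRight()) / 2.0;

        if (centerX < 640.0 / 3.0) {
            return "LEFT";
        } else if (centerX < 2.0 * 640.0 / 3.0) {
            return "CENTER";
        } else {
            return "RIGHT";
        }
    }
}
```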
-Danny
Danny, thank you so much for your advice. The team created a model with both red and blue props, and did better framing. We now have a successful model.