Default CENTERSTAGE model

Do we need to train our own default model, or is that something that is typically accessible to us through ftc-ml? I have a student who is exploring ML for the first time with our team. We noticed steps in the documentation related to the “Default CENTERSTAGE Model”. Are we able to access this somehow?

Greetings, and welcome to the FTC Community Forums!

The default CENTERSTAGE model is bundled with the software. It comes in two forms:

  1. If using OnBot Java or Blocks, the default model is bundled in the Robot Controller App APK (installed via the REV Hardware Client or your favorite side-loading mechanism). If you use the “easy” sample (ConceptTensorFlowObjectDetectionEasy), the code is designed to automatically use the default CENTERSTAGE model; in fact, all of the TensorFlow samples are designed to use the bundled default model. The non-easy (Custom) samples let you specify your own model instead - see the sketch after this list.

  2. If using Android Studio, the CENTERSTAGE model is distributed through a Maven repository. When you compile the code for the first time in Android Studio, the Gradle build process downloads the dependency (and thus the model) to your computer, and all of the samples are designed to use that model - it is included as an asset in your project.
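For reference, here is a minimal sketch of what the “easy” path looks like in OnBot Java, modeled on the ConceptTensorFlowObjectDetectionEasy sample. It assumes a webcam configured as “Webcam 1”; treat it as an illustration rather than a replacement for the actual sample:

```java
import java.util.List;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.vision.VisionPortal;
import org.firstinspires.ftc.vision.tfod.TfodProcessor;

@TeleOp(name = "Default Model Sketch")
public class DefaultModelSketch extends LinearOpMode {
    @Override
    public void runOpMode() {
        // With no model specified, the processor falls back to the default
        // model bundled with the SDK (the CENTERSTAGE model in SDK 9.0).
        TfodProcessor tfod = TfodProcessor.easyCreateWithDefaults();
        VisionPortal portal = VisionPortal.easyCreateWithDefaults(
                hardwareMap.get(WebcamName.class, "Webcam 1"), tfod);

        waitForStart();
        while (opModeIsActive()) {
            // Report each detection's label and confidence on the Driver Station.
            for (Recognition r : tfod.getRecognitions()) {
                telemetry.addData(r.getLabel(), "%.0f%% confidence", r.getConfidence() * 100);
            }
            telemetry.update();
        }
        portal.close();
    }
}
```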

ftc-ml is a tool that allows you to create your own custom models, for example so you can recognize/detect your custom Team Props.
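If you do train your own model in ftc-ml, the Custom samples show how to point the processor at it. As a rough sketch (the file name and label below are hypothetical placeholders for whatever you export from ftc-ml), you would replace the easyCreateWithDefaults() call above with a builder:

```java
// Hypothetical model file and label - substitute your own ftc-ml export.
// Models uploaded through the Robot Controller's Manage page typically
// live under /sdcard/FIRST/tflitemodels/ on the Control Hub.
TfodProcessor tfod = new TfodProcessor.Builder()
        .setModelFileName("/sdcard/FIRST/tflitemodels/myTeamProp.tflite")
        .setModelLabels(new String[] {"TeamProp"})
        .build();
```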

I hope that helped clear up the confusion (and hopefully didn’t add any). If you need anything else, don’t hesitate to ask!

-Danny

Thanks a lot, Danny! That makes a ton of sense.

We have not been able to detect Pixels at all using the default model.

The camera is a Logitech C270, and imaging conditions are nominal on a standard field.

Any thoughts before we launch into a custom model?

Thanks
Russ Miller
FTC team 9808

The model was trained to recognize a Pixel from the top down, e.g.:

(image: top-down view of a Pixel)

@ddiaz can correct me if I’m wrong, but I believe it needs to be able to see the center of the Pixel in order to recognize it. Are you trying to detect a Pixel from a point low enough on a robot that it only has a side view of the Pixel?

@ddiaz can correct me if I’m wrong, but I believe it needs to be able to see the center of the Pixel in order to recognize it.

That is correct - the camera needs to be able to see the gray tile or the tape lines through the center of the Pixel. The TensorFlow in CENTERSTAGE document goes into deep detail on this.

However, this question does bring up a central issue that new teams in particular get caught on a lot - have you actually updated to SDK 9.0? If you’re still using SDK 8.2 or earlier, the default model is the POWERPLAY model, and it’s absolutely not going to recognize the Pixel under any circumstances. Unfortunately, we made the “Easy” sample program too easy; there’s no reference in the code to the season or the object label.
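One quick way to check which default model you’re actually running is to print the labels TensorFlow reports (using the same tfod and telemetry objects as in the Easy sample). The CENTERSTAGE model reports “Pixel”, while the POWERPLAY model reports “1 Bolt”, “2 Bulb”, and “3 Panel”:

```java
// Sanity check: print each detection's label. "Pixel" means the
// CENTERSTAGE model; "1 Bolt"/"2 Bulb"/"3 Panel" means you are still
// running the POWERPLAY model from SDK 8.2 or earlier.
for (Recognition r : tfod.getRecognitions()) {
    telemetry.addData("Detected", r.getLabel());
}
telemetry.update();
```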

Also, what programming language are you using? And which Sample are you basing your code on?

-Danny

Thanks for your help getting this sample OpMode to work in the default situation.
Our configuration is: OnBot Java, new Control Hub, latest firmware, SDK 9.0, gray mat, blue tape, white Pixel, Logitech C270 camera.

Both Java TensorFlow samples give high-confidence detections at high angles looking down at the Pixel. At angles lower than about 45 degrees, no detections are reported. Here is what I think will be a typical situation:

[image](upload://2k1fdBxXVHN8Ean8kzy82FbohOv.jpeg)

As far as I can tell, at a height of about a foot and from the front of a robot against the wall, there are no detections. If you look at the Pixel from this geometry, I don’t think either the mat or the tape is visible through the hole.

Was this the geometry contemplated?

thanks
Russ

Also, after about 1 minute of camera streaming with TFOD, the Robot Controller crashes and restarts.

Russ

Interestingly, TensorFlow consistently detects the image of the Pixel from the guidance document, but not the real Pixel next to the MacBook.

Hey Russ, let’s cover a bit of what you’re seeing here.

As far as I can tell, at a height of about a foot and from the front of a robot against the wall, there are no detections. If you look at the Pixel from this geometry, I don’t think either the mat or the tape is visible through the hole. Was this the geometry contemplated?

If you read the document I linked to previously, TensorFlow for CENTERSTAGE, you’ll see that this is absolutely a known “challenge”; the document also discusses why the issue exists and what robots will need to do in order to overcome it. Yes, this absolutely means that a robot CANNOT simply statically look at the Pixel from the Robot Starting Location and detect it. Sometimes we go way out of our way to make game challenges easy for teams (as we did last season), and sometimes that’s not possible. This season is one of the latter.

Also, after about 1 minute of camera streaming with TFOD, the Robot Controller crashes and restarts.

I’m not sure what’s going on there, but I would need to see your logs. This isn’t something that has been reported by anyone else. You can email me your Robot Controller Logs (you can get them from the REV Hardware Client using these instructions) at ftctech@firstinspires.org and I’ll take a look at them.

Interestingly, TensorFlow consistently detects the image of the Pixel from the guidance document, but not the real Pixel next to the MacBook.

Not exactly. Look at the bounding box of the detected object. What TensorFlow has detected is a very large white area with a contrasting color at its center and around its border - TensorFlow is actually pattern-matching your bright computer screen as a Pixel. Because a Pixel lacks consistently recognizable patterning, the “rules” TensorFlow has settled on for detecting a Pixel apply perfectly to that situation.

The reason your Pixel standing on end isn’t being detected is color contrast. TensorFlow cannot detect colors; it only detects color contrasts. By standing the Pixel on end, you’ve created color contrasts (i.e., shadows) within the Pixel that are never present when the Pixel is lying flat on the floor. TensorFlow says, “OK, I don’t see a bright area with contrasting colors in the center and around the border, so that must not be a Pixel.” Again, read the “TensorFlow in CENTERSTAGE” document for more background.

Glad you got it working. I look forward to seeing your logs if the crashing keeps happening to you.

-Danny

Thanks, Danny. I never got any detections with the Pixel flat on the mat except at the higher angles; it did work most of the time looking down, though.

We plan to train a custom model anyway. I just wanted to see if the default object detection would work as in previous games, to illustrate for new team members. AprilTags saved the day last year.