Two webcams with one VisionPortal each

Out of curiosity I tried two webcams with one VisionPortal for each (instead of following the switchable webcam sample, which works fine). The first webcam always opens but the second always gets stuck with a status of OPENING_CAMERA_DEVICE. I get the same error even if I reverse the order of the webcams. I even tried “stop streaming” on the first webcam before starting the second but got the same results. Any suggestions?

This capability is demonstrated in a sample OpMode linked here:

The sample is in FTC Blocks. A Java user can read the documentation, the code, and its in-line comments to replicate this in OnBot Java or Android Studio.

Pay close attention to the use of Android’s “Viewport ID”. Note that FTC Blocks lists begin with index 1, while Java lists begin with index 0.
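To make the index difference concrete, here's a minimal Java sketch of the two-portal setup, assuming the SDK's `VisionPortal.makeMultiPortalView` API; the webcam names "Webcam 1" and "Webcam 2" are placeholders for whatever is in your robot configuration.

```java
// Split the camera monitor into two viewports and give each
// VisionPortal its own live view container ID.
int[] viewIds = VisionPortal.makeMultiPortalView(
        2, VisionPortal.MultiPortalLayout.VERTICAL);

VisionPortal portal1 = new VisionPortal.Builder()
        .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
        .setLiveViewContainerId(viewIds[0])   // Java arrays start at index 0...
        .build();

VisionPortal portal2 = new VisionPortal.Builder()
        .setCamera(hardwareMap.get(WebcamName.class, "Webcam 2"))
        .setLiveViewContainerId(viewIds[1])   // ...while Blocks lists start at 1
        .build();
```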

Thanks for the response. In principle the blocks example looks like what I’m trying to do. The differences have to do with the two portal view ids, which I don’t include. I’ll take a closer look for any other differences and try to make my experiment as close to the blocks example as I can get. But I don’t otherwise have much to go on because the only indication of a problem is the second camera’s unchanging status of OPENING_CAMERA_DEVICE.

Good news. I was able to export the blocks sample as Java and import the file into Android Studio. I attached two webcams to a battery-powered hub connected to the USB 3.0 port on the Robot Controller. The sample worked perfectly. So now it’s up to me to figure out what went wrong with my attempt. Thanks for your help.

Thanks for the update. Score one for the Blocks fans!

As you work with this two-portal arrangement under “real” conditions, watch out for CPU load and USB bandwidth issues. The SDK provides many tools to address this, described at these 2 pages:

It’s good that your external USB hub is independently powered. An alternate configuration is to plug each camera directly into its own USB port on the Control Hub, being aware that the USB 2.0 port is shared with the Wi-Fi radio.

For extra fun, each of your VisionPortals can use Switchable Cameras, with 2 external USB hubs. Yes, it really does work, but USB bandwidth will require close management of resolution and video format.

Have a great season!

Just to close the loop on this issue: After I called build() on each of the two instances of VisionPortal I put in a check with a timeout to make sure that the camera was streaming - this is where I was failing on the second camera. I noticed that your sample didn’t have anything like this; when I removed the check and timeout everything worked correctly. So now I have two cameras, each with its own VisionPortal: the first (forward facing) camera has two processors - one for raw frames and one for AprilTags - and the second (rear facing) has a single processor for AprilTags. If anybody wants to know how to make a processor for raw frames I would be happy to share - I’ve already given the code to two other FTC teams.
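For anyone curious what a "raw frames" processor looks like, here is a minimal sketch assuming the SDK's `VisionProcessor` interface. The class and field names are my own, not the poster's actual code.

```java
// A processor that simply keeps a copy of the latest camera frame.
public class RawFrameProcessor implements VisionProcessor {
    private final AtomicReference<Mat> lastFrame = new AtomicReference<>();

    @Override
    public void init(int width, int height, CameraCalibration calibration) {
        // Nothing to set up for a simple frame grab.
    }

    @Override
    public Object processFrame(Mat frame, long captureTimeNanos) {
        lastFrame.set(frame.clone()); // copy: the SDK reuses the Mat buffer
        return null;                  // no user context needed for onDrawFrame
    }

    @Override
    public void onDrawFrame(Canvas canvas, int onscreenWidth, int onscreenHeight,
                            float scaleBmpPxToCanvasPx, float scaleCanvasDensity,
                            Object userContext) {
        // No annotations: the live view shows the unmodified camera image.
    }

    public Mat getLastFrame() {
        return lastFrame.get();
    }
}
```

Add it to a portal with `.addProcessor(new RawFrameProcessor())` on the builder.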

I am very incredulous that removing the check to make sure the camera was streaming is what fixed it.

You mentioned you were connecting both cameras to a USB hub; this indicates to me that bandwidth was probably your issue. A single USB hub cannot handle two 640x480 YUY2 streams. Note that the Blocks sample you referenced sets the stream format to MJPEG mode (much lower bandwidth usage, at the expense of CPU time for running a decompression routine). I'm guessing that was the difference.
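For reference, requesting MJPEG is a one-line builder call; this sketch assumes the standard `VisionPortal.Builder` API, and the webcam name is a placeholder.

```java
// Request MJPEG instead of the default (often YUY2) stream format,
// trading CPU decompression time for much lower USB bandwidth.
VisionPortal portal = new VisionPortal.Builder()
        .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
        .setCameraResolution(new Size(640, 480))
        .setStreamFormat(VisionPortal.StreamFormat.MJPEG)
        .build();
```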

I only made the one change, and that was the difference between not working and working. The resolution was set to 640x480 and MJPEG on both cameras. I will pay attention to your other references on bandwidth.

Would you be able to send the code that demonstrates the issue?

I’m happy to share the code. Shall I just put it directly in the reply?

Could you drop it in a GitHub gist?

OK, this was my mistake. When I saw your comment about being incredulous, it prompted me to go back and review my commits on GitHub. So on Oct. 5, when I wrote that I only made one change (removing the timeout), that was not true. I also changed one line when constructing the VisionPortal (marked with my initials):
visionPortal = new VisionPortal.Builder()
        .setCameraResolution(new Size(configuredWebcam.resolutionWidth, configuredWebcam.resolutionHeight))
        .enableLiveView(false) //##PY changed to false 10/5/23
        // If set "false", monitor shows camera view without annotations.

        // Set and enable the processor(s).
So with enableLiveView(false) I was able to put the timeout check back in and everything has worked correctly since.
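For anyone wanting to replicate the check, here is a sketch of a "wait until streaming" loop with a timeout, assuming the SDK's `VisionPortal.getCameraState()` API inside a `LinearOpMode`; the portal variable name and 2-second timeout are illustrative only.

```java
// Block until the camera reports STREAMING, or give up after a timeout.
ElapsedTime timer = new ElapsedTime();
while (portal.getCameraState() != VisionPortal.CameraState.STREAMING) {
    if (timer.milliseconds() > 2000) {
        telemetry.addLine("Camera never reached STREAMING");
        telemetry.update();
        break;
    }
    sleep(20); // sleep() is available inside a LinearOpMode
}
```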

A little background: when we were using EasyOpenCV I showed our more advanced students how to use a CountDownLatch in openCameraDeviceAsync…onOpened to signal the main thread that the camera was open. With the new VisionPortal API I didn’t want to change the source of VisionPortalImpl to inject the CountDownLatch so I went with the timer instead.
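The CountDownLatch pattern described above can be shown with plain Java; this is a stand-in async "camera open" callback to make the shape clear. In EasyOpenCV the `countDown()` call would go inside `AsyncCameraOpenListener.onOpened()`.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchDemo {
    public static boolean openAndWait(long timeoutMs) throws InterruptedException {
        CountDownLatch cameraOpened = new CountDownLatch(1);

        // Stand-in for openCameraDeviceAsync: the callback fires on another thread.
        new Thread(() -> {
            // ... device opens here ...
            cameraOpened.countDown(); // signal the waiting thread
        }).start();

        // Caller blocks until opened, or gives up after the timeout.
        return cameraOpened.await(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(openAndWait(1000) ? "opened" : "timed out");
    }
}
```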

Compliments on the new API by the way. It’s very well put together.

I would greatly appreciate a little deeper understanding of this if someone can offer.

A team I coach is running 4 Logitech C270s on their bot. In any given OpMode, they never use more than 3, and they never use more than 1 at a time (the cams do different sorts of computer vision processing). All 4 webcams are plugged into an unpowered (yes, I know, not recommended) USB hub connected to the 3.0 port on the CH.

They started by creating VisionPortals for each webcam and ran smack into the issue described above. They pivoted to a SwitchableCamera with one VisionPortal and it all works as expected with reasonable performance in 640x480 with default streaming formats.

Then they went to optimize and discovered that while SwitchableCameras support manual exposure settings, they don’t support manual gain settings. In addition, changes made to the “current” camera in a switchable camera setup seem to apply to all of the cameras in that setup. They are doing different things with the different cameras, so they kludged together something that adjusts the exposure on the fly as they switch between cameras and ignores the gain. This works remarkably well.

Meanwhile, I’m trying to help them use the SDK correctly, so I found the information above and set up some test code which makes the needed call to makeMultiPortalView and provides the IDs to the VisionPortal builders. Interestingly, this code fails to get past OPENING_CAMERA_DEVICE on the first Vision Portal being built if enableLiveView is set to true. Changing that setting to false (on all 3 VisionPortals) allows all 3 Vision Portals to get up and running correctly. But of course, the kids can’t “see” what the cameras see using scrcpy so it’s not great.

So this seems somewhat like expected behavior, given all the cautions that have been issued around underpowered USB busses. But what eludes me is what USB bandwidth has to do with this situation. I imagine the USB bus is the critical link to get the images from the webcams back to the processor, but once the images are in RAM on the CH and being manipulated there, the code under enableLiveView just needs to "render" them on the Android device, no? So if the bandwidth and power consumption are adequate for the kids' use case with enableLiveView set to false, why wouldn't they be with enableLiveView set to true?

Thanks in advance for any insight!

First, congratulations on getting MultiPortal working, if barely. But actually it can and does work quite robustly and normally. Just yesterday I posted a very simple 3-portal solution that you might find informative.

My simple OpMode runs fine with LiveView enabled for all 3 portals; here’s the screenshot from scrcpy.


Indeed there are issues with switchableCamera; gain and exposure cannot be controlled independently on separate webcams, as discussed here. Under MultiPortal, separate portals do support all Camera Controls.
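As a sketch of what "separate portals support all Camera Controls" looks like in Java, assuming the SDK's `getCameraControl` API; the portal variable name and the exposure/gain values are illustrative only.

```java
// Under MultiPortal each portal exposes its own controls, so exposure
// and gain can be set independently per webcam.
ExposureControl exposure = portal1.getCameraControl(ExposureControl.class);
exposure.setMode(ExposureControl.Mode.Manual);
exposure.setExposure(6, TimeUnit.MILLISECONDS);

GainControl gain = portal1.getCameraControl(GainControl.class);
gain.setGain(100);
```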


I have wondered the same thing about powered vs. unpowered USB hubs. But my experience has been that powered simply works better, for webcams and gamepads. My working theory is that the 5VDC boost gives signals a better chance (or speed?) of transmission through a physically marginal USB connection. Perhaps related to an error-checking algorithm?

Indeed, it was your post on GitHub that inspired me to give it another shot this morning, which turned up the new results. A powered hub and battery are on the way, but I still don't "get" what USB power has to do with viewing the results of the image processing on the Android screen. I'm hoping Windwoes might be inspired to comment. :slight_smile:

Other recommendations for robot-friendly USB hubs and power supplies are gratefully appreciated, for my teams and any others who find their way here as well!


The hub and battery you ordered should work fine. The only units mentioned in official FTC documentation are shown here.

That article also points out that a (carefully designed) custom cable will allow powering the USB hub from the Control Hub itself. Personally I recommend a standalone battery, when there’s space on the robot and you don’t mind recharging it.