Hello there, we are team 23260 (it’s our first year!) and we are using both EasyOpenCV (to detect our team prop position) and AprilTags (to align with the right place on the backdrop for delivery) in our autonomous.
Unfortunately, about three out of four times we run the autonomous code, opening the camera for the AprilTag step fails and leaves the system in a state where every subsequent attempt to run any OpMode (autonomous or teleop) fails to find the webcam.
Even if we go to Edit Configuration and press “Scan”, the webcam disappears.
The only way to recover from this state is a full power cycle of the Control Hub. Even the reboot that gets triggered automatically fails to get out of that bad state.
We are able to get a match log for the autonomous when the issue does NOT reproduce (here’s an example: Autonomous-works well.txt - Google Drive). Unfortunately, when the issue happens the Control Hub seems to hang and the match log is never written.
Any suggestions? Maybe this is happening because we are mixing EasyOpenCV and AprilTags?
Does anyone know how we can use just the VisionPortal for both the EasyOpenCV pipeline and the AprilTags?
Today we first use EasyOpenCV to find the prop position, then close that, and create an instance of the VisionPortal for the AprilTag part.
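What we were hoping for instead is a single portal that owns the camera for the whole autonomous, with the prop detection rewritten as a VisionProcessor. Below is our rough sketch of that idea (PropProcessor is just a hypothetical name for a VisionProcessor version of our CircleDetection, and we haven't verified this is the intended pattern):

import android.graphics.Canvas;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibration;
import org.firstinspires.ftc.vision.VisionPortal;
import org.firstinspires.ftc.vision.VisionProcessor;
import org.firstinspires.ftc.vision.apriltag.AprilTagProcessor;
import org.opencv.core.Mat;

// Hypothetical: our circle detection rewritten against the VisionProcessor
// interface instead of extending OpenCvPipeline.
public class PropProcessor implements VisionProcessor {
    @Override
    public void init(int width, int height, CameraCalibration calibration) {
        // one-time setup (allocate Mats, etc.)
    }

    @Override
    public Object processFrame(Mat frame, long captureTimeNanos) {
        // the same circle-detection logic we run today would go here
        return null;
    }

    @Override
    public void onDrawFrame(Canvas canvas, int onscreenWidth, int onscreenHeight,
                            float scaleBmpPxToCanvasPx, float scaleCanvasDensity,
                            Object userContext) {
        // optional: annotate the preview on the Driver Station
    }
}

and then in runOpMode() something like:

PropProcessor propProcessor = new PropProcessor();
AprilTagProcessor aprilTag = new AprilTagProcessor.Builder().build();

VisionPortal portal = new VisionPortal.Builder()
        .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
        .addProcessors(propProcessor, aprilTag)
        .build();

// Only one processor active at a time, but the camera is never closed:
portal.setProcessorEnabled(aprilTag, false);
// ... detect the prop ...
portal.setProcessorEnabled(propProcessor, false);
portal.setProcessorEnabled(aprilTag, true);

Is that the right way to do it?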
For reference, here is our current EasyOpenCV part:
int cameraMonitorViewId = hardwareMap.appContext.getResources().getIdentifier("cameraMonitorViewId", "id", hardwareMap.appContext.getPackageName());
webcam = OpenCvCameraFactory.getInstance().createWebcam(hardwareMap.get(WebcamName.class, "Webcam 1"), cameraMonitorViewId);
circleDetection = new CircleDetection(isRed);
webcam.setPipeline(circleDetection);
webcam.setMillisecondsPermissionTimeout(5000); // Timeout for obtaining permission is configurable. Set before opening.
webcam.openCameraDeviceAsync(new OpenCvCamera.AsyncCameraOpenListener() {
    @Override
    public void onOpened() {
        webcam.startStreaming(1280, 720, OpenCvCameraRotation.UPRIGHT);
    }

    @Override
    public void onError(int errorCode) {
    }
});
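(While writing this up we also noticed our onError handler is empty, so an EasyOpenCV open failure would be swallowed silently. We're planning to at least log it, something along these lines; RobotLog is the SDK logger and the tag string is just our own choice:)

@Override
public void onError(int errorCode) {
    // Surface the failure instead of swallowing it, so it shows up in the logs.
    RobotLog.ee("CircleDetection", "Webcam failed to open, error code %d", errorCode);
}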
CircleDetection is our OpenCV pipeline:
public class CircleDetection extends OpenCvPipeline {
    …
    public Mat processFrame(Mat input) {
        …
    }
}
Then, as soon as we find the team prop, we no longer need the OpenCV part, so we close it like this:
webcam.stopStreaming();
webcam.closeCameraDevice();
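One theory we had is that we're building the VisionPortal while EasyOpenCV is still releasing the camera. We were thinking of switching to the async close and waiting for its callback before touching the camera again, roughly like this (cameraClosed would be a volatile boolean field we'd add; we haven't confirmed this is actually necessary):

webcam.stopStreaming();
webcam.closeCameraDeviceAsync(new OpenCvCamera.AsyncCameraCloseListener() {
    @Override
    public void onClose() {
        cameraClosed = true; // hypothetical volatile flag on our opmode
    }
});

// Don't build the VisionPortal until the webcam is actually released.
while (!cameraClosed && opModeIsActive()) {
    sleep(20);
}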
And then we create the AprilTagProcessor and the VisionPortal like this:
aprilTag = new AprilTagProcessor.Builder().build();
// Adjust Image Decimation to trade-off detection-range for detection-rate.
// eg: Some typical detection data using a Logitech C920 WebCam
// Decimation = 1 … Detect 2" Tag from 10 feet away at 10 Frames per second
// Decimation = 2 … Detect 2" Tag from 6 feet away at 22 Frames per second
// Decimation = 3 … Detect 2" Tag from 4 feet away at 30 Frames Per Second
// Decimation = 3 … Detect 5" Tag from 10 feet away at 30 Frames Per Second
// Note: Decimation can be changed on-the-fly to adapt during a match.
aprilTag.setDecimation(2);
// Create the vision portal by using a builder.
visionPortal = new VisionPortal.Builder()
        .setCamera(lom.hardwareMap.get(WebcamName.class, "Webcam 1"))
        .addProcessor(aprilTag)
        .build();
setManualExposure(6, 250); // Use low exposure time to reduce motion blur
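For completeness, our setManualExposure is copied more or less from the SDK AprilTag driving sample; it looks roughly like this (it needs the ExposureControl, GainControl and java.util.concurrent.TimeUnit imports):

private void setManualExposure(int exposureMS, int gain) {
    if (visionPortal == null) {
        return;
    }

    // Wait until the portal is actually streaming before touching camera controls.
    while (opModeIsActive() && visionPortal.getCameraState() != VisionPortal.CameraState.STREAMING) {
        sleep(20);
    }

    ExposureControl exposureControl = visionPortal.getCameraControl(ExposureControl.class);
    if (exposureControl.getMode() != ExposureControl.Mode.Manual) {
        exposureControl.setMode(ExposureControl.Mode.Manual);
        sleep(50);
    }
    exposureControl.setExposure((long) exposureMS, TimeUnit.MILLISECONDS);
    sleep(20);

    GainControl gainControl = visionPortal.getCameraControl(GainControl.class);
    gainControl.setGain(gain);
    sleep(20);
}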
Any ideas what might be happening?
Thanks a lot!!