VisionPortal OpenCV pipeline requires robot power cycle to function after running once

The cameras we are using require MJPEG stream mode in order to run at any resolution lower than 720p. To use MJPEG mode, we have to use a pipeline built around the VisionPortal API. The code I have is somewhat working, but the more pressing issue is that after the opmode is run once and stopped, it shows no telemetry and the camera stream displays a glitchy white-patterned screen. The only way around this is a complete power cycle of the robot. The camera works perfectly fine on my laptop and on my Linux computer. The full pipeline code and the opmode that runs it are provided below. Let me know if you need the system log.

package org.firstinspires.ftc.teamcode.Pipelines;

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;

import androidx.annotation.NonNull;

import org.firstinspires.ftc.robotcore.external.Telemetry;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibration;
import org.firstinspires.ftc.vision.VisionProcessor;
import org.opencv.core.Core;
import org.opencv.core.CvException;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

import static org.firstinspires.ftc.teamcode.Pipelines.ConstantsNew.BLUE_HIGH_HSV;
import static org.firstinspires.ftc.teamcode.Pipelines.ConstantsNew.BLUE_LOW_HSV;
import static org.firstinspires.ftc.teamcode.Pipelines.ConstantsNew.CENTER_ROI_BLUE;
import static org.firstinspires.ftc.teamcode.Pipelines.ConstantsNew.CENTER_ROI_RED;
import static org.firstinspires.ftc.teamcode.Pipelines.ConstantsNew.LEFT_ROI_BLUE;
import static org.firstinspires.ftc.teamcode.Pipelines.ConstantsNew.LEFT_ROI_RED;
import static org.firstinspires.ftc.teamcode.Pipelines.ConstantsNew.RED_HIGH_HSV;
import static org.firstinspires.ftc.teamcode.Pipelines.ConstantsNew.RED_HIGH_HSV1;
import static org.firstinspires.ftc.teamcode.Pipelines.ConstantsNew.RED_LOW_HSV;
import static org.firstinspires.ftc.teamcode.Pipelines.ConstantsNew.RED_LOW_HSV1;
import static org.firstinspires.ftc.teamcode.Pipelines.ConstantsNew.RIGHT_ROI_RED;

public class VisionProcessorPipeline implements VisionProcessor {
    Mat mat;
    Mat filteredFrame;
    Rect NONCENTER_ROI;
    Rect CENTER_ROI;
    Prop undetectableLocation;
    Prop detectableNoncenter;
    Scalar lowHSV;
    Scalar highHSV;
    Scalar lowHSV1;
    Scalar highHSV1;
    static final double PERCENT_COLOR_THRESHOLD = 15;
    Prop location;
    Telemetry telemetry;
    boolean telemetryEnabled = true;
    static Paint noncenterPaint = new Paint();
    static Paint centerPaint = new Paint();
    static {
        noncenterPaint.setStyle(Paint.Style.STROKE);
        centerPaint.setStyle(Paint.Style.STROKE);
    }
    public VisionProcessorPipeline(Telemetry t, StartPosition start) {
        if (start == StartPosition.RED_AUD) {
            NONCENTER_ROI = LEFT_ROI_RED;
            CENTER_ROI = CENTER_ROI_RED;
            undetectableLocation = Prop.RIGHT;
            detectableNoncenter = Prop.LEFT;
            lowHSV = RED_LOW_HSV;
            highHSV = RED_HIGH_HSV;
            lowHSV1 = RED_LOW_HSV1;
            highHSV1 = RED_HIGH_HSV1;
        }
        else if (start == StartPosition.BLUE_AUD) {
            NONCENTER_ROI = LEFT_ROI_BLUE;
            CENTER_ROI = CENTER_ROI_BLUE;
            undetectableLocation = Prop.LEFT;
            detectableNoncenter = Prop.RIGHT;
            lowHSV = BLUE_LOW_HSV;
            highHSV = BLUE_HIGH_HSV;
        }
        else if (start == StartPosition.RED_STAGE) {
            NONCENTER_ROI = RIGHT_ROI_RED;
            CENTER_ROI = CENTER_ROI_RED;
            undetectableLocation = Prop.LEFT;
            detectableNoncenter = Prop.RIGHT;
            lowHSV = RED_LOW_HSV;
            highHSV = RED_HIGH_HSV;
        }
        else if (start == StartPosition.BLUE_STAGE) {
            NONCENTER_ROI = LEFT_ROI_BLUE;
            CENTER_ROI = CENTER_ROI_BLUE;
            undetectableLocation = Prop.RIGHT;
            detectableNoncenter = Prop.LEFT;
            lowHSV = BLUE_LOW_HSV;
            highHSV = BLUE_HIGH_HSV;
        }
        else {
            throw new IllegalArgumentException("Invalid start position passed to pipeline!");
        }
        telemetry = t;
    }
    @Override
    public void init(int width, int height, CameraCalibration calibration) {
        // No per-frame-size initialization is needed for this processor.
    }

    @Override
    public Object processFrame(@NonNull Mat frame, long captureTimeNanos) {
        mat = frame;            // reference copy: mat now refers to the same Mat object as frame
        filteredFrame = mat;    // filteredFrame refers to that same Mat as well
        if(frame.empty()) throw new CvException("Input mat was empty!");

        Imgproc.cvtColor(frame, mat, Imgproc.COLOR_BGR2HSV);

        Core.inRange(mat, lowHSV1, highHSV1, filteredFrame);
        Core.inRange(mat, lowHSV, highHSV, mat);
        //Core.add(mat, filteredFrame, mat);
        Core.bitwise_or(mat, filteredFrame, mat);

        Mat noncenter = mat.submat(NONCENTER_ROI); //sub matrices of mat
        Mat center = mat.submat(CENTER_ROI);
        if(noncenter.empty() || center.empty()) throw new CvException("Noncenter or center mat was empty!");

        // Pixels that fall inside the low/high HSV range are set to 255 (white) by
        // Core.inRange, and everything else to 0 (black), so the mask is binary.
        // Below, all elements of each ROI are summed, divided by the ROI area, and
        // then by 2.55, which gives the percentage of pixels that matched the color.

        double noncenterValue = Core.sumElems(noncenter).val[0] / NONCENTER_ROI.area() / 2.55;
        double centerValue = Core.sumElems(center).val[0] / CENTER_ROI.area() / 2.55;

        boolean inNoncenterPosition = noncenterValue > PERCENT_COLOR_THRESHOLD; // sets a limit to compare to so small objects don't accidentally trigger
        boolean inCenterPosition = centerValue > PERCENT_COLOR_THRESHOLD;

        noncenter.release();
        center.release();

        if(inNoncenterPosition) {
            location = detectableNoncenter;
            noncenterPaint.setColor(Color.GREEN);
            centerPaint.setColor(Color.RED);
        } else if(inCenterPosition) {
            location = Prop.CENTER;
            centerPaint.setColor(Color.GREEN);
            noncenterPaint.setColor(Color.RED);
        } else {
            location = undetectableLocation;
            centerPaint.setColor(Color.RED);
            noncenterPaint.setColor(Color.RED);
        }

        if (telemetryEnabled) {
            telemetry.addData("Detected position: ", String.valueOf(location));
            telemetry.addData(detectableNoncenter + " percentage", Math.round(noncenterValue) + "%");
            telemetry.addData("CENTER percentage", Math.round(centerValue) + "%");
            telemetry.update();
        }

        mat = frame;
        Imgproc.cvtColor(frame, mat, Imgproc.COLOR_BGR2RGB);

        return mat;
    }

    @Override
    public void onDrawFrame(Canvas canvas, int onscreenWidth, int onscreenHeight, float scaleBmpPxToCanvasPx, float scaleCanvasDensity, Object userContext) {
        noncenterPaint.setStrokeWidth(scaleCanvasDensity * 4);
        centerPaint.setStrokeWidth(scaleCanvasDensity * 4);

        android.graphics.Rect centerROI = makeGraphicsRect(CENTER_ROI, scaleBmpPxToCanvasPx);
        android.graphics.Rect noncenterROI = makeGraphicsRect(NONCENTER_ROI, scaleBmpPxToCanvasPx);

        Mat output = (Mat) userContext;
        Bitmap mat = Bitmap.createBitmap(output.cols(), output.rows(), Bitmap.Config.RGB_565);
        canvas.setBitmap(mat);
        output.release();

        canvas.drawRect(centerROI, centerPaint);
        canvas.drawRect(noncenterROI, noncenterPaint);
    }
    public Prop getPropLocation() {
        return location;
    }
    public void toggleTelemetry() {
        telemetryEnabled = !telemetryEnabled;
    }
    private android.graphics.Rect makeGraphicsRect(Rect rect, float scaleBmpPxToCanvasPx) {
        int left = Math.round(rect.x * scaleBmpPxToCanvasPx);
        int top = Math.round(rect.y * scaleBmpPxToCanvasPx);
        int right = left + Math.round(rect.width * scaleBmpPxToCanvasPx);
        int bottom = top + Math.round(rect.height * scaleBmpPxToCanvasPx);
        return new android.graphics.Rect(left, top, right, bottom);
    }
}
package org.firstinspires.ftc.teamcode.Autonomous;

import android.util.Size;

import com.acmerobotics.dashboard.FtcDashboard;
import com.acmerobotics.dashboard.telemetry.MultipleTelemetry;
import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;

import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.teamcode.Pipelines.StartPosition;
import org.firstinspires.ftc.teamcode.Pipelines.VisionProcessorPipeline;
import org.firstinspires.ftc.vision.VisionPortal;

@Autonomous
public class RunVisionProcessorCam1 extends LinearOpMode {
    @Override
    public void runOpMode() throws InterruptedException {
        telemetry = new MultipleTelemetry(telemetry, FtcDashboard.getInstance().getTelemetry());

        VisionProcessorPipeline detector = new VisionProcessorPipeline(telemetry, StartPosition.RED_AUD);

        VisionPortal visionPortal = new VisionPortal.Builder()
                .setStreamFormat(VisionPortal.StreamFormat.MJPEG)
                .setCameraResolution(new Size(640, 480))
                .addProcessor(detector)
                .enableLiveView(true)
                .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
                .build();

        visionPortal.setProcessorEnabled(detector, true);

        waitForStart();
        visionPortal.setProcessorEnabled(detector, false);
        visionPortal.stopLiveView();
        visionPortal.stopStreaming();
        visionPortal.close();
    }
}

VisionPortal is not required to use MJPEG. The raw EOCV API supports MJPEG. VisionPortal is simply a wrapper around EOCV.

Can you please provide the camera model you are using as well as an example of the image pattern you’re talking about? The log would be helpful as well.

Also, please try removing the manual cleanup lines from the end of the program:

        visionPortal.stopLiveView();
        visionPortal.stopStreaming();
        visionPortal.close();

They are not needed, as this is done automatically, and in the currently released SDK there are a couple of race conditions where calling them could cause issues.
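
For illustration, the end of your runOpMode() could look something like this once those calls are removed (this sketch reuses your detector and visionPortal variables and, just as an example, reads the pipeline's getPropLocation() result; the portal is torn down automatically when the opmode ends):

        waitForStart();

        // Read the pipeline's latest result before disabling it.
        telemetry.addData("Detected position", detector.getPropLocation());
        telemetry.update();

        visionPortal.setProcessorEnabled(detector, false);
        // No stopLiveView()/stopStreaming()/close() here; the SDK handles
        // shutdown on its own when the opmode stops.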

The camera is an Arducam OV9782, which only supports 720p and 800p at 10 Hz in YUY2 mode, but it supports many more resolutions and frame rates in MJPEG mode. I will post the complete system log when I can, and I will send the image at the same time.

I forgot to mention that I didn’t find a way to use the raw EasyOpenCV API in MJPEG mode, but if there is a way, that would be much easier.

FWIW, when I added MJPEG support to the UVC driver and EOCV, my test camera was the Arducam OV9281, and I don’t recall having any issues running multiple times in a row.

As for using EOCV in MJPEG mode, see here.
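
Roughly, an EOCV-only opmode using MJPEG could look something like the sketch below. The class name, camera name ("Webcam 1"), resolution, and the passthrough pipeline are placeholders to adapt to your own code; the important part is typing the camera as OpenCvWebcam, since only OpenCvWebcam has the startStreaming() overload that takes a StreamFormat:

package org.firstinspires.ftc.teamcode.Autonomous;

import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;

import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.opencv.core.Mat;
import org.openftc.easyopencv.OpenCvCamera;
import org.openftc.easyopencv.OpenCvCameraFactory;
import org.openftc.easyopencv.OpenCvCameraRotation;
import org.openftc.easyopencv.OpenCvPipeline;
import org.openftc.easyopencv.OpenCvWebcam;

@Autonomous
public class EocvMjpegSketch extends LinearOpMode {
    @Override
    public void runOpMode() throws InterruptedException {
        // Live-view container on the Robot Controller screen.
        int monitorId = hardwareMap.appContext.getResources().getIdentifier(
                "cameraMonitorViewId", "id", hardwareMap.appContext.getPackageName());

        // The OpenCvWebcam type (cast here) is what exposes the
        // startStreaming overload that accepts a StreamFormat.
        OpenCvWebcam webcam = (OpenCvWebcam) OpenCvCameraFactory.getInstance()
                .createWebcam(hardwareMap.get(WebcamName.class, "Webcam 1"), monitorId);

        // Placeholder passthrough pipeline; swap in your own OpenCvPipeline.
        webcam.setPipeline(new OpenCvPipeline() {
            @Override
            public Mat processFrame(Mat input) {
                return input;
            }
        });

        webcam.openCameraDeviceAsync(new OpenCvCamera.AsyncCameraOpenListener() {
            @Override
            public void onOpened() {
                // The StreamFormat argument is what selects MJPEG.
                webcam.startStreaming(640, 480, OpenCvCameraRotation.UPRIGHT,
                        OpenCvWebcam.StreamFormat.MJPEG);
            }

            @Override
            public void onError(int errorCode) {
                telemetry.addData("Camera open error", errorCode);
                telemetry.update();
            }
        });

        waitForStart();
    }
}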

Thank you so much. It is much easier to use EOCV than to rewrite the whole thing in the VisionPortal API. Also, I believe the only differences between that camera and the one we’re using are that ours is color and only 100 Hz. I will still upload the log and picture in case you want to debug the problem we were having, but that solves the main issue (as long as we don’t have any problems with EOCV and the camera).
Picture: [attached image of the glitchy camera stream]
Log: robotControllerLog.txt - Pastebin.com

Only one problem: we’re apparently using an old version of EOCV, and we can’t really move everything to the 9.0.1 SDK. Can we just change the Maven dependencies to 9.0.1, or do we need to start from scratch? (We use roadrunner, so it is difficult to upgrade the SDK the usual way.)

I suspect you’re going to have the same issue with the EOCV API.

You need to update to SDK v9.x, otherwise you will fail inspection at competition.
Changing the Maven versions is not enough, as there are other files that change between versions which are not bundled in the Maven dependencies. If your repo has git history from the official SDK, you should just be able to pull from upstream.

We’re already at 9.0.0, we just haven’t gone to 9.0.1. Did EOCV get updated in that version bump? Or do we need to try to manually update EOCV? (We didn’t fork the sdk, so we can’t just pull.)

It did not get updated in 9.0.1.

Ok. Do you know if it’s possible to manually update the SDK’s version of EOCV?

It is possible, but the version of EOCV shipped with the SDK does support MJPEG; otherwise MJPEG wouldn’t work in VisionPortal.

Well, it won’t let me call startStreaming with a stream format.


EDIT: I found the problem. By default, “webcam” was an OpenCvCamera, and the method is only in OpenCvWebcam. I just needed to cast it.
And we don’t have the weird bug with EOCV alone. It might’ve been because of the onDrawFrame() code that I wrote, but I don’t know. If you want to figure out what was causing it, the log and artifacted picture are in an earlier reply.

Can you try running Vision Portal on the problematic camera without any vision processors?
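
For example, a bare test opmode along these lines (same camera, stream format, and resolution as your original opmode, just with no processors attached; the class name is a placeholder):

package org.firstinspires.ftc.teamcode.Autonomous;

import android.util.Size;

import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;

import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.vision.VisionPortal;

@Autonomous
public class BareVisionPortalTest extends LinearOpMode {
    @Override
    public void runOpMode() throws InterruptedException {
        // Same camera, stream format, and resolution as before, but no
        // vision processors, to isolate the camera/driver from the pipeline.
        VisionPortal visionPortal = new VisionPortal.Builder()
                .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
                .setStreamFormat(VisionPortal.StreamFormat.MJPEG)
                .setCameraResolution(new Size(640, 480))
                .enableLiveView(true)
                .build();

        waitForStart();

        // Keep the live view running until stop is pressed.
        while (opModeIsActive()) {
            idle();
        }
    }
}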

As expected, it works fine with an empty processor even after several runs. Most likely I just made some mistakes with my code. Thank you for the help; it was most valuable.