Occasional crash with "Invalid address passed to free: value not allocated"

@Windwoes see reply above. But also, looking at AprilTagProcessorImpl and the sample code, it appears we need to be calling the processor's processFrame() rather than getDetections()? And then implement onDrawFrame() to show it on the viewport? Do I have that right? It seems like maybe, except I may have to cast the output of processFrame(), because it's declared as returning Object rather than a list of detections.

@MR-Who I think you’re making this much more complicated than it needs to be :')

Simply adapt what I posted above for the TFOD processor to the AprilTag one. I went ahead and did that, but fair warning: I have not tested it.

package org.firstinspires.ftc.teamcode;

import android.graphics.Canvas;

import com.qualcomm.robotcore.eventloop.opmode.Disabled;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;

import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibration;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibrationHelper;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibrationIdentity;
import org.firstinspires.ftc.vision.apriltag.AprilTagDetection;
import org.firstinspires.ftc.vision.apriltag.AprilTagProcessor;
import org.firstinspires.ftc.vision.VisionProcessor;
import org.opencv.core.Mat;
import org.openftc.easyopencv.OpenCvCamera;
import org.openftc.easyopencv.OpenCvCameraFactory;
import org.openftc.easyopencv.OpenCvCameraRotation;
import org.openftc.easyopencv.OpenCvWebcam;
import org.openftc.easyopencv.TimestampedOpenCvPipeline;

@TeleOp
@Disabled
public class AprilTagFromEocv extends LinearOpMode
{
    OpenCvCamera camera;
    AprilTagProcessor aprilTagProcessor;
    
    @Override
    public void runOpMode()
    {
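        // Create the AprilTag processor with its default settings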
        aprilTagProcessor = AprilTagProcessor.easyCreateWithDefaults();

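        // Set up an EasyOpenCV webcam with a live viewport on the RC screen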
        int cameraMonitorViewId = hardwareMap.appContext.getResources().getIdentifier("cameraMonitorViewId", "id", hardwareMap.appContext.getPackageName());
        camera = OpenCvCameraFactory.getInstance().createWebcam(hardwareMap.get(WebcamName.class, "Webcam 1"), cameraMonitorViewId);
        camera.setViewportRenderer(OpenCvCamera.ViewportRenderer.NATIVE_VIEW);
        camera.setViewportRenderingPolicy(OpenCvCamera.ViewportRenderingPolicy.OPTIMIZE_VIEW);
        camera.openCameraDeviceAsync(new OpenCvCamera.AsyncCameraOpenListener()
        {
            @Override
            public void onOpened()
            {
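                // Wrap the VisionProcessor in the adapter pipeline defined below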
                MyPipeline myPipeline = new MyPipeline(aprilTagProcessor);

                if (camera instanceof OpenCvWebcam)
                {
                    myPipeline.noteCalibrationIdentity(((OpenCvWebcam) camera).getCalibrationIdentity());
                }

                camera.startStreaming(640, 480, OpenCvCameraRotation.SENSOR_NATIVE);
                camera.setPipeline(myPipeline);
            }

            @Override
            public void onError(int errorCode)
            {
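                // Called if the camera could not be opened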

            }
        });

        waitForStart();

        while (opModeIsActive())
        {
            for (AprilTagDetection det : aprilTagProcessor.getDetections())
            {
                telemetry.addLine(String.format("Tag ID %d", det.id));
            }

            // Update inside the loop so detections actually show up while running
            telemetry.update();
        }
    }

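    // Adapter that runs a VisionPortal-style VisionProcessor as a plain
    // EasyOpenCV pipeline, forwarding frames and viewport draw calls to it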
    static class MyPipeline extends TimestampedOpenCvPipeline
    {
        private final VisionProcessor processor;
        private CameraCalibrationIdentity ident;

        public MyPipeline(VisionProcessor processor)
        {
            this.processor = processor;
        }

        public void noteCalibrationIdentity(CameraCalibrationIdentity ident)
        {
            this.ident = ident;
        }

        @Override
        public void init(Mat firstFrame)
        {
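            // Look up the calibration (if any) for this camera at this resolution,
            // then let the processor initialize itself with it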
            CameraCalibration calibration = CameraCalibrationHelper.getInstance().getCalibration(ident, firstFrame.width(), firstFrame.height());
            processor.init(firstFrame.width(), firstFrame.height(), calibration);
        }

        @Override
        public Mat processFrame(Mat input, long captureTimeNanos)
        {
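            // Run the processor; whatever it returns comes back to us in
            // onDrawFrame() as userContext, so no cast is needed here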
            Object drawCtx = processor.processFrame(input, captureTimeNanos);
            requestViewportDrawHook(drawCtx);
            return input;
        }

        @Override
        public void onDrawFrame(Canvas canvas, int onscreenWidth, int onscreenHeight, float scaleBmpPxToCanvasPx, float scaleCanvasDensity, Object userContext)
        {
            processor.onDrawFrame(canvas, onscreenWidth, onscreenHeight, scaleBmpPxToCanvasPx, scaleCanvasDensity, userContext);
        }
    }
}
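Note that the Object returned by processFrame() is treated as an opaque draw context here: it gets handed to requestViewportDrawHook() and comes back into onDrawFrame() as userContext, so you never need to cast it. You still read the actual results with aprilTagProcessor.getDetections() from the OpMode loop.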

Thanks. Any complexity is left over from prior code (we also need to switch between pipelines). But per my note right above, I figured out last night that calling processFrame() rather than just getDetections() was necessary, so we'd already made changes that look a lot like your example. Now we'll see if that works.
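For the pipeline switching, we're planning on something like this minimal sketch (myOtherPipeline is a placeholder name for our second pipeline; EasyOpenCV lets you call setPipeline() at any time while streaming, taking effect on the next frame):

if (gamepad1.a)
{
    // Switch to the AprilTag adapter pipeline
    camera.setPipeline(myPipeline);
}
else if (gamepad1.b)
{
    // Switch to our other pipeline (placeholder name)
    camera.setPipeline(myOtherPipeline);
}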

OK, our revised code worked, thanks! We have our C920 set to the one resolution that doesn't have a built-in calibration, so once we fix that we should be OK!
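In case anyone else hits this: one option at a resolution without a built-in calibration is to supply lens intrinsics manually via the builder instead of easyCreateWithDefaults(). A sketch, untested by us; the numbers below are placeholders, not real C920 values:

// Placeholder fx, fy, cx, cy values; real ones come from calibrating
// the camera at the target resolution
aprilTagProcessor = new AprilTagProcessor.Builder()
        .setLensIntrinsics(600, 600, 320, 240)
        .build();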