Occasional crash with Invalid address passed to free: value not allocated

Actually, before we go down that rabbit hole…

The “RenderThread” is a bit of a clue. The invalid free is happening on the Android thread responsible for rendering the application UI. I notice you are using a custom VisionProcessor (ObjectDetectionVisionProcessor.java). I believe that the onDrawFrame() function is invoked by the Android RenderThread.

Try this: in SilverTitansVisionPortal.java, where you build the VisionPortal, try NOT adding your objectDetectionVisionProcessor to the portal on line 40.

Can you share the stack trace of the thread causing the invalid free? Is there some tool we can use to analyze the system dump ourselves? You seem to have gotten to the crux of it rather rapidly.

If we don’t add ObjectDetectionVisionProcessor and the crash doesn’t happen, we won’t know where the problem lies. Also, of course, we do want to use our vision processor.

Firstly, it’s not a Java crash, so there isn’t going to be a stack trace pointing to your code, only a native backtrace, as Android calls it. But we don’t have that either; that’s what I was asking about when I mentioned the tombstone earlier.

I just made an educated guess at what’s probably going on. I am the author of both VisionPortal and EasyOpenCV :slight_smile:

If the crash doesn’t happen without your processor, we at least have an extremely narrowed scope of investigation for where the issue may be.

Can you help us with the VisionProcessor? I think we know where the problem lies. We implement processFrame(Mat frame, long captureTimeNanos) and return an object which is really a Map of the objects found by the processor, indexed by object type. When onDrawFrame() is called, we expect that Map to be passed back in, and we use it to paint the objects seen in processFrame() onto the canvas. I don’t know if the two calls happen on the same thread or not; there is no synchronization between them, and that may be the source of the free issue. We’ve changed onDrawFrame() to do nothing to see if that removes the problem.

Also, didn’t know we were being helped by essentially Royalty! The author of VisionPortal and EasyOpenCV. Wow!

You have the right idea for how to implement the coordination between processFrame() and onDrawFrame().

Correct, there is no synchronization between processFrame() and onDrawFrame(), and they will be called on different threads. They may also be called concurrently, and moreover there is no guarantee that you will get an onDrawFrame() call for every processFrame() call. If the viewport renderer is lagging behind, it will drop frames to prevent backlogging the actual vision processing. Also, onDrawFrame() itself may be called from multiple threads, one for the local view and one for the remote view on the DS.

Synchronization is not forced between the two functions because it is not needed in all cases; you can always add a mutex if you need it. That being said, the easiest way to deal with it is to simply return a new object instance from processFrame() each time. Then no explicit synchronization is necessary. This is how the AprilTagProcessor implementation does it. Its onDrawFrame() function has a mutex just for itself to prevent being called from two threads simultaneously, for reasons I don’t completely remember; I think it was because I was using the same canvasAnnotator instance and didn’t want the scale to change mid-draw. Looking at it now, that probably could have been avoided by passing the draw params directly to the drawOutlineMarker() etc. calls instead of having a noteDrawParams() function.
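
As a rough illustration of that pattern (this is just a sketch, not the actual AprilTagProcessor code; ObjectDetectionVisionProcessor internals, DetectedObject, and the detection logic are placeholders), the idea is that processFrame() builds a fresh result object every frame and onDrawFrame() only reads the userContext handed to it, so the two threads never share mutable state:

import android.graphics.Canvas;

import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibration;
import org.firstinspires.ftc.vision.VisionProcessor;
import org.opencv.core.Mat;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class ObjectDetectionVisionProcessor implements VisionProcessor
{
    // Placeholder result type; substitute whatever your detector actually produces
    public static class DetectedObject { /* position, size, etc. */ }

    @Override
    public void init(int width, int height, CameraCalibration calibration)
    {
        // One-time setup (buffers, lookup tables, etc.)
    }

    @Override
    public Object processFrame(Mat frame, long captureTimeNanos)
    {
        // Build a brand-new map for this frame; never reuse or mutate a previous one
        Map<String, DetectedObject> results = new HashMap<>();

        // ... run detection on 'frame' and populate 'results' here ...

        // Hand the snapshot to the viewport; keep no reference to it afterwards
        return Collections.unmodifiableMap(results);
    }

    @Override
    @SuppressWarnings("unchecked")
    public void onDrawFrame(Canvas canvas, int onscreenWidth, int onscreenHeight,
                            float scaleBmpPxToCanvasPx, float scaleCanvasDensity,
                            Object userContext)
    {
        // Only read the snapshot returned by the matching processFrame() call;
        // do not touch any fields that processFrame() writes to
        Map<String, DetectedObject> results = (Map<String, DetectedObject>) userContext;

        // ... draw each entry onto 'canvas' using the scale factors provided ...
    }
}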

If you’re thinking to yourself “gee this is a lot more complicated than drawing directly on the image buffer in a standard EOCV pipeline,” you’re not wrong… the complexity stems from the fact that we needed to support multiple processors annotating the same frame while leaving ourselves the future option to run processors in parallel on different threads (although currently they are run in series).

At the time, I was of the opinion that people advanced enough to write their own vision code would just want to keep working with the EasyOpenCV APIs directly, although evidently that doesn’t seem to be the case as you are one of many to be interested in using custom VisionProcessors.

All that being said, even if you have concurrency issues in your processor, I’d usually expect that to manifest as a Java crash. The free() error is not even something that’s possible to cause from Java; it’s only possible in C/C++. The fact that it’s a native crash is very odd, and it makes me think the crash is perhaps happening inside the native code of the Canvas graphics API.

Thanks for the detailed explanation. We do have synchronization in code on the returned object, so maybe that is not the problem. As you mentioned, even if we did not have proper synchronization, it should only result in a Java issue, not a “free” issue.

Good to know that the processors are not called in parallel; we thought they were. I think it might be best for us to move to a pure EasyOpenCV implementation that finds our objects and the AprilTags in a pipeline we write.

That’s what I originally anticipated more advanced teams would gravitate towards. I’m attaching a sample from our internal repository that shows using the Tfod VisionProcessor from EOCV, you could use it as a reference for using the AprilTag one.

That being said, we still have not confirmed whether it is in fact an issue with your custom processor or not.

/*
 * Copyright (c) 2023 FIRST
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without modification,
 * are permitted (subject to the limitations in the disclaimer below) provided that
 * the following conditions are met:
 *
 * Redistributions of source code must retain the above copyright notice, this list
 * of conditions and the following disclaimer.
 *
 * Redistributions in binary form must reproduce the above copyright notice, this
 * list of conditions and the following disclaimer in the documentation and/or
 * other materials provided with the distribution.
 *
 * Neither the name of FIRST nor the names of its contributors may be used to
 * endorse or promote products derived from this software without specific prior
 * written permission.
 *
 * NO EXPRESS OR IMPLIED LICENSES TO ANY PARTY'S PATENT RIGHTS ARE GRANTED BY THIS
 * LICENSE. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
 * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
 * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
 * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

package org.firstinspires.ftc.robotcontroller.tests;

import android.graphics.Canvas;

import com.qualcomm.robotcore.eventloop.opmode.Disabled;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;

import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibration;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibrationHelper;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibrationIdentity;
import org.firstinspires.ftc.vision.tfod.TfodProcessor;
import org.firstinspires.ftc.vision.VisionProcessor;
import org.opencv.core.Mat;
import org.openftc.easyopencv.OpenCvCamera;
import org.openftc.easyopencv.OpenCvCameraFactory;
import org.openftc.easyopencv.OpenCvCameraRotation;
import org.openftc.easyopencv.OpenCvInternalCamera2;
import org.openftc.easyopencv.OpenCvWebcam;
import org.openftc.easyopencv.TimestampedOpenCvPipeline;

@TeleOp
@Disabled
public class TestTfodProcessorWithEocv extends LinearOpMode
{
    boolean USE_WEBCAM = false;
    OpenCvCamera camera;
    TfodProcessor tfProcessor;

    private static final String TFOD_MODEL_ASSET = "CenterStage.tflite";
    private static final String[] TFOD_LABELS = {
            "Pixel"
    };

    @Override
    public void runOpMode()
    {
        tfProcessor = new TfodProcessor.Builder()
            .setModelAssetName(TFOD_MODEL_ASSET)
            .setModelLabels(TFOD_LABELS)
            .setIsModelTensorFlow2(true)
            .setIsModelQuantized(true)
            .setModelInputSize(300)
            .setModelAspectRatio(16.0 / 9.0)
            .build();

        int cameraMonitorViewId = hardwareMap.appContext.getResources().getIdentifier("cameraMonitorViewId", "id", hardwareMap.appContext.getPackageName());

        if (USE_WEBCAM)
        {
            camera = OpenCvCameraFactory.getInstance().createWebcam(hardwareMap.get(WebcamName.class, "Webcam 1"), cameraMonitorViewId);
        }
        else
        {
            camera = OpenCvCameraFactory.getInstance().createInternalCamera2(OpenCvInternalCamera2.CameraDirection.BACK, cameraMonitorViewId);
        }

        camera.setViewportRenderer(OpenCvCamera.ViewportRenderer.NATIVE_VIEW);
        camera.setViewportRenderingPolicy(OpenCvCamera.ViewportRenderingPolicy.OPTIMIZE_VIEW);
        camera.openCameraDeviceAsync(new OpenCvCamera.AsyncCameraOpenListener()
        {
            @Override
            public void onOpened()
            {
                MyPipeline myPipeline = new MyPipeline(tfProcessor);

                if (camera instanceof OpenCvWebcam)
                {
                    myPipeline.noteCalibrationIdentity(((OpenCvWebcam) camera).getCalibrationIdentity());
                }

                camera.startStreaming(640, 480, OpenCvCameraRotation.SENSOR_NATIVE);
                camera.setPipeline(myPipeline);
            }

            @Override
            public void onError(int errorCode)
            {

            }
        });

        waitForStart();

        while (opModeIsActive())
        {
            for (Recognition recognition : tfProcessor.getRecognitions())
            {
                telemetry.addData("Image", "%s (%.0f %% Conf.)", recognition.getLabel(), recognition.getConfidence() * 100);
            }

            // Push telemetry each loop iteration so the recognitions are visible while running
            telemetry.update();
        }
    }

    static class MyPipeline extends TimestampedOpenCvPipeline
    {
        private VisionProcessor processor;
        private CameraCalibrationIdentity ident;

        public MyPipeline(VisionProcessor processor)
        {
            this.processor = processor;
        }

        public void noteCalibrationIdentity(CameraCalibrationIdentity ident)
        {
            this.ident = ident;
        }

        @Override
        public void init(Mat firstFrame)
        {
            CameraCalibration calibration = CameraCalibrationHelper.getInstance().getCalibration(ident, firstFrame.width(), firstFrame.height());
            processor.init(firstFrame.width(), firstFrame.height(), calibration);
        }

        @Override
        public Mat processFrame(Mat input, long captureTimeNanos)
        {
            Object drawCtx = processor.processFrame(input, captureTimeNanos);
            requestViewportDrawHook(drawCtx);
            return input;
        }

        @Override
        public void onDrawFrame(Canvas canvas, int onscreenWidth, int onscreenHeight, float scaleBmpPxToCanvasPx, float scaleCanvasDensity, Object userContext)
        {
            processor.onDrawFrame(canvas, onscreenWidth, onscreenHeight, scaleBmpPxToCanvasPx, scaleCanvasDensity, userContext);
        }
    }
}

@Windwoes Returning to this: our team has working pipelines under the old EOCV; we already have one for our Team Element. They also have an AprilTagJNI detector pipeline, but it appears to find a different pose than the new processor does. (I assume AprilTag plugin 2.0.0 is built into the SDK as well? We haven’t compiled yet, but Android Studio finds all the imports.)

So, my question is: Is there a quick way to get the quality FTC pose from the current AprilTagJNI? I know there’s some relatively simple transformation happening, but it’s not clear whether it’s complete or what it is.
Thanks!

I’m not completely clear on what you are asking, but, yes: AprilTag plugin 2.0.0 is included in the SDK.

You can see the transformation that happens here: https://github.com/OpenFTC/Extracted-RC/blob/d6468783e0248b92369171d988ce0581e47bae09/Vision/src/main/java/org/firstinspires/ftc/vision/apriltag/AprilTagProcessorImpl.java
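
Roughly speaking (this is a from-memory sketch, not a copy of that file, so verify exact signs, axis order, and the yaw/pitch/roll extraction against the linked source): the solver’s raw pose is in the AprilTag library’s camera frame (x right, y down, z pointing out of the lens), and it gets remapped into the FTC convention used by ftcPose (x right, y forward, z up), with range, bearing, and elevation then derived from the remapped translation. The yaw/pitch/roll angles come from decomposing the raw rotation matrix, which is omitted here.

// From-memory sketch of the raw AprilTagJNI translation -> ftcPose-style fields.
// NOT copied from AprilTagProcessorImpl; check the linked file for the exact definitions.
static double[] rawToFtcTranslation(double rawX, double rawY, double rawZ)
{
    // AprilTag/OpenCV camera frame: x right, y down, z out of the lens.
    // FTC convention: x right, y forward, z up.
    double x = rawX;
    double y = rawZ;
    double z = -rawY;

    // Derived terms; exact SDK definitions may differ slightly (e.g. 2D vs 3D range)
    double range = Math.sqrt(x * x + y * y + z * z);      // distance to the tag center
    double bearing = Math.atan2(-x, y);                   // rotation about z needed to face the tag
    double elevation = Math.atan2(z, Math.hypot(x, y));   // angle above the horizontal

    return new double[] { x, y, z, range, bearing, elevation };
}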

Thanks - this is very helpful. I was asking how to implement the new VisionProcessor in a pipeline without having to redo all of the setup we already have working for our Team Element (the OpenCvCameraFactory, startStreaming, listeners, etc.). I thought I saw that posted somewhere, but I can’t find it again. But since our current AprilTag pipeline uses AprilTagJNI directly, I suppose we can just do our own transformation.

In a previous post on this thread I posted a sample OpMode showing how to run the TFOD processor from a pipeline. It should be straightforward to adapt for the AprilTag processor.

Thanks - I read that, which is what prompted my question. Our current pipelines are separate class files, and they don’t take a processor as a parameter (though I think we added a different class as a parameter because we directly change the other class inside the pipeline). I guess we can just adjust the parameters as we see fit? Or create the processor inside the pipeline class?

Either of those would work.

Thanks, I think we’ve got it now.

OK, I’m back. We made the following pipeline (edited for brevity, but I left in a couple of the commented out lines from last year’s processor). processFrame gets called, and the viewport shows the image, but detections is always zero. I am certain we are doing something wrong, but can’t figure out what we are missing. The team qualified for worlds today, but we were at a real disadvantage without the ability to see AprilTags. Any thoughts appreciated.

package org.firstinspires.ftc.teamcode.NewCustomSwerve.SubSystems;
/*
 * Copyright (c) 2021 OpenFTC Team
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */
import ...

public class AprilTagDetectionPipeline extends OpenCvPipeline
{
    // STATIC CONSTANTS

    // Lens intrinsics
    // UNITS ARE PIXELS
    // NOTE: this calibration is for the C920 webcam at 800x448.
    // You will need to do your own calibration for other configurations!
    public static double fx = 578.272;
    public static double fy = 578.272;
    public static double cx = 402.145;
    public static double cy = 221.506;
    public int tagID=0;

    // UNITS ARE METERS
    public static double TAG_SIZE = 0.166;

    // instance variables

    private Mat grey = new Mat();
    private ArrayList<AprilTagDetection> detections = new ArrayList<>();
    private AprilTagProcessor aprilTag;
    //private ArrayList<AprilTagDetection> detectionsUpdate = new ArrayList<>();
    private List<AprilTagDetection> currentDetections;
    private final Object detectionsUpdateSync = new Object();

    Mat cameraMatrix;

    double tagsizeX = TAG_SIZE;
    double tagsizeY = TAG_SIZE;

    private float decimation;
    private boolean needToSetDecimation;
    private final Object decimationSync = new Object();

    private Webcam webcam;



    public AprilTagDetectionPipeline(Webcam webcam) {
        this.webcam = webcam;
     //   constructMatrix();

        // Create the AprilTag processor.
        aprilTag = new AprilTagProcessor.Builder()

                // The following default settings are available to un-comment and edit as needed.
                .setDrawAxes(true)
                //.setDrawCubeProjection(false)
                .setDrawTagOutline(true)
                //.setTagFamily(AprilTagProcessor.TagFamily.TAG_36h11)
                .setTagLibrary(AprilTagGameDatabase.getCenterStageTagLibrary())
                .setOutputUnits(DistanceUnit.INCH, AngleUnit.RADIANS)

                // == CAMERA CALIBRATION ==
                // If you do not manually specify calibration parameters, the SDK will attempt
                // to load a predefined calibration for your camera.
                //.setLensIntrinsics(578.272, 578.272, 402.145, 221.506)
                // ... these parameters are fx, fy, cx, cy.

                .build();

        // Adjust Image Decimation to trade-off detection-range for detection-rate.
        // eg: Some typical detection data using a Logitech C920 WebCam
        // Decimation = 1 ..  Detect 2" Tag from 10 feet away at 10 Frames per second
        // Decimation = 2 ..  Detect 2" Tag from 6  feet away at 22 Frames per second
        // Decimation = 3 ..  Detect 2" Tag from 4  feet away at 30 Frames Per Second (default)
        // Decimation = 3 ..  Detect 5" Tag from 10 feet away at 30 Frames Per Second (default)
        // Note: Decimation can be changed on-the-fly to adapt during a match.
        //aprilTag.setDecimation(3);
    }


    @Override
    public void init(Mat frame)
    {
        // Allocate a native context object. See the corresponding deletion in the finalizer
        //nativeApriltagPtr = AprilTagDetectorJNI.createApriltagDetector(AprilTagDetectorJNI.TagFamily.TAG_36h11.string, 3, 3);

       // aprilTag.init();

    }



    @Override
    public void finalize()
    {
        // Delete the native context we created in the init() function
      //  AprilTagDetectorJNI.releaseApriltagDetector(nativeApriltagPtr);
    }

    @Override
    public Mat processFrame(Mat input)
    {

        // Convert to greyscale
        Imgproc.cvtColor(input, grey, Imgproc.COLOR_RGBA2GRAY);

        currentDetections = aprilTag.getDetections();
        webcam.telemetry.addData("# AprilTags Detected", currentDetections.size());

        // Step through the list of detections and display info for each one.
        for (AprilTagDetection detection : currentDetections) {
            if (detection.metadata != null) {
                webcam.tagPoseData[detection.id-1]=detection.ftcPose;
                webcam.telemetry.addLine(String.format("\n==== (ID %d) %s", detection.id, detection.metadata.name));
                webcam.telemetry.addLine(String.format("XYZ %6.1f %6.1f %6.1f  (inch)", detection.ftcPose.x, detection.ftcPose.y, detection.ftcPose.z));
                webcam.telemetry.addLine(String.format("PRY %6.1f %6.1f %6.1f  (deg)", detection.ftcPose.pitch, detection.ftcPose.roll, detection.ftcPose.yaw));
                webcam.telemetry.addLine(String.format("RBE %6.1f %6.1f %6.1f  (inch, deg, deg)", detection.ftcPose.range, detection.ftcPose.bearing, detection.ftcPose.elevation));
            } else {
                webcam.telemetry.addLine(String.format("\n==== (ID %d) Unknown", detection.id));
                webcam.telemetry.addLine(String.format("Center %6.0f %6.0f   (pixels)", detection.center.x, detection.center.y));
            }
        }   // end for() loop

        // Add "key" information to telemetry
        webcam.telemetry.addLine("\nkey:\nXYZ = X (Right), Y (Forward), Z (Up) dist.");
        webcam.telemetry.addLine("PRY = Pitch, Roll & Yaw (XYZ Rotation)");
        webcam.telemetry.addLine("RBE = Range, Bearing & Elevation");


        webcam.telemetry.update();

        return input;
    }

    public void setDecimation(float decimation)
    {
        synchronized (decimationSync)
        {
            this.decimation = decimation;
            needToSetDecimation = true;
        }
    }

    void constructMatrix()
    {
        //     Construct the camera matrix.
        //
        //      --         --
        //     | fx   0   cx |
        //     | 0    fy  cy |
        //     | 0    0   1  |
        //      --         --
        //

        cameraMatrix = new Mat(3,3, CvType.CV_32FC1);

        cameraMatrix.put(0,0, fx);
        cameraMatrix.put(0,1,0);
        cameraMatrix.put(0,2, cx);

        cameraMatrix.put(1,0,0);
        cameraMatrix.put(1,1,fy);
        cameraMatrix.put(1,2,cy);

        cameraMatrix.put(2, 0, 0);
        cameraMatrix.put(2,1,0);
        cameraMatrix.put(2,2,1);
    }

    /**
     * Draw a 3D axis marker on a detection. (Similar to what Vuforia does)
     *
     * @param buf the RGB buffer on which to draw the marker
     * @param length the length of each of the marker 'poles'
     * @param rvec the rotation vector of the detection
     * @param tvec the translation vector of the detection
     * @param cameraMatrix the camera matrix used when finding the detection
     */
    void drawAxisMarker(Mat buf, double length, int thickness, Mat rvec, Mat tvec, Mat cameraMatrix)
    {
        // The points in 3D space we wish to project onto the 2D image plane.
        // The origin of the coordinate space is assumed to be in the center of the detection.
        MatOfPoint3f axis = new MatOfPoint3f(
                new Point3(0,0,0),
                new Point3(length,0,0),
                new Point3(0,length,0),
                new Point3(0,0,-length)
        );

        // Project those points
        MatOfPoint2f matProjectedPoints = new MatOfPoint2f();
        Calib3d.projectPoints(axis, rvec, tvec, cameraMatrix, new MatOfDouble(), matProjectedPoints);
        Point[] projectedPoints = matProjectedPoints.toArray();

        // Draw the marker!
        Imgproc.line(buf, projectedPoints[0], projectedPoints[1], red, thickness);
        Imgproc.line(buf, projectedPoints[0], projectedPoints[2], green, thickness);
        Imgproc.line(buf, projectedPoints[0], projectedPoints[3], blue, thickness);

        Imgproc.circle(buf, projectedPoints[0], thickness, white, -1);
    }

    void draw3dCubeMarker(Mat buf, double length, double tagWidth, double tagHeight, int thickness, Mat rvec, Mat tvec, Mat cameraMatrix)
    {
        //axis = np.float32([[0,0,0], [0,3,0], [3,3,0], [3,0,0],
        //       [0,0,-3],[0,3,-3],[3,3,-3],[3,0,-3] ])

        // The points in 3D space we wish to project onto the 2D image plane.
        // The origin of the coordinate space is assumed to be in the center of the detection.
        MatOfPoint3f axis = new MatOfPoint3f(
                new Point3(-tagWidth/2, tagHeight/2,0),
                new Point3( tagWidth/2, tagHeight/2,0),
                new Point3( tagWidth/2,-tagHeight/2,0),
                new Point3(-tagWidth/2,-tagHeight/2,0),
                new Point3(-tagWidth/2, tagHeight/2,-length),
                new Point3( tagWidth/2, tagHeight/2,-length),
                new Point3( tagWidth/2,-tagHeight/2,-length),
                new Point3(-tagWidth/2,-tagHeight/2,-length));

        // Project those points
        MatOfPoint2f matProjectedPoints = new MatOfPoint2f();
        Calib3d.projectPoints(axis, rvec, tvec, cameraMatrix, new MatOfDouble(), matProjectedPoints);
        Point[] projectedPoints = matProjectedPoints.toArray();

        // Pillars
        for(int i = 0; i < 4; i++)
        {
            Imgproc.line(buf, projectedPoints[i], projectedPoints[i+4], blue, thickness);
        }

        // Base lines
        //Imgproc.line(buf, projectedPoints[0], projectedPoints[1], blue, thickness);
        //Imgproc.line(buf, projectedPoints[1], projectedPoints[2], blue, thickness);
        //Imgproc.line(buf, projectedPoints[2], projectedPoints[3], blue, thickness);
        //Imgproc.line(buf, projectedPoints[3], projectedPoints[0], blue, thickness);

        // Top lines
        Imgproc.line(buf, projectedPoints[4], projectedPoints[5], green, thickness);
        Imgproc.line(buf, projectedPoints[5], projectedPoints[6], green, thickness);
        Imgproc.line(buf, projectedPoints[6], projectedPoints[7], green, thickness);
        Imgproc.line(buf, projectedPoints[4], projectedPoints[7], green, thickness);
    }

    /**
     * Extracts 6DOF pose from a trapezoid, using a camera intrinsics matrix and the
     * original size of the tag.
     *
     * @param points the points which form the trapezoid
     * @param cameraMatrix the camera intrinsics matrix
     * @param tagsizeX the original width of the tag
     * @param tagsizeY the original height of the tag
     * @return the 6DOF pose of the camera relative to the tag
     */
    Pose poseFromTrapezoid(Point[] points, Mat cameraMatrix, double tagsizeX , double tagsizeY)
    {
        // The actual 2d points of the tag detected in the image
        MatOfPoint2f points2d = new MatOfPoint2f(points);

        // The 3d points of the tag in an 'ideal projection'
        Point3[] arrayPoints3d = new Point3[4];
        arrayPoints3d[0] = new Point3(-tagsizeX/2, tagsizeY/2, 0);
        arrayPoints3d[1] = new Point3(tagsizeX/2, tagsizeY/2, 0);
        arrayPoints3d[2] = new Point3(tagsizeX/2, -tagsizeY/2, 0);
        arrayPoints3d[3] = new Point3(-tagsizeX/2, -tagsizeY/2, 0);
        MatOfPoint3f points3d = new MatOfPoint3f(arrayPoints3d);

        // Using this information, actually solve for pose
        Pose pose = new Pose();
        Calib3d.solvePnP(points3d, points2d, cameraMatrix, new MatOfDouble(), pose.rvec, pose.tvec, false);

        return pose;
    }

    /*
     * A simple container to hold both rotation and translation
     * vectors, which together form a 6DOF pose.
     */
    class Pose
    {
        Mat rvec;
        Mat tvec;

        public Pose()
        {
            rvec = new Mat();
            tvec = new Mat();
        }

        public Pose(Mat rvec, Mat tvec)
        {
            this.rvec = rvec;
            this.tvec = tvec;
        }
    }
}


@Windwoes see reply above. But also, looking at AprilTagProcessorImpl and the sample code, it appears we need to be calling aprilTag.processFrame() rather than getDetections()? And then implement onDrawFrame() to show it on the viewport? Do I have that right? It seems like maybe, except I may have to cast the output of processFrame(), because it’s declared as Object rather than a list of detections.

@MR-Who I think you’re making this much more complicated than it needs to be :')

Simply adapt what I posted above for the TFOD processor for the AprilTag one. I went ahead and did that, but fair warning I have not tested it.

package org.firstinspires.ftc.teamcode;

import android.graphics.Canvas;

import com.qualcomm.robotcore.eventloop.opmode.Disabled;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;

import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibration;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibrationHelper;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibrationIdentity;
import org.firstinspires.ftc.vision.apriltag.AprilTagDetection;
import org.firstinspires.ftc.vision.apriltag.AprilTagProcessor;
import org.firstinspires.ftc.vision.VisionProcessor;
import org.opencv.core.Mat;
import org.openftc.easyopencv.OpenCvCamera;
import org.openftc.easyopencv.OpenCvCameraFactory;
import org.openftc.easyopencv.OpenCvCameraRotation;
import org.openftc.easyopencv.OpenCvWebcam;
import org.openftc.easyopencv.TimestampedOpenCvPipeline;

@TeleOp
@Disabled
public class AprilTagFromEocv extends LinearOpMode
{
    OpenCvCamera camera;
    AprilTagProcessor aprilTagProcessor;
    
    @Override
    public void runOpMode()
    {
        aprilTagProcessor = AprilTagProcessor.easyCreateWithDefaults();

        int cameraMonitorViewId = hardwareMap.appContext.getResources().getIdentifier("cameraMonitorViewId", "id", hardwareMap.appContext.getPackageName());
        camera = OpenCvCameraFactory.getInstance().createWebcam(hardwareMap.get(WebcamName.class, "Webcam 1"), cameraMonitorViewId);
        camera.setViewportRenderer(OpenCvCamera.ViewportRenderer.NATIVE_VIEW);
        camera.setViewportRenderingPolicy(OpenCvCamera.ViewportRenderingPolicy.OPTIMIZE_VIEW);
        camera.openCameraDeviceAsync(new OpenCvCamera.AsyncCameraOpenListener()
        {
            @Override
            public void onOpened()
            {
                MyPipeline myPipeline = new MyPipeline(aprilTagProcessor);

                if (camera instanceof OpenCvWebcam)
                {
                    myPipeline.noteCalibrationIdentity(((OpenCvWebcam) camera).getCalibrationIdentity());
                }

                camera.startStreaming(640, 480, OpenCvCameraRotation.SENSOR_NATIVE);
                camera.setPipeline(myPipeline);
            }

            @Override
            public void onError(int errorCode)
            {

            }
        });

        waitForStart();

        while (opModeIsActive())
        {
            for (AprilTagDetection det : aprilTagProcessor.getDetections())
            {
                telemetry.addLine(String.format("Tag ID %d", det.id));
            }

            // Push telemetry each loop iteration so the detections are visible while running
            telemetry.update();
        }
    }

    static class MyPipeline extends TimestampedOpenCvPipeline
    {
        private final VisionProcessor processor;
        private CameraCalibrationIdentity ident;

        public MyPipeline(VisionProcessor processor)
        {
            this.processor = processor;
        }

        public void noteCalibrationIdentity(CameraCalibrationIdentity ident)
        {
            this.ident = ident;
        }

        @Override
        public void init(Mat firstFrame)
        {
            CameraCalibration calibration = CameraCalibrationHelper.getInstance().getCalibration(ident, firstFrame.width(), firstFrame.height());
            processor.init(firstFrame.width(), firstFrame.height(), calibration);
        }

        @Override
        public Mat processFrame(Mat input, long captureTimeNanos)
        {
            Object drawCtx = processor.processFrame(input, captureTimeNanos);
            requestViewportDrawHook(drawCtx);
            return input;
        }

        @Override
        public void onDrawFrame(Canvas canvas, int onscreenWidth, int onscreenHeight, float scaleBmpPxToCanvasPx, float scaleCanvasDensity, Object userContext)
        {
            processor.onDrawFrame(canvas, onscreenWidth, onscreenHeight, scaleBmpPxToCanvasPx, scaleCanvasDensity, userContext);
        }
    }
}

Thanks. Any complexity is left over from prior code (we also need to switch between pipelines). But per my note right above, I figured out last night that using processFrame() rather than getDetections() was necessary, so we’d already made changes that look a lot like your example. Now we’ll see if that works.

OK, our revised code worked, thanks! We have our C920 set to the one resolution that doesn’t autocalibrate, so once we fix that we should be OK!