TensorFlow Crash on VuforiaFrameGenerator.run

We are having the same issue, except that it is not showing any logs on the Driver Hub or the terminal, and it only happens in Autonomous. The camera is causing the hub to restart. https://1drv.ms/u/s!AkwjWXfbrlH-jXEvuBVVK6BE5-f1?e=sB4TOT

Looking at your logs, I see that the camera in your BlueAutonomus OpMode is being initialized correctly. Everything seems happy until TensorFlow is initialized. Then I see:

12-05 10:38:17.393  1338  1523 I tflite  : Initialized TensorFlow Lite runtime.
--------- beginning of crash
12-05 10:38:17.431  1338  1522 E AndroidRuntime: FATAL EXCEPTION: VuforiaFrameGenerator
12-05 10:38:17.431  1338  1522 E AndroidRuntime: Process: com.qualcomm.ftcrobotcontroller, PID: 1338
12-05 10:38:17.431  1338  1522 E AndroidRuntime: java.lang.NullPointerException: Attempt to invoke virtual method 'boolean java.nio.ByteBuffer.isDirect()' on a null object reference
12-05 10:38:17.431  1338  1522 E AndroidRuntime:     at org.tensorflow.lite.task.vision.detector.ObjectDetector.createFromBufferAndOptions(ObjectDetector.java:202)
12-05 10:38:17.431  1338  1522 E AndroidRuntime:     at org.firstinspires.ftc.robotcore.internal.tfod.TfodFrameManager2$RecognizerPipeline2.<init>(TfodFrameManager2.java:66)
12-05 10:38:17.431  1338  1522 E AndroidRuntime:     at org.firstinspires.ftc.robotcore.internal.tfod.TfodFrameManager2$RecognizerPipeline2.<init>(TfodFrameManager2.java:55)
12-05 10:38:17.431  1338  1522 E AndroidRuntime:     at org.firstinspires.ftc.robotcore.internal.tfod.TfodFrameManager2.createRecognizerPipeline(TfodFrameManager2.java:52)
12-05 10:38:17.431  1338  1522 E AndroidRuntime:     at org.firstinspires.ftc.robotcore.internal.tfod.TfodFrameManager$MainPipeline.init(TfodFrameManager.java:227)
12-05 10:38:17.431  1338  1522 E AndroidRuntime:     at org.firstinspires.ftc.robotcore.internal.tfod.VuforiaFrameGenerator.run(VuforiaFrameGenerator.java:147)
12-05 10:38:17.431  1338  1522 E AndroidRuntime:     at java.lang.Thread.run(Thread.java:761)

This is where we hand the model off to TensorFlow and let it start processing. I cannot tell from this log what is going on, other than that the Vuforia Frame Generator is not initializing correctly. From the log file I can see that a “match log” is being generated. Can you provide that log?

12-05 10:38:18.332  1338  1338 I RobotCore: ******************** STOP - OPMODE /storage/emulated/0/FIRST/matchlogs/Match-0-BlueAutonomus.txt ********************

-Danny

Also, can you provide your source code for this OpMode?

-Danny

I added a folder called “Match Logs” to the drive that I shared earlier. I am not sure if that is what you need, though. My students do all the programming; I help with the mechanical side.

Yes, thanks, that’s exactly what I needed. However, it just confirms what the main robot log was already telling me: something in the Vuforia Frame Generator went belly-up just as it was about to begin processing camera images for TensorFlow.

Now I need you to ask your programmers a few questions:

  1. Are you using the default TensorFlow model, or did you train your own? If you trained your own, where did you train it?
  2. Is BlueAutonomous the only OpMode that is causing this problem? Do you have other OpModes that use TensorFlow that do not cause this problem?
  3. I need a copy of your OpModes (both the ones that “work” and the ones that “crash”).

Thanks!
-Danny

This is what my programmer says:

  1. Both. I’m working on my own model, which I trained using the FTC machine learning tool, but we are also using the model provided by FTC.

  2. Yes

  3. brook testing 2022-2023 and then blue auto

Great. Once you’ve added the source for both of those OpModes to the file share you provided, I can continue this investigation.

-Danny

I think they are included.

Those are match logs. I want the source code for those OpModes. I believe there’s a mistake in the code that is causing this, or at the very least having the source will let me debug more easily.

-Danny

Hi, I’m the programmer. I’ll get those for you shortly.


Here is the blue Autonomous:

import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.hardware.CRServo;
import com.qualcomm.robotcore.hardware.DcMotor;
import com.qualcomm.robotcore.hardware.Servo;

import org.firstinspires.ftc.robotcore.external.ClassFactory;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

import java.util.List;

public class AUTOMATION extends LinearOpMode {

    private DcMotor motor4;
    private DcMotor motor3;
    private DcMotor motor1;
    private DcMotor motor2;
    private DcMotor lift;
    private CRServo pivot;
    private Servo grabber;

    /*
     * Specify the source for the TensorFlow model.
     * If the TensorFlow Lite object model is included in the Robot Controller app as an "asset",
     * the OpMode must load it using loadModelFromAsset(). However, if a team-generated model
     * has been downloaded to the Robot Controller's SD FLASH memory, it must be loaded using loadModelFromFile().
     * Here we assume it's an asset. Also see method initTfod() below.
     */
    private static final String TFOD_MODEL_ASSET = "PowerPlay.tflite";
    private static final String blueCone = "/sdcard/FIRST/tflitemodels/BlueCone.tflite";
    // private static final String TFOD_MODEL_FILE  = "/sdcard/FIRST/tflitemodels/CustomTeamModel.tflite";


    private static final String[] LABELS = {
            "Bolt",
            "Bulb",
            "Panel"
    };
    
    private static final String[] coneLabels = {
            "bCone",
    };

    private static final String VUFORIA_KEY =
            "Ad4M7GP/////AAABmeZxPeyoWU0vk45dz6VmWoFZtZ3Beei2zVQN6SQjXKbzyNWwHkloibWVtf4m7s4WMhM6+3JUvKVBmBk0tPHahpeDFQi8gZ+x3/44X4M9xBxSoycd6WNwAst5YNIh8fN11Ml7v3tyxEtX7olacEfty/hSqWQm6GfwZBpgqXiJKJkjYy0K8XOutYdsKxVajNc7I636HDd970RxlvB73DgnsiFEtlX5CG/f4UFI1w99II6RKCj8fgFsaihHm1v2iWddGGOF24fcQF8bBejtwa4Hi9rs4ohh4t7yr6hpNraG6avcvtzMQzXhIbe1aWkaFSViHAMMpUDHoV/yIjFop6G7pQ2D7ysHtKidCRrM5Tq6mLaL";

    /**
     * {@link #vuforia} is the variable we will use to store our instance of the Vuforia
     * localization engine.
     */
    private VuforiaLocalizer vuforia;

    /**
     * {@link #tfod} is the variable we will use to store our instance of the TensorFlow Object
     * Detection engine.
     */
    private TFObjectDetector tfod;

    @Override
    public void runOpMode() {
        // The TFObjectDetector uses the camera frames from the VuforiaLocalizer, so we create that
        // first.
        initVuforia();
        initTfod();

        motor4 = hardwareMap.get(DcMotor.class, "motor 4");
        motor3 = hardwareMap.get(DcMotor.class, "motor 3");
        motor1 = hardwareMap.get(DcMotor.class, "motor 1");
        motor2 = hardwareMap.get(DcMotor.class, "motor 2");
        lift = hardwareMap.get(DcMotor.class, "lift");
        pivot = hardwareMap.get(CRServo.class, "pivot");
        grabber = hardwareMap.get(Servo.class, "grabber");


        /**
         * Activate TensorFlow Object Detection before we wait for the start command.
         * Do it here so that the Camera Stream window will have the TensorFlow annotations visible.
         **/
        if (tfod != null) {
            tfod.activate();

            // The TensorFlow software will scale the input images from the camera to a lower resolution.
            // This can result in lower detection accuracy at longer distances (> 55cm or 22").
            // If your target is at distance greater than 50 cm (20") you can increase the magnification value
            // to artificially zoom in to the center of image.  For best results, the "aspectRatio" argument
            // should be set to the value of the images used to create the TensorFlow Object Detection model
            // (typically 16/9).
            tfod.setZoom(1.0, 16.0/9.0);
        }

        /** Wait for the game to begin */
        telemetry.addData(">", "Press Play to start op mode");
        telemetry.update();
        waitForStart();

        if (opModeIsActive()) {
            while (opModeIsActive()) {
                if (tfod != null) {
                    // getUpdatedRecognitions() will return null if no new information is available since
                    // the last time that call was made.
                    List<Recognition> updatedRecognitions = tfod.getUpdatedRecognitions();
                    
                    if (updatedRecognitions != null && updatedRecognitions.size() > 0) {
                        telemetry.addData("# Objects Detected", updatedRecognitions.size());
                        telemetry.addData(updatedRecognitions.get(0).getLabel(), " ");
                        if (updatedRecognitions.get(0).getLabel().equals("Bolt")) {
                            motor1.setPower(1);
                            motor2.setPower(1);
                            motor3.setPower(-1);
                            motor4.setPower(-1);
                            sleep(1150);
                            motor1.setPower(-1);
                            motor2.setPower(1);
                            motor3.setPower(-1);
                            motor4.setPower(1);
                            sleep(1300);
                            motor1.setPower(1);
                            motor2.setPower(1);
                            motor3.setPower(-1);
                            motor4.setPower(-1);
                            sleep(200);
                            motor1.setPower(0);
                            motor2.setPower(0);
                            motor3.setPower(0);
                            motor4.setPower(0);
                            
                        } else if (updatedRecognitions.get(0).getLabel().equals("Bulb")) {
                            motor1.setPower(1);
                            motor2.setPower(1);
                            motor3.setPower(-1);
                            motor4.setPower(-1);
                            sleep(1150);
                            motor1.setPower(0);
                            motor2.setPower(0);
                            motor3.setPower(0);
                            motor4.setPower(0);
                        } else if (updatedRecognitions.get(0).getLabel().equals("Panel")) {
                            motor1.setPower(1);
                            motor2.setPower(1);
                            motor3.setPower(-1);
                            motor4.setPower(-1);
                            sleep(1000);
                            motor1.setPower(1);
                            motor2.setPower(-1);
                            motor3.setPower(1);
                            motor4.setPower(-1);
                            sleep(1400);
                            motor1.setPower(1);
                            motor2.setPower(1);
                            motor3.setPower(-1);
                            motor4.setPower(-1);
                            sleep(100);
                            motor1.setPower(0);
                            motor2.setPower(0);
                            motor3.setPower(0);
                            motor4.setPower(0);
                        }
                        // step through the list of recognitions and display image position/size information for each one
                        // Note: "Image number" refers to the randomized image orientation/number
                    }
                }
                telemetry.update();
            }
        }
    }

    /**
     * Initialize the Vuforia localization engine.
     */
    private void initVuforia() {
        /*
         * Configure Vuforia by creating a Parameter object, and passing it to the Vuforia engine.
         */
        VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();

        parameters.vuforiaLicenseKey = VUFORIA_KEY;
        parameters.cameraName = hardwareMap.get(WebcamName.class, "Eye");

        //  Instantiate the Vuforia engine
        vuforia = ClassFactory.getInstance().createVuforia(parameters);
    }

    /**
     * Initialize the TensorFlow Object Detection engine.
     */
    private void initTfod() {
        int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
            "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
        TFObjectDetector.Parameters tfodParameters = new TFObjectDetector.Parameters(tfodMonitorViewId);
        tfodParameters.minResultConfidence = 0.75f;
        tfodParameters.isModelTensorFlow2 = true;
        tfodParameters.inputSize = 300;
        tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);

        // Use loadModelFromAsset() if the TF Model is built in as an asset by Android Studio
        // Use loadModelFromFile() if you have downloaded a custom team model to the Robot Controller's FLASH.
        tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABELS);
        tfod.loadModelFromFile(blueCone, coneLabels);
        
        // tfod.loadModelFromFile(TFOD_MODEL_FILE, LABELS);
    }
}

The posts that Brook and I made are not spam.

Here’s the source code for blue auto.

Danny, I posted the other code, “Brook’s Stuff”, in the previously shared folder under OpMode Codes. She tried to post it herself, but she got blocked for being a new user. You did get the “BlueAuto”, it appears.

This happened because both you and Brook posted from the same IP address with different accounts. Discourse automatically flags this condition as possible spam for admin review. The rule specifically is, “If a new user replies to a topic from the same IP address as the user who started the topic, flag both of their posts as potential spam.” This is a common approach taken by bots, which clearly doesn’t apply here.

I’ve restored both of the posts.


Any news on this issue?

You need to comment out one of the tfod.loadModelFrom… calls; you can only have one. However, when I try calling loadModelFromFile right after calling loadModelFromAsset, I get a different error: an IllegalThreadStateException. I’m not sure why you don’t see that error.
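
For example, a minimal corrected initTfod() would keep exactly one load call (this is just a sketch using the same field names as the OpMode you posted):

    private void initTfod() {
        int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
        TFObjectDetector.Parameters tfodParameters = new TFObjectDetector.Parameters(tfodMonitorViewId);
        tfodParameters.minResultConfidence = 0.75f;
        tfodParameters.isModelTensorFlow2 = true;
        tfodParameters.inputSize = 300;
        tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);

        // Load exactly ONE model: either the stock asset...
        tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABELS);
        // ...or the custom file model, but never both.
        // tfod.loadModelFromFile(blueCone, coneLabels);
    }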

Oh, that makes sense, but then how do you use multiple models for detection?

You don’t. At least, not at the same time. If you want to use multiple models, you would need to stop and close out your first model, then load your second one. It’s more efficient to create a single model that contains everything you want to detect, e.g. one model that detects both red and blue cones, rather than separate models for each color. If we had all the resources in the world (processing power, memory, etc…), we could develop an interface that runs multiple inference models and passes images to each of them, but that requires a desktop computer rather than a mobile processor. On mobile we can only handle a single inference model at a time, so our interface and APIs are designed around that limit.
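
Roughly, the swap would look something like this (an untested sketch; it assumes you keep a reference to the Parameters object from initTfod() and reuses the field names from the OpMode above):

    // Done with the first model: deactivate and shut down the detector.
    tfod.deactivate();
    tfod.shutdown();

    // Create a fresh detector and load the second model into it.
    tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
    tfod.loadModelFromFile(blueCone, coneLabels);
    tfod.activate();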

-Danny


OOOH okay, I’ll see what I can do with that information.
