Making the robot move?!

OK, we have the program recognizing our team marker (YAY). Now we cannot figure out how to make the robot move based on whether or not it sees the marker. We've tried "if (updatedRecognitions != null) { robot.leftduck.setPower(1);" which appears to work, except the motor spins even when there is no team marker visible, i.e. "No Objects Visible".

Here is our code:
package org.firstinspires.ftc.teamcode;

import com.qualcomm.robotcore.eventloop.opmode.Disabled;
import com.qualcomm.robotcore.hardware.DcMotor;
import com.qualcomm.robotcore.hardware.Servo;
import com.qualcomm.robotcore.util.ElapsedTime;
import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import java.util.List;
import org.firstinspires.ftc.robotcore.external.ClassFactory;
import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;

@Autonomous(name = "TeamMarker_Copy", group = "Pushbot")
public class TeamMarker_Copy extends LinearOpMode {

 HardwarePushbot         robot   = new HardwarePushbot();   // Use a Pushbot's hardware
 private ElapsedTime     runtime = new ElapsedTime();

private static final String TFOD_MODEL_ASSET = "/sdcard/FIRST/tflitemodels/TeamMarker.tflite";
private static final String[] LABELS = {
  "8866 blackmarker"
};

private static final String VUFORIA_KEY =
        "key goes here";

private VuforiaLocalizer vuforia;

private TFObjectDetector tfod;

public void runOpMode() {
    initVuforia();
    initTfod();

    if (tfod != null) {
        tfod.activate();
        tfod.setZoom(2.5, 16.0/9.0);
    }

    /** Wait for the game to begin */
    telemetry.addData(">", "Press Play to start op mode");
    telemetry.update();
    waitForStart();

    if (opModeIsActive()) {
        while (opModeIsActive()) {
            if (tfod != null) {
                // getUpdatedRecognitions() will return null if no new information is available since
                // the last time that call was made.
                List<Recognition> updatedRecognitions = tfod.getUpdatedRecognitions();
                if (updatedRecognitions != null) {
                    telemetry.addData("# Object Detected", updatedRecognitions.size());
                    // step through the list of recognitions and display boundary info.
                    int i = 0;
                    for (Recognition recognition : updatedRecognitions) {
                        telemetry.addData(String.format("label (%d)", i), recognition.getLabel());
                        telemetry.addData(String.format("  left,top (%d)", i), "%.03f , %.03f",
                                recognition.getLeft(), recognition.getTop());
                        telemetry.addData(String.format("  right,bottom (%d)", i), "%.03f , %.03f",
                                recognition.getRight(), recognition.getBottom());
                        i++;
                    }
                    telemetry.update();
                }
            }
        }
    }
}

/** Initialize the Vuforia localization engine. */
private void initVuforia() {
    // Configure Vuforia by creating a Parameter object, and passing it to the Vuforia engine.
    VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();

    parameters.vuforiaLicenseKey = VUFORIA_KEY;
    parameters.cameraName = hardwareMap.get(WebcamName.class, "Webcam 1");

    //  Instantiate the Vuforia engine
    vuforia = ClassFactory.getInstance().createVuforia(parameters);

    // Loading trackables is not necessary for the TensorFlow Object Detection engine.
}

/** Initialize the TensorFlow Object Detection engine. */
private void initTfod() {
    int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
        "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
    TFObjectDetector.Parameters tfodParameters = new TFObjectDetector.Parameters(tfodMonitorViewId);
    tfodParameters.minResultConfidence = 0.8f;
    tfodParameters.isModelTensorFlow2 = true;
    tfodParameters.inputSize = 320;
    tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
    tfod.loadModelFromFile(TFOD_MODEL_ASSET, LABELS);
}
}

You are using the sample OpMode, so it’s best to start by understanding exactly how it works. Then you can modify it for recognizing your TSE in autonomous. A guide is here at the FTC wiki.

You mentioned the motor spins even when no marker is visible. It's always a good exercise to carefully walk through a few cycles of a while() loop and a for() loop, line by line. Assume there might be cycles with and without recognitions; what will the code do?
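To make that walk-through concrete, here is a minimal sketch of the decision the loop needs to make, written as plain Java outside the SDK. The helper name duckPowerFor and the labels-as-strings simplification are hypothetical, not part of the FTC API; the point is that a non-null recognition list only means TensorFlow produced a fresh report, so the list contents and the label itself must still be checked before powering the motor.

```java
import java.util.List;

public class MarkerCheck {

    // Decide duck-motor power from the labels TFOD reported this cycle.
    // A non-null list only means TensorFlow produced a fresh report; the
    // list can be empty, or contain other objects, so the label itself
    // must be checked before powering the motor.
    static double duckPowerFor(List<String> labels, String teamLabel) {
        if (labels == null) {
            return 0.0;           // no new frame data: leave the motor off
        }
        for (String label : labels) {
            if (teamLabel.equals(label)) {
                return 1.0;       // marker actually recognized this cycle
            }
        }
        return 0.0;               // new frame, but marker not in it
    }

    public static void main(String[] args) {
        String team = "8866 blackmarker";
        System.out.println(duckPowerFor(null, team));           // 0.0  (no update)
        System.out.println(duckPowerFor(List.of(), team));      // 0.0  ("No Objects Visible")
        System.out.println(duckPowerFor(List.of(team), team));  // 1.0  (marker seen)
    }
}
```

Note that your current check, `updatedRecognitions != null`, corresponds only to the first branch above; the other two cases are where the motor keeps spinning.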

The FTC wiki guide includes suggestions regarding where and how to identify the Duck, which would also work for your TSE.

When the for() loop has a valid recognition of your TSE, you can save its position to a variable. After exiting the main loop and deactivating TensorFlow, you can use the stored TSE position variable to operate your lifter, before or after driving to the Alliance Hub.
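One way to sketch that "save the position" idea: classify the recognition by the horizontal center of its bounding box. The class name, the LEFT/CENTER/RIGHT labels, and the pixel thresholds below are all hypothetical placeholders assuming a 640-pixel-wide image; tune them for your camera.

```java
public class TsePosition {

    // Classify a recognition by the horizontal center of its bounding box.
    // Thresholds split a hypothetical 640-pixel-wide image into rough thirds.
    static String classify(float centerX) {
        if (centerX < 213) return "LEFT";
        if (centerX < 427) return "CENTER";
        return "RIGHT";
    }

    public static void main(String[] args) {
        // Inside the for() loop you might compute:
        //   float centerX = (recognition.getLeft() + recognition.getRight()) / 2;
        // then store classify(centerX) in a variable declared before the loop.
        System.out.println(classify(100f));  // LEFT
        System.out.println(classify(320f));  // CENTER
        System.out.println(classify(600f));  // RIGHT
    }
}
```

After the main loop exits and TensorFlow is deactivated, the stored string can drive a switch that sets the lifter height.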

You have not changed the default zoom of 2.5, carried over from a previous season. To see more of the barcode region, you might try various lower values, down to 1.0 (1:1 for no zoom).
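For example, a sketch of the change in your OpMode (only the first argument of setZoom changes; 16.0/9.0 is the aspect ratio already used in the sample's call):

```java
// In runOpMode(), after initTfod():
if (tfod != null) {
    tfod.activate();
    tfod.setZoom(1.0, 16.0/9.0);  // 1.0 = no magnification; the sample used 2.5
}
```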