There are four main parts to the robot's control system:
The driving system, or the part in charge of controlling the speed of each wheel based on the information available (encoder counts, data from the AI Camera, etc.).
The object-tracking system, or the subsystem that interprets data from the AI Camera and converts it into the desired movement of the robot (forward/backward and rotational velocities).
The AI Camera draws a yellow frame around the recognized object and reports the frame's instantaneous location on the camera screen.
The Bluetooth communications system, or the subsystem that communicates with the connected smartphone and enables remote control.
The data collection system, or the part that logs data from the motor encoders and weight sensors for later use by the other subsystems.
Because the driving system runs constantly, it also handles tasks that need frequent checking but little runtime, such as updating the overweight warning LED based on the weight sensors' data.
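For example, a lightweight check like the one below can ride along in the driving loop. This is an illustrative sketch only; the pin, threshold, and helper name are assumptions, not the project's actual values.
const int OVERWEIGHT_LED_PIN = 13;    // hypothetical LED pin
const float WEIGHT_LIMIT_KG = 23.0;   // hypothetical overweight threshold
float readWeightSensors();            // assumed helper returning the current load in kilograms

void checkOverweight() {
  // light the warning LED whenever the measured load exceeds the threshold
  digitalWrite(OVERWEIGHT_LED_PIN, readWeightSensors() > WEIGHT_LIMIT_KG ? HIGH : LOW);
}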
We had to solve several problems to achieve full functionality:
When the Bluetooth mode runs, it must enter a while loop to wait for user input; otherwise, it will not work. While that loop runs, the core executing it is fully occupied, which stalls every other subsystem. Fortunately, the ESP32 has two cores, which allows multi-processing, although there are many quirks in getting this setup to work properly. More on this later in this document.
The motors could not be controlled simply by increasing or decreasing the voltage. Simple voltage control only works when the system operates in a steady, predictable environment. Our suitcase robot, however, is expected to follow a person, be remote-controlled, and move through a changing environment. A more sophisticated controller was therefore necessary, hence the PID (proportional-integral-derivative) control.
A single layer of PID control is not enough to enable the complex movements required in automatic follow mode; the movement of the frame on the camera screen has to be translated into actual movement of the robot. We therefore implemented two layers of PIDs, with multiple PIDs running in parallel in the top layer. More on this later on this page as well.
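Conceptually, the top-layer PIDs turn camera-frame errors into target wheel speeds, and the bottom-layer speed PID turns those targets into motor voltages. The following is a minimal sketch of that cascade with placeholder gains and helpers; the actual implementation appears in the AI follow mode section below.
void setSpeeds(int leftVDes, int rightVDes);          // bottom-layer speed PID (shown further down)

int trackingPID(int error, float kp) { return (int)(kp * error); }   // stand-in for a full top-layer PID

void followStep(int frameXError, int frameWidthError) {
  int turn = trackingPID(frameXError, 0.5);           // rotation command from the frame's horizontal offset
  int advance = trackingPID(frameWidthError, 0.8);    // forward/backward command from the frame's size change
  setSpeeds(advance - turn, advance + turn);          // hand the target wheel speeds to the bottom layer
}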
State Machine
Refer to the finalSystem.ino file in the Code Base for the implementation of the state machine.
Code Base
Code Highlight
Multi-processing
The following code is responsible for "splitting the cores":
//create a task that will be executed in the Task1code() function, with priority 1 and executed on core 0
xTaskCreatePinnedToCore(
  Task1code, /* Task function. */
  "Task1",   /* name of task. */
  10000,     /* Stack size of task */
  NULL,      /* parameter of the task */
  1,         /* priority of the task */
  &Task1,    /* Task handle to keep track of created task */
  0);        /* pin task to core 0 */
delay(100);

//create a task that will be executed in the Task2code() function, with priority 1 and executed on core 1
xTaskCreatePinnedToCore(
  Task2code, /* Task function. */
  "Task2",   /* name of task. */
  10000,     /* Stack size of task */
  NULL,      /* parameter of the task */
  1,         /* priority of the task */
  &Task2,    /* Task handle to keep track of created task */
  1);        /* pin task to core 1 */
delay(100);
Task1code and Task2code are defined separately, each containing the specific instructions laid out in the state machine.
The task handles Task1 and Task2 are defined elsewhere in the code.
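Each task function follows the usual FreeRTOS pattern of an infinite loop. The skeleton below is illustrative only; the real bodies, which run the state machine, live in finalSystem.ino, and the core assignments mentioned in the comments are assumptions.
TaskHandle_t Task1;
TaskHandle_t Task2;

void Task1code(void *pvParameters) {  // pinned to core 0 (for example, the Bluetooth input loop)
  for (;;) {
    // ... wait for and handle user input ...
    vTaskDelay(1);                    // yield briefly so the idle task and watchdog can run
  }
}

void Task2code(void *pvParameters) {  // pinned to core 1 (for example, driving, tracking, and logging)
  for (;;) {
    // ... run one pass of the state machine ...
    vTaskDelay(1);
  }
}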
PID Control
Speed
A simple speed PID was implemented and used for both manual and automatic (AI follow) modes:
void setSpeeds(int leftVDes, int rightVDes) {
if (deltaT) { // triggered when motor encoders update their counts
portENTER_CRITICAL(&timerMux1);
deltaT = false;
portEXIT_CRITICAL(&timerMux1);
// collect all the errors from target for PID calculations
int leftVDifference = leftVDes - leftCount;
int rightVDifference = rightVDes - rightCount;
int dLError = leftVDifference - prevLeftDifference;
int dRError = rightVDifference - prevRightDifference;
prevLeftDifference = leftVDifference;
prevRightDifference = rightVDifference;
iLError += leftVDifference;
iRError += rightVDifference;
// limit the steady-state error accumulation so that they do not become too dominant
if (abs(iLError) >= sIMax) {
if (iLError < 0) {
iLError = -sIMax;
} else {
iLError = sIMax;
}
}
if (abs(iRError) >= sIMax) {
if (iRError < 0) {
iRError = -sIMax;
} else {
iRError = sIMax;
}
}
// PID output: convert the speed error (encoder counts per second) into a voltage command that holds the target speed
int leftD = calculateD(leftVDifference, iLError, dLError);
int rightD = calculateD(rightVDifference, iRError, dRError);
// Prevent overdraw by limiting the max voltage
if (leftD >= MAX_PWM_VOLTAGE) {
leftD = MAX_PWM_VOLTAGE;
}
if (leftD <= -MAX_PWM_VOLTAGE) {
leftD = -MAX_PWM_VOLTAGE;
}
if (rightD >= MAX_PWM_VOLTAGE) {
rightD = MAX_PWM_VOLTAGE;
}
if (rightD <= -MAX_PWM_VOLTAGE) {
rightD = -MAX_PWM_VOLTAGE;
}
// left wheel control
if (leftD > 0) { // if left wheel needs to move forward
analogWrite(IN1, leftD);
analogWrite(IN2, LOW);
} else if (leftD < 0) { // if left wheel needs to move backward
analogWrite(IN1, LOW);
analogWrite(IN2, -leftD);
} else { // during other unforeseen scenarios, stop moving
analogWrite(IN1, LOW);
analogWrite(IN2, LOW);
}
// right wheel control
if (rightD > 0) { // if right wheel needs to move forward
analogWrite(IN3, rightD);
analogWrite(IN4, LOW);
} else if (rightD < 0) { // if right wheel needs to move backward
analogWrite(IN3, LOW);
analogWrite(IN4, -rightD);
} else { // during other unforeseen scenarios, stop moving
analogWrite(IN3, LOW);
analogWrite(IN4, LOW);
}
}
}
The input is a target speed in encoder counts per second, and the output is a motor voltage (PWM value). Because the load and environment around the robot change constantly, different voltages are needed at different times to keep the robot moving at the target speed. The output is capped at 255 (the maximum PWM value) to prevent current overdraw.
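calculateD() is not shown above; the following is a minimal sketch of what such a helper computes, with assumed gain names and values rather than the tuned constants from finalSystem.ino.
const float sKp = 1.0, sKi = 0.1, sKd = 0.05;   // placeholder gains

int calculateD(int error, int iError, int dError) {
  // standard PID combination of proportional, accumulated (integral), and rate-of-change (derivative) terms
  return (int)(sKp * error + sKi * iError + sKd * dError);
}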
AI follow mode
One PID controls one behavior, and by behavior we mean it can control at most one degree of freedom (DoF). Since the robot needs to move forward and backward as well as rotate, there are two DoFs to manage. The AI follow mode therefore requires multiple PIDs working together.
if (!swept) { // for survey motion, which looks for lost target
int currentTime = millis();
if (!timeReset) {
surveyTimeComp = currentTime;
timeReset = true;
}
int initialLSpeed = 0;
int initialRSpeed = 0;
if (xOrigin < SCREEN_X_CENTER) {
initialLSpeed = -SURVEY_SPEED;
initialRSpeed = SURVEY_SPEED;
} else if (xOrigin > SCREEN_X_CENTER) {
initialLSpeed = SURVEY_SPEED;
initialRSpeed = -SURVEY_SPEED;
}
lD = initialLSpeed;
rD = initialRSpeed;
// surveyTimeComp = millis();
if (currentTime - surveyTimeComp > SWEEP_INTERVAL / 4 && currentTime - surveyTimeComp < SWEEP_INTERVAL / 4 * 3) {
lD = -initialLSpeed;
rD = -initialRSpeed;
} else if (currentTime - surveyTimeComp >= SWEEP_INTERVAL / 4 * 3 && currentTime - surveyTimeComp < SWEEP_INTERVAL) {
lD = initialLSpeed;
rD = initialRSpeed;
} else if (currentTime - surveyTimeComp >= SWEEP_INTERVAL) {
swept = true;
}
setSpeeds(lD, rD);
} else {
swept = true;
stopMoving();
}
} else {
lD = 0;
rD = 0;
swept = false;
timeReset = false;
HUSKYLENSResult result = huskylens.read();
printResult(result);
// horizontal tracking
xOrigin = result.xCenter;
int xDifference = SCREEN_X_CENTER - xOrigin;
int xOutput = 0;
if (abs(xDifference) > angleTolerance) {
int dError = xDifference - prevXDifference;
prevXDifference = xDifference;
pErrorX += xDifference;
if (abs(pErrorX) >= IMax) {
if (pErrorX < 0) {
pErrorX = -IMax;
} else {
pErrorX = IMax;
}
}
xOutput = Kp * xDifference + Ki * pErrorX + Kd * dError;
lD = -xOutput;
rD = xOutput;
}
// vertical tracking
int yCenter = result.yCenter;
int yDifference = SCREEN_Y_CENTER - yCenter;
int yOutput = 0;
if (abs(yDifference) > VERTICAL_TOLERANCE) {
int dError = yDifference - prevYDifference;
prevYDifference = yDifference;
pErrorY += yDifference;
if (abs(pErrorY) >= lIMax) {
if (pErrorY < 0) {
pErrorY = -lIMax;
} else {
pErrorY = lIMax;
}
}
yOutput = Klp * yDifference + Kli * pErrorY + Kld * dError;
lD += -yOutput;
rD += -yOutput;
}
// depth perception
if (initialWidth == 0) {
initialWidth = result.width;
}
int currentWidth = result.width;
int wDifference = initialWidth - currentWidth;
int wOutput = 0;
if (abs(wDifference) > WIDTH_TOLERANCE) {
int dError = wDifference - prevWDifference;
prevWDifference = wDifference;
pErrorW += wDifference;
if (abs(pErrorW) >= wIMax) {
if (pErrorW < 0) {
pErrorW = -wIMax;
} else {
pErrorW = wIMax;
}
}
wOutput = Kwp * wDifference + Kwi * pErrorW + Kwd * dError;
lD += wOutput;
rD += wOutput;
}
// Data validation: clamp the target speeds to the follow-mode maximum
if (rD >= HUSKYSPEED) {
rD = HUSKYSPEED;
} else if (rD <= -HUSKYSPEED) {
rD = -HUSKYSPEED;
}
if (lD <= -HUSKYSPEED) {
lD = -HUSKYSPEED;
} else if (lD >= HUSKYSPEED) {
lD = HUSKYSPEED;
}
setSpeeds(lD, rD); // call the speed set function to turn target speeds into voltages
The Horizontal section controls the PID for left-and-right movement, turning the robot to keep the frame centered on the screen.
The Vertical section controls the PID for the up-and-down movement of the yellow frame on the camera screen.
The Depth Perception section controls the PID for when the yellow frame grows or shrinks, indicating that the target is moving closer or farther away.
In an ideal world, the Horizontal and Depth Perception PIDs would be sufficient to command movement. In practice, the robot has to look up at people, which distorts the apparent size and movement of the target, and the vertical measurement suffers the most. For example, when someone is close to the camera, their up-down and front-rear movements change the vertical extent of the frame far more than when they are far away. The Vertical PID was designed to compensate for this effect.
Sensor Calibration
For weight sensor calibration, we used a revised version of the automated instrument calibration process documented in the STAR Liquid Engine project.
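At its core, the calibration fits a linear map from the raw sensor reading to a known weight. The two-point sketch below is illustrative only; readRawWeight() and the reference values are placeholders, and the automated procedure in the STAR documentation is more involved.
long readRawWeight();        // assumed helper returning the raw reading from the weight sensors

long zeroOffset = 0;         // raw reading with the suitcase empty
float scaleFactor = 1.0;     // raw counts per kilogram

void calibrateWeightSensor(float knownWeightKg) {
  zeroOffset = readRawWeight();                          // record the empty reading
  // ... place the known reference weight in the suitcase ...
  scaleFactor = (readRawWeight() - zeroOffset) / knownWeightKg;
}

float getWeightKg() {
  return (readRawWeight() - zeroOffset) / scaleFactor;   // convert a raw reading into kilograms
}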