16-299: Lab Assignment 2.



This assignment focuses on getting your TWIP robot to balance upright.


I suggest putting "training wheels" on the robot to limit how far it can fall and the damage caused by falling. Also put strain relief on your USB cable, tying one end of the cable to the robot and the other end to a heavy object such as a desk leg, so you don't destroy the USB connectors on either end when the robot drives off with the cable attached.



The first thing to do is to read the IMU, which tells the robot which way is up and how the body is rotating. Here is example code that reads the IMU, and in theory could balance the robot. It is similar to the previous servo code. I had to change the sampling interval to 3ms to accommodate reading the IMU. The numbers for the Kalman filter in this code are not right, but should be in the ballpark. Figuring out the bias of each sensor when you turn the robot on for each session is important. Here is a program to print out averaged IMU readings. Prop the robot upright to find the biases. My experience is that the gyro biases are stable day to day, but the accelerometer biases change within a single session of working with the robot (or maybe I can't reliably prop the robot up in the same position).
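As a sketch of what such a bias-measurement program does (the variable names and data layout here are hypothetical, not the handout code), you could average a few seconds of readings while the robot is propped upright:

  % Sketch: estimate IMU biases by averaging raw readings while the robot
  % is propped upright. Assumes a logged matrix imu with one row per 3ms
  % sample and columns [gyro, accel_y, accel_z] (hypothetical layout).
  N = round(5/0.003);                    % about 5 seconds of samples
  gyro_bias  = mean(imu(1:N,1));         % the gyro should read zero at rest
  angle_bias = mean(atan2(imu(1:N,2), imu(1:N,3)));  % tilt of the propped pose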

Figuring out how to calibrate and filter the sensor outputs is also important. Here is some data and a program that might help you. This data was taken with the wheels locked in a stand:

and the robot was moved by hand. The accelerometer angle is atan2(Y accelerometer, Z accelerometer). The online KF is a Kalman filter that integrates the gyro velocity to get the body angle (the prediction step), and then uses the Y accelerometer reading to correct any bias in the gyro signal. The "velocity from encoders" is from a numerical differentiation of the encoder signals. Since the wheels are locked, the encoder measures the body angle directly, so the encoder, accelerometer, and gyro should all match. Here are the signals after calibration. From data file run527/f000:


From data file run527/f002:


The calibration coefficients are in the regress.m program. To get these figures:
1) Get the data and program.
2) Set up the in_stand directory.
3) cd in_stand/model1/
4) Run Matlab and type:
   c527
   plot_it(ff000)
   plot_it(ff002)
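To make the structure of the online KF concrete, here is a minimal fixed-gain (steady-state) version of it; this is not the course code, and gyro_rate and accel_angle are assumed to be calibrated, bias-corrected vectors sampled every 3ms:

  % Sketch: integrate the gyro to predict the body angle, then blend in the
  % accelerometer angle to remove gyro drift (a fixed-gain Kalman filter).
  dt = 0.003;
  k  = 0.02;                             % correction gain (illustrative, not tuned)
  angle = zeros(size(gyro_rate));
  angle(1) = accel_angle(1);             % initialize from the accelerometer
  for i = 2:length(gyro_rate)
    pred     = angle(i-1) + dt*gyro_rate(i);      % prediction step
    angle(i) = pred + k*(accel_angle(i) - pred);  % measurement correction
  end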


Here is some data from my first attempts at manually choosing gains to balance the robot. The commands to the motor were largely saturated, and every time they switched from one extreme to the other the robot would vibrate. I noticed that the motor commands matched the wheel accelerations fairly well, so 1) the body angle and angular velocity don't seem to affect what the wheel does, and 2) I can make an independent model of just the wheel responding to the motor commands. Here are the accelerations and commands plotted together:

Here are the actual and predicted velocities. It is pretty unusual for an open-loop simulation driven only by the commands to match the actual behavior for so long (about 3 seconds):

Since the simulated angular velocities match the actual angular velocities well, it is not surprising that the simulated wheel angles, which are integrations of the simulated angular velocities, match the actual wheel angles:
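Here is a sketch of how such a wheel-only fit and open-loop simulation can be done, assuming logged column vectors cmd and wheel_vel at the 3ms sampling interval (the names and the linear model form are illustrative, not the exact model I used):

  % Sketch: fit wheel_accel = a*wheel_vel + b*cmd by least squares, then run
  % an open-loop simulation driven only by the recorded commands.
  dt = 0.003;
  wheel_accel = diff(wheel_vel)/dt;              % numerical differentiation
  X = [wheel_vel(1:end-1), cmd(1:end-1)];
  p = X \ wheel_accel;                           % p(1): friction, p(2): command gain
  sim_vel = zeros(size(wheel_vel));
  sim_vel(1) = wheel_vel(1);
  for i = 1:length(cmd)-1
    sim_vel(i+1) = sim_vel(i) + dt*(p(1)*sim_vel(i) + p(2)*cmd(i));
  end
  sim_angle = cumsum(sim_vel)*dt;                % integrate to predict wheel angle
  plot([wheel_vel, sim_vel]);                    % compare actual vs. predicted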

You can use this data to create a model of the entire robot, including the body, which acts as an inverted pendulum.


Here are my best attempts to model the body rotation. Body angular velocities:

Body angle predictions, which are quite bad. Then again, this is an unstable mode, so open-loop prediction was never going to work well.

I had to delay the commands to get this model to match the timing of the gyro slope discontinuities.
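One simple way to introduce such a delay in a simulation is to shift the command vector by d samples before feeding it to the model (a sketch, assuming cmd is a column vector):

  % Sketch: delay the commands by d samples, holding the first value
  % during the initial gap.
  d = 4;                                 % about 12ms at 3ms per tick (see below)
  cmd_delayed = [repmat(cmd(1), d, 1); cmd(1:end-d)];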

The question is whether this imperfect model is good enough to design a model-based controller and Kalman filter.


Today I tried improving the model. First I weighed all the parts of the robot and tried to locate where they are relative to the wheel axle. I then tried to compute the body center of mass and moments of inertia. Note that the rotor of the motor rotates with the wheel, at gear_ratio times the wheel velocity. The stator of the motor, and the gearbox, rotate with the body. I guessed that the rotor was 1/4 the weight of the entire motor, and modeled it as a uniform-density cylinder with half the radius of the motor. Because it is geared, it contributes gear_ratio^2 times its moment of inertia, which makes it dominate all the other moments of inertia. Here are my notes.
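As a worked sketch of the reflected-inertia computation (all numbers below are made-up placeholders, not measurements of the actual robot):

  % Sketch: reflected rotor inertia. A rotor spinning at gear_ratio times
  % the wheel speed contributes gear_ratio^2 times its own inertia.
  gear_ratio = 30;                       % placeholder
  m_rotor    = 0.2/4;                    % kg; 1/4 of a placeholder motor mass
  r_rotor    = 0.015/2;                  % m; half of a placeholder motor radius
  I_rotor    = 0.5*m_rotor*r_rotor^2;    % uniform-density cylinder about its axis
  I_reflected = gear_ratio^2*I_rotor;    % what the wheel side of the model sees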


Next I derived a new model whose state variables are the motor axle angle (rather than x = r_w*wheel_angle, where the wheel angle is measured with respect to a vertical axis) and the body angle. The reason I did this is that the encoder measures this relative angle between the wheel and the body, not the wheel angle with respect to vertical. Note that motor_axle_angle = x/r_w - body_angle. The state vector order is also rearranged to [ wheel_angle, wheel_angular_velocity, body_angle, body_angular_velocity ], so the A matrix will be closer to block diagonal.
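If you already have an (A, B) model in the old coordinates, the change of state variables is just a similarity transform; here is a sketch (the old state ordering and the value of r_w are assumptions for illustration):

  % Sketch: express an existing (A, B) model in the new state coordinates.
  % Old state (ordering assumed): [x; x_dot; body_angle; body_angle_dot].
  % New state: [axle_angle; axle_rate; body_angle; body_angle_dot], using
  % axle_angle = x/r_w - body_angle as above.
  r_w = 0.04;                            % m; placeholder wheel radius
  T = [1/r_w  0    -1  0;
       0    1/r_w   0 -1;
       0      0     1  0;
       0      0     0  1];
  A_new = T*A/T;                         % A/T computes A*inv(T)
  B_new = T*B;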

I added viscous friction to this model. I adjusted this friction and a scale parameter for the motor command, since I don't know what torques the commands generate. Here is the fit I got for the early part of data file run526/f000. (This data is mentioned above and is available in this file.)

Here is a typical fit:

The new model twip2.m is in the model2 directory, which should be placed in the balance-data1 directory mentioned above, next to the model1 directory. Look at model2/notes.txt to see how to generate the plots above.
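The friction and command-scale fit can be done by wrapping the open-loop simulation in a scalar error function and handing it to a numerical optimizer; a sketch (sim_model is a hypothetical wrapper, not a file in the handout):

  % Sketch: fit friction and command scale by minimizing open-loop
  % simulation error. sim_model(friction, scale, cmd) is assumed to return
  % predicted states aligned with the measured_states matrix.
  err = @(p) sum(sum((sim_model(p(1), p(2), cmd) - measured_states).^2));
  p0  = [0.01, 1.0];                     % initial guesses: friction, command scale
  p   = fminsearch(err, p0);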

So the above fits are pretty good for matching an unstable system with an open-loop simulation (just feeding the commands into a model). One remaining mystery is why the body velocity is delayed about 4 ticks (12ms) relative to the wheel velocity. This is mentioned above, and is clear if you look at the real and model body velocities and try to match up the slope discontinuities caused by the command going from one extreme to the other. The discontinuities are synchronized in the relative wheel angular velocity plot. I am going to check whether the delay is in the gyro hardware, or whether we are not reading it with the lowest latency possible.




You can pursue a "model-free" approach where you manually adjust gains until the robot balances, or you can pursue a "model-based" approach where you attempt to model the robot and use that model to select appropriate gains. You can also do both, perhaps by initially manually finding gains that allow the robot to stay upright long enough that you can collect data to make a model.
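For the model-based route, once you have discrete-time A and B matrices from a fit, one standard way to pick gains is LQR. A sketch, assuming the 3ms sample time and the state ordering above (the Q and R weights are illustrative, not recommendations):

  % Sketch: model-based gain selection with discrete-time LQR (needs the
  % Control System Toolbox). A and B come from your fitted model.
  Q = diag([1 1 100 10]);                % illustrative state weights
  R = 0.01;                              % illustrative command weight
  K = dlqr(A, B, Q, R);                  % then apply command = -K*state each tick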


Subject to the TA's approval, you should turn in 1) a video on YouTube that shows your robot balancing upright for at least 10 seconds, and 2) a web page describing how you got the robot to balance upright, including example data and any algorithms or software you used.