Upgrading the Web-Controlled Robot

After my previous post about my web-controlled robot, I was in no way finished working on it. There were still a lot of things I wanted to add, improvements to make here and there. Projects like these are never truly finished: there’s always something more you’ll want to add. For this round, I made a number of improvements to the existing system and added some entirely new systems as well.

 

Switching over to Robotmoose

The first, and perhaps most radical, change I made to the system was replacing the firmware and the entire control stack of the robot with the Robotmoose Web Robotics System, described as “a simple, modern version of a networked robotics stack, built around the modern web”. It is being developed at the University of Alaska Fairbanks for a research project I am a part of, which aims to increase young students’ interest in STEM topics by using telerobotics in the classroom. The Robotmoose system is highly configurable, so it can easily be used to control a wide range of robotic platforms!


The control stack is fairly simple, and can be summarized into four major components:

  • A web front end that allows for robot setup, teleoperation, and programming in a very user-friendly manner.
  • A central server called superstar, which sends commands to the robot and also receives sensor data from the robot.
  • An on-robot backend that sends commands from superstar to the Arduino, and sends telemetry and sensor data to superstar.
  • An Arduino flashed with the custom tabula rasa firmware, which can be configured at runtime, allowing new robot hardware to be added easily.

There are many advantages to using Robotmoose. Aside from being able to reconfigure your robot without having to re-flash your Arduino, you don’t need to know the IP address of your robot: you simply go to robotmoose.com/robots, select your robot from the drop-down menu, and you’re good to go! If connecting your robot to the internet is not your thing, you can run your own local copy of superstar, which is detailed in the readme.

To get started with robotmoose, you simply have to go to our github and follow our helpful readme. If your robot has hardware that is not supported by default, you can easily edit the firmware to add it, and I will be going over how to do this further down this page when I talk about adding ultrasonic sensors. Once you are all set up, take some time to explore the robotmoose website. There are a lot of cool features to play with, like a coding section for creating your own UI or for programming autonomous robot motions!

 

Starting the Robot Automatically

Another limitation of the first version of this robot was that every time I started it up, I would have to ssh into my raspberry pi and start the backend manually. Also, if I wanted to connect the robot to my school’s network, I would have to connect the raspberry pi to a monitor, start up the gui, open the web browser, and log into the WiFi: definitely a cumbersome process. So my next step was to get the robot to log into the WiFi and start the backend automatically.

The solution was to first get my robot to connect to eduroam, which lets it join the wireless network without having to log in each time. Setting this up on the raspberry pi was more involved than on more mainstream operating systems, and I detailed the entire process in another blog post, Connecting Raspberry Pi to Eduroam at UAF.

Starting the backend automatically was a lot more straightforward: I simply added the command to rc.local. However, since rc.local can run before the pi is connected to the network, I had to change one more thing. The newest version of Raspbian, Raspbian Jessie, has a gui alternative to raspi-config, and it includes an option to wait for the network on startup. After I enabled this, my raspberry pi consistently connects to eduroam and starts the backend all on its own!
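For reference, an rc.local entry along these lines will do the job; the path and script name here are only placeholders, so substitute whatever command you actually use to launch the tabula_rasa backend on your own pi.

# In /etc/rc.local, just above the final "exit 0":
# (placeholder path -- point this at your own backend launch script)
su pi -c '/home/pi/robotmoose/start_backend.sh &'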

 

Two-Way Audio

Back when I first posted about my web-controlled robot, I had just started experimenting with sending audio over WebRTC using UV4L. As of this writing, the only video setup that robotmoose supports is Gruveo, a WebRTC-based video calling application. But since Gruveo requires someone to connect to the call on both ends, it would not work for my headless robot. Thus, I continued using the UV4L streaming server to send video and audio back and forth.

The first step in this endeavor was to simply hook a speaker up to the raspberry pi. This resulted in one-way audio, allowing the operator to speak from their computer at people near the robot. Such a setup feels very unnatural, though: when I was testing this functionality, I found that when the operator speaks to people robot-side, their natural instinct is to talk back. Thus, adding a microphone was a must.

However, the raspberry pi’s embedded sound card only supports audio playback, so it was not as simple as just hooking up a microphone. This problem was solved with a usb sound card. I chose this usb sound card from Adafruit: it costs only a couple of dollars, and with the RPi 2 running Raspbian Jessie it was almost entirely plug-and-play. The only change I had to make was to edit /etc/asound.conf and insert:

pcm.!default {
    type asym                      # separate devices for playback and capture
    playback.pcm "plug:hw:0"       # playback on the pi's onboard card (card 0)
    capture.pcm "plug:dsnoop:1"    # capture from the usb sound card (card 1)
}

This tells ALSA, and therefore the UV4L streaming server, that the pi’s onboard sound card should be used for audio playback and that the usb sound card should be used for audio capture. I experimented with hooking the speaker up to the usb sound card as well, but found that I could not change the playback volume very much: it was either very loud or silent.


After that, it was as simple as hooking up the speaker and microphone and, bam, two-way audio! The audio capture stream is a bit crackly, but still intelligible; I experimented with a few different microphones and found that this was mostly due to the cheap-o microphone I got. The only process that is still ongoing is tuning the audio playback and capture volumes with alsamixer: tuning these decreases the amount of audio feedback and makes the capture stream more intelligible, but finding the sweet spot is tricky. Plus, I’m sure that sweet spot will change based on the environment the robot is in, whether it is loud or quiet, and whether the walls and floor reflect or absorb more sound.

 

Camera Upgrades: Pan-Tilt Module & Wide-Angle Lens

For a robot that you have to steer, knowing about your immediate surroundings is important for basic movement, more so than for a robot that can turn in place using differential drive. The raspberry pi camera module has a very narrow field of view, so I decided to make two upgrades to my camera system: I added pan/tilt capability and a wide-angle lens. Adding the wide-angle lens wasn’t too hard: for a couple of dollars, you can buy wide-angle lenses designed to be attached to phones. These stick on magnetically, so all I had to do was line up the lens on the camera module and glue a metal ring in place. This makes it easy to swap between lenses (wide-angle, fish-eye), but there is the potential that they may fall off. As of yet, I have not tested this.

For the pan-tilt module, I found one on Thingiverse that looked pretty sturdy, and printed it after making sure it fit my servos, Tower Pro micro servos. Unfortunately, I printed the parts with rafts, so they came out a little rough around the edges, and the pieces required a lot of work with my dremel. Regardless, I ended up with a fairly decent pan-tilt system that was pretty easy to add to my robot.


 


After testing the fully upgraded camera system, however, I decided that the wide-angle lens was enough and that I didn’t need the pan-tilt system. The pan-tilt module was less stable than its stationary Lego counterpart; it vibrated more while driving, ultimately making the robot harder to drive. It also added more things to control while driving. If I decide in the future that I want a pan-tilt mechanism, I will definitely put more work into it: I’ll use some beefier servos and print a beefier camera mount. Still, adding just the wide-angle lens made a world of difference, and now that I have it I don’t know how I ever drove the robot around before.

 

Side-Mounted Ultrasonic Sensors

The final upgrade in this round was to add an ultrasonic sensor to the left and right sides of my robot. As I mentioned before, it is important to know about your surroundings when steering a robot, so I added these sensors to give me some idea of what is beside the robot. The sensor I used was the HC-SR04 Ultrasonic Sensor. It has a range of 2 cm – 400 cm and a resolution of 0.3 cm, and is pretty accurate when facing objects straight on (I tested it, and it really does give centimeter accuracy). The best part is, each sensor is only about $2.00! It is a more accurate sensor than the Parallax Ping, which costs over $20.00!

Ultrasonic Sensor

There are several existing Arduino libraries for interfacing with ultrasonic sensors, but the one I liked the most was the NewPing library. The most significant advantage of this library is that it can use timer interrupts instead of blocking while waiting for the return signal, so it does not hold up the robot. Blocking IO is something you have to be very careful with on robotics platforms: you do not want the robot to get in an accident because blocking IO kept it from executing a stop command, and in general it slows down all of your other processes. The NewPing library also lets you set a maximum distance, beyond which it stops waiting for the return signal and returns 0; I capped mine at 200 cm.
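A basic test sketch for a single sensor might look something like the following; the pin numbers are just examples, so use whichever pins your trigger and echo lines are actually wired to.

// Standalone HC-SR04 test: prints a range reading roughly ten times per second.
#include <NewPing.h>

#define TRIGGER_PIN  11
#define ECHO_PIN     12
#define MAX_DISTANCE 200   // readings beyond this many cm come back as 0

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE);

void setup() {
    Serial.begin(9600);
}

void loop() {
    delay(100);                          // ~10 readings per second
    unsigned int cm = sonar.ping_cm();   // 0 means no echo within MAX_DISTANCE
    Serial.print("Range: ");
    Serial.print(cm);
    Serial.println(" cm");
}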

After reading up on the library and getting a basic sketch working well, I had to add the ultrasonic sensors to the robotmoose firmware, as they were not yet supported. To do this I first had to modify devices.cpp, which lives in the robotmoose git in the tabula_rasa/arduino folder. There are many examples in this file of how to add new devices; I just added a new section to the end for ultrasonic sensors.

In tabula_rasa/arduino/devices.cpp:

// #include "NewPing.h"
// HC-SR04 Ultrasonic Sensor (Dedicated Trigger & Echo Pins)
class ultrasonic_sensor : public action {
public:

    NewPing * _sonar;
    tabula_sensor<unsigned char> _reading_cm;

    virtual void loop()
    {
        _reading_cm = (_sonar -> ping_cm());
    }

    ultrasonic_sensor(int trigPin, int echoPin) : _sonar(new NewPing(trigPin, echoPin, 200)), _reading_cm(0)
    {}

};

// "PP": this device is configured with two pin arguments (trigger, then echo).
REGISTER_TABULA_DEVICE(ultrasonic_sensor, "PP",
    int trigPin=src.read_pin();
    int echoPin=src.read_pin();
    ultrasonic_sensor * device = new ultrasonic_sensor(trigPin, echoPin);
    src.sensor_index("Reading (cm)",F("Reading (cm)"),device->_reading_cm.get_index());
    actions_100ms.add(device); // poll this sensor every 100 ms
)

After adding the device to the arduino firmware, there is still one more step: the backend has to know how to report the new sensor data so it shows up in the web front end. This is done by editing backend.cpp in the tabula_rasa folder. The part you have to change is the robot_backend::setup_devices function, and again there are many existing examples in the file. First, add an integer counter for your device if you want to support more than one. Then, go down to the big if/else-if chain and tack on your device.

In tabula_rasa/backend.cpp (code has been abbreviated with ...):

void robot_backend::setup_devices(std::string robot_config)
{
    ...
    // Counters for various devices:
    int ultrasonics=0;
    ...
    // Parse lines of the configuration ourselves, to
    // find the command and sensor fields and match them to JSON
    std::istringstream robot_config_stream( robot_config );
    while (robot_config_stream) {
        ...
        else if (device=="ultrasonic_sensor") {
            sensors.push_back(new json_sensor<int, uint8_t>(json_path("ultrasonic",ultrasonics++)));
        }
        ...
    }
}

The ultrasonic sensors work fairly decently when facing an object straight on, but only have an effective beam angle of about +/- 15°. Translated into reality, this means they fail miserably when at any sort of angle to an obstacle, which is common while turning the robot. Thus, the values from the ultrasonic sensors should be taken with a grain of salt. Future work would be to write an algorithm that takes in the raw ultrasonic data, removes the outliers, averages what remains, and spits out values that could actually be used for autonomous robot movement.
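As a rough illustration of the idea (this is just a sketch, not something running on the robot yet), one simple approach is to keep a small window of recent readings, throw out the no-echo zeros and anything far from the median, and average the rest:

// Hypothetical smoothing filter for raw HC-SR04 readings (cm).
#include <stdint.h>

#define FILTER_WINDOW 8   // number of recent readings to keep

static uint8_t readings[FILTER_WINDOW];   // circular buffer of raw readings
static uint8_t next_slot = 0;

// Copy src into dst in ascending order (insertion sort is cheap for 8 values).
static void sort_copy(const uint8_t *src, uint8_t *dst, uint8_t n) {
    for (uint8_t i = 0; i < n; ++i) {
        uint8_t v = src[i];
        uint8_t j = i;
        while (j > 0 && dst[j - 1] > v) {
            dst[j] = dst[j - 1];
            --j;
        }
        dst[j] = v;
    }
}

// Record one raw reading and return a cleaned-up range estimate,
// or 0 if there is not enough usable data yet.
uint8_t filtered_range_cm(uint8_t raw_cm) {
    readings[next_slot] = raw_cm;
    next_slot = (next_slot + 1) % FILTER_WINDOW;

    uint8_t sorted[FILTER_WINDOW];
    sort_copy(readings, sorted, FILTER_WINDOW);
    uint8_t median = sorted[FILTER_WINDOW / 2];

    // Average only readings that heard an echo (nonzero) and sit
    // within 20 cm of the median; treat everything else as an outlier.
    uint16_t sum = 0;
    uint8_t count = 0;
    for (uint8_t i = 0; i < FILTER_WINDOW; ++i) {
        int diff = (int)readings[i] - (int)median;
        if (readings[i] != 0 && diff >= -20 && diff <= 20) {
            sum += readings[i];
            ++count;
        }
    }
    return count ? (uint8_t)(sum / count) : 0;
}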
