Practical exercises on LEGO Mindstorms robots


    (Picture: the RCX brick with sensors attached.) From Tobias Amnell, Uppsala University (original).

The LEGO Mindstorms Robotics Invention System consists of an assortment of LEGO pieces and the RCX (Robotics Command System) unit (see picture on the right). The RCX unit is an autonomous programmable microcomputer based on a Hitachi H8/3292 microcontroller. The RCX brick can be used to control actuators, such as a sound generator, lights, and motors, and to read input from various sensors, such as light sensors, pressure sensors, rotation sensors, and temperature sensors. The RCX brick also has an LCD display (useful for printing information) and an IR transceiver (for downloading programs and communicating with other bricks). The RCX unit is built for easy attachment of LEGO building blocks and pieces.

BrickOS (formerly known as legOS) is a small operating system for the LEGO Mindstorms RCX unit. BrickOS is cross-compiled on a PC under Linux or Windows 95/98/2000/NT environments. BrickOS programs are written in C and compiled to native code for the Hitachi H8 (most other programming environments for the RCX depend on the virtual machine that is the default programming environment from LEGO). BrickOS offers preemptive multitasking, dynamic memory management, POSIX semaphores, as well as native access to the display, buttons, IR communication, motors and sensors. Note that BrickOS is not a hard real-time operating system, but it does offer functionality similar to that of most soft real-time embedded OSes on the market.

Getting started

The assignment should be solved in groups of two or three persons. Working alone is not recommended, since only a limited number of lab kits is available.

Before you start make sure that you have the following materials available:

  • LEGO Mindstorms Robotics Invention System 1.5 package.
  • Six AA/LR6 batteries (for the RCX unit).
  • One 9-volt 6LR61 battery (for the IR transmitter).

Installing and testing the software

Go to the BrickOS installation site for installation instructions. You need a PC with a serial port.

As explained on the site mentioned above, Windows users need to install Cygwin. Please note that the standard installation will not do: you need to manually select additional packages. The packages make, gcc, flex, automake, autoconf and patch are required but not included in the default installation. Please make sure to select all required packages; a detailed list is available from the website mentioned above.

The LegOS Command Reference 10-0.2.4 contains a short summary of commands available for programming the RCX unit. If you want details you can investigate the system header files in the PATH_TO_LEGOS_DIR/include/ directory or you can check the API.

A handy editor for editing the C source code is TextPad.


  • If your RCX turns completely dead, that is, nothing happens when the buttons are pressed, remove the batteries. This allows the capacitor that preserves the RAM during battery changes to discharge. Leave the batteries out for some time (at least 10 minutes).
  • When downloading legOS or programs fails, make sure that:
    - you have installed the LEGO Robotics Invention System 1.5,
    - you have a working serial connection (check whether the green LED in the tower lights up when you try to transfer a program),
    - you are not in the close neighbourhood of another tower,
    - the RCX is turned on.
  • When downloads really don't succeed, press the Program button and the On/Off button simultaneously. This causes legOS to be erased and the original OS to be restored. Try again.
  • If you are having problems downloading the firmware, use the -s option (slow).
  • If you cannot switch the RCX off, press the On/Off button and the Run button simultaneously for a long time. If that does not help, remove the batteries.

Assignment 1: Study the rover program

Investigate the rover.c program in the demo folder. Try to understand how the rover program works. Do you observe, or can you imagine, situations that the rover program cannot recover from? How can this be solved? Change the implementation to overcome this problem. Are there problems that cannot be solved by your program but require a mechanical solution?

Assignment 2: Threads

Reimplement the rover.c program using threads. Use one thread for the left sensor / wheel and one thread for the right sensor / wheel.

Assignment 3: A simple light detector

Attach a light sensor to the RCX unit. Write a short program (one task, coded in the main function) which works as a Geiger counter for light, i.e. when the light sensor receives a lot of light it should generate a very frequent sound (ticking); when little light is received the ticking should be less frequent. The light value should also be displayed on the LCD.

Investigate the PATH_TO_LEGOS_DIR/demo/robots.c file and the LegOS Command Reference 10-0.2.4 on how to play sounds, read the light sensor, and write something on the LCD display.

Note: To be able to use the lcd_int() function you should include the conio.h header file.

Assignment 4 : Braitenberg Vehicles

In this exercise you will compare the behaviour you predicted in exercise session "Sensor-motor control" on Braitenberg vehicles with a working implementation.

 a. Replace the two bumpers by two light sensors. Program your vehicle such that it drives towards the light or away from the light. What problems do you encounter? Are there fundamental differences from the behaviour you expected?
 b. Make the speed of the robot depend on the light sensor values.
 c. Add the two bumpers again and implement two parallel behaviours: one behaviour drives the robot towards/away from the light, the other implements obstacle avoidance. Note that you only have one input for the two bumpers. How can you connect them such that both bumpers work, even though you can no longer tell which bumper was touched?

Assignment 5 : An illustration of emergent behaviour

Prepare your robot such that it has two light sensors and a bumper at the front. Make sure that the light sensors are not parallel but slightly oriented away from each other. Make your robot explore the arena and make sure that it detects and recovers from collisions with the wall, other robots, etc. The arena will be filled with small obstacles that are too light to be detected by the bumper but can be seen by the light sensors. If this behaviour is shared by all robots, an interesting emergent behaviour might be observed. What is the expected behaviour if there is only one robot / if there are many robots? What happens in the real world?

Assignment 6: Q-learning

In this exercise we will build a robot that will learn a task using reinforcement learning, specifically we will opt for the Q-learning algorithm.

Suppose Mr. Jones purchased a lawn mower robot. Unwrapping the box he finds a robot and a charging station. He needs to place the charging station somewhere in the garden, close to a wall outlet of course. Now, the robot does not know Jones’ garden. The mowing behaviour of the robot is just random navigating. But as soon as the batteries are low, the robot needs to find the charging station quickly without getting stuck against obstacles. Now, let us write a Q-learning algorithm which guides the robot to the charging station.

Build a robot with 2 light sensors, left and right. Build a bumper in front of the robot: when the bumper hits an obstacle, the robot should be able to detect this. The charging station will be a bright light, to which the robot has to navigate.

Define states for the robot. We have 3 sensors: the left sensor (L), the right sensor (R) and the bumper (B).

Define actions for the robot. We have two actuators, the left and right motor.

Now, let's define some rewards for the robot. Make sure to have a negative reward for bumping against obstacles, and of course a positive reward for reaching the charging station. You will need some other rewards as well, but that you will figure out for yourself (or, if you paid attention during the lecture, you might remember which rewards you still need).

  Now for the Q-learning pseudocode…

  The Q-table is a 2D array with dimensions [ number_of_states, number_of_actions ].

current_state = read_current_state();

DO until_explored_enough
    current_action = select_random_action();
    execute_action( current_action );
    next_state = read_current_state();
    reward = get_reward();

    /* Update the Q-table */
    Q[ current_state, current_action ] = reward + lambda *
        max_reward( next_state );

    current_state = next_state;
END DO

Now, once the robot has explored enough, it should behave according to the optimal policy.

In this file, a framework for the Q-learning exercise is given.  Fill in the missing gaps in the code.

Debugging your program is not easy on an RCX. To be sure that your Q-learning works correctly, test it first as follows: use only two states (hitting an object and not hitting an object) and only two actions (driving forward and driving backward). Punish hitting objects with a low reward, while driving forward is encouraged. Check whether your robot learns how to behave when hitting an obstacle.

Assignment 7 : RoboTag

A popular game in robotics is RoboTag. Two or more robots drive around in a simple arena whose edges are marked with a line on the ground. When a robot bumps into another robot, it shouts "Tag!" by sending this message out of its IR port. The tagged robot is punished and has to sit still for 3 seconds; it must immediately reply by sending "Ok". When the bumping robot receives the "Ok", it raises its score by one, turns around and starts exploring again.
This exercise combines several issues: you should be able to detect lines on the ground, you should be able to communicate over the IR port, and you should come up with a general architecture and a good strategy. Strategy is required, for instance, when you have bumped into something but did not receive an "Ok": how do you find out whether you really bumped into a robot, and how do you maximize the chances of getting a response?

An example of a communication program that uses the IR port for sending and receiving messages is available here.