NXT Programming

Lesson 11, Localization Part 2

In this lab session we will investigate whether it is possible to make a robot stay in the middle of the main hall of Aarhus Banegaard, within the area marked with white lines in Figure 1. This area will be called the robot arena in the following. Furthermore, we will investigate whether the dark and light brown tiles can be used to localize the robot within the robot arena. We will use Monte Carlo Localization to localize the robot on a map of the tiles in the robot arena, [2].

Figure 1 The main hall of Aarhus Banegaard with the robot arena in the middle.

For the investigation we will use a differentially driven car such as the base vehicle of Lesson 6 with a light sensor mounted as shown in Figure 2. The light sensor is going to be used to measure the brightness of the surface underneath the robot.

Figure 2 The base vehicle with a light sensor.

In the experiment we will only use the travel and rotate methods of the leJOS class DifferentialPilot to move the robot around, and we will use the leJOS class OdometryPoseProvider to maintain an odometric estimate of the robot's position within the robot arena while the robot is moving around.
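As a sketch, the pilot and the pose provider might be set up as follows. Note that the wheel diameter, track width, and motor ports are assumptions and must be measured on and matched to the actual vehicle:

```java
import lejos.nxt.Motor;
import lejos.robotics.localization.OdometryPoseProvider;
import lejos.robotics.navigation.DifferentialPilot;
import lejos.robotics.navigation.Pose;

public class OdometryDemo {
    public static void main(String[] args) {
        // Wheel diameter and track width in cm; these values are
        // assumptions and must be calibrated for the base vehicle.
        DifferentialPilot pilot = new DifferentialPilot(5.6, 12.0, Motor.B, Motor.C);
        OdometryPoseProvider poseProvider = new OdometryPoseProvider(pilot);

        pilot.travel(20);   // drive 20 cm forward
        pilot.rotate(90);   // turn 90 degrees counter-clockwise

        Pose pose = poseProvider.getPose(); // odometric estimate (x, y, heading)
        System.out.println(pose);
    }
}
```

The pose provider listens to the moves reported by the pilot, so every travel and rotate automatically updates the odometric estimate.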

Make the robot stay within the robot arena

First of all, note that there is an edge around the robot arena, Figure 3. This edge can be detected by sensors on the robot.

Figure 3 The edge of the robot arena is either delimited
by a pillar or a thin dark line.

Program the robot to stay within the robot arena by making a behavior based control program with two behaviors:

  • Wander that makes the robot drive randomly around.

  • AvoidEdge that detects the edge of the robot arena and makes the robot stay within the robot arena. Use e.g. a single touch sensor bumper as used on the ExpressBot to detect the pillars and the light sensor to detect the thin dark line.
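A possible skeleton for the two behaviors uses the leJOS subsumption API. The sensor ports, the line threshold, and the exact avoidance manoeuvre below are assumptions, not prescribed by the lesson:

```java
import lejos.nxt.LightSensor;
import lejos.nxt.SensorPort;
import lejos.nxt.TouchSensor;
import lejos.robotics.subsumption.Arbitrator;
import lejos.robotics.subsumption.Behavior;

// Sketch only: ports, threshold and manoeuvres must be adapted to the robot.
public class StayInArena {
    static final TouchSensor bumper = new TouchSensor(SensorPort.S1);
    static final LightSensor light = new LightSensor(SensorPort.S2);
    static final int darkLineThreshold = 35; // assumed; calibrate on the floor

    public static void main(String[] args) {
        Behavior wander = new Behavior() {
            private boolean suppressed;
            public boolean takeControl() { return true; } // always ready
            public void suppress() { suppressed = true; }
            public void action() {
                suppressed = false;
                // travel and rotate by random amounts until suppressed ...
            }
        };
        Behavior avoidEdge = new Behavior() {
            public boolean takeControl() {
                return bumper.isPressed() || light.readValue() < darkLineThreshold;
            }
            public void suppress() { }
            public void action() {
                // back up and turn away from the edge ...
            }
        };
        // Later entries in the array have higher priority in the Arbitrator,
        // so AvoidEdge suppresses Wander when the edge is detected.
        new Arbitrator(new Behavior[] { wander, avoidEdge }).start();
    }
}
```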

Test the program in a simple model of the robot arena with pillars and thin dark lines marking the edge. Use the method of the program PilotSquare.java from Lesson 10 to report the estimated position of the robot within the robot arena on the LCD and on the PC. Compare this estimate of the robot position with the real position and describe how the error of the estimate develops over time.

Localization by means of particle filters

Now we will try to estimate the position of the robot within the robot arena more accurately than we could with the odometry estimate alone. We will use the patterns of dark and light brown tiles in the robot arena and the method of Monte Carlo localization, also known as particle filter localization, [2]: "The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, i.e. a hypothesis of where the robot is", Figure 4.

Figure 4 The Monte Carlo Localization particle filter algorithm, [2].

Inspired by the leJOS MCLParticleSet a particle filter localization algorithm has been implemented in the PC program RobotMonitor, [1]. The map used in the program is a 2D model of the tiles in the robot arena, Figure 5.

Figure 5 A 2D map of the tiles in the robot arena with dark and bright tiles of
equal size.

In the RobotMonitor program the method goSimulation shows how a sequence of TRAVEL moves and light sensor readings can be simulated to demonstrate how the particles behave during each step of the Monte Carlo localization algorithm. A simple 1D tile map, similar to the pattern of doors in the 1D world in Figure 6, has been used in RobotMonitor. The result after several moves and sensor readings can be seen in Figure 7.

Figure 6 A pattern of doors in the 1D world used in [2].

Figure 7 The particles around the true position of the robot after having driven
the blue route starting at the edge to the left.
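The behaviour of such a simulation can also be reproduced in a stand-alone sketch. Below is a minimal 1D particle filter in plain Java (no leJOS dependencies); the tile pattern, particle count, noise level and weights are illustrative assumptions, not the values used in RobotMonitor:

```java
import java.util.Random;

public class Mcl1D {
    // 10 tiles of 10 cm each; true = dark tile, false = bright tile.
    static final boolean[] DARK = {
        true, false, false, true, false, true, true, false, false, true };
    static final double TILE_WIDTH = 10.0;
    static final Random rand = new Random(42);

    static boolean isDark(double pos) {
        return DARK[(int) (pos / TILE_WIDTH)];
    }

    // Run the filter; returns { true final position, mean particle position }.
    public static double[] run() {
        int n = 500;
        double worldLen = DARK.length * TILE_WIDTH;
        double[] x = new double[n];  // particle positions
        double[] w = new double[n];  // particle weights
        for (int i = 0; i < n; i++) x[i] = rand.nextDouble() * worldLen;

        double truePos = 5.0;
        double move = 5.0;
        while (truePos + move < worldLen) {
            // motion_update: apply the travel with 2% distance noise
            truePos += move;
            for (int i = 0; i < n; i++)
                x[i] += move + 0.02 * move * rand.nextGaussian();

            // sensor_update: weight each particle by agreement with the
            // (noise-free, for simplicity) reading at the true position
            boolean readDark = isDark(truePos);
            for (int i = 0; i < n; i++)
                w[i] = (x[i] < 0 || x[i] >= worldLen) ? 0.0
                     : (isDark(x[i]) == readDark) ? 0.9 : 0.1;

            resample(x, w);
        }
        double mean = 0;
        for (double xi : x) mean += xi;
        return new double[] { truePos, mean / n };
    }

    // resample: draw a new particle set in proportion to the weights.
    static void resample(double[] x, double[] w) {
        double total = 0;
        for (double wi : w) total += wi;
        double[] nx = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double r = rand.nextDouble() * total;
            double c = 0;
            int j = -1;
            do { c += w[++j]; } while (c < r && j < w.length - 1);
            nx[i] = x[j];
        }
        System.arraycopy(nx, 0, x, 0, x.length);
    }

    public static void main(String[] args) {
        double[] result = run();
        System.out.printf("true position: %.1f, particle mean: %.1f%n",
                          result[0], result[1]);
    }
}
```

After the motion, sensor and resampling steps have been repeated along the strip, the surviving particles cluster in the band of starting positions that is consistent with the whole reading sequence, so the particle mean settles close to the true position.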

In the RobotMonitor program the motionUpdate method implements the motion_update step of the algorithm, Figure 4. The motion model used in the class Particle is the same as used in the leJOS class MCLParticle with distanceNoiseFactor = 0.02, angleNoiseFactor = 1:

  /**
   * Apply the robot's move to the particle with a bit of random noise.
   * Only works for rotate or travel movements.
   * @param move the robot's move
   */
  public void applyMove(Move move, float distanceNoiseFactor, float angleNoiseFactor) {
    float ym = (move.getDistanceTraveled() * ((float) Math.sin(Math.toRadians(pose.getHeading()))));
    float xm = (move.getDistanceTraveled() * ((float) Math.cos(Math.toRadians(pose.getHeading()))));

    pose.setLocation(new Point(
                     (float) (pose.getX() + xm + (distanceNoiseFactor * xm * rand.nextGaussian())),
                     (float) (pose.getY() + ym + (distanceNoiseFactor * ym * rand.nextGaussian()))));
    pose.setHeading(
        (float) (pose.getHeading() + move.getAngleTurned() + (angleNoiseFactor * rand.nextGaussian())));
    pose.setHeading((float) ((int) (pose.getHeading() + 0.5f) % 360));
  }
The sensorUpdate method in the RobotMonitor program implements the sensor_update step of the algorithm, Figure 4. The sensor model used in the class Particle is:
  public void calculateWeight(int lightValue, Map m) {
      if ( m.getColor(pose) == Color.BLACK ) {
          if ( lightValue > blackWhiteThreshold )
              weight = 0.9f;
          else
              weight = 0.1f;
      } else if ( m.getColor(pose) == Color.WHITE ) {
          if ( lightValue > blackWhiteThreshold )
              weight = 0.1f;
          else
              weight = 0.9f;
      } else { // outside the map
          weight = 0.0f;
      }
  }
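Read as a table, the sensor model assigns weight 0.9 when the reading agrees with the map colour under the particle, 0.1 when it disagrees, and 0 when the particle has left the map. A stand-alone mirror of this logic, where the threshold value is an assumption (RobotMonitor defines its own blackWhiteThreshold):

```java
public class SensorModel {
    // Assumed threshold; RobotMonitor defines its own blackWhiteThreshold.
    static final int BLACK_WHITE_THRESHOLD = 50;

    enum Tile { BLACK, WHITE, OUTSIDE }

    // Mirrors Particle.calculateWeight: in this sensor setup, readings
    // above the threshold are expected over black tiles and readings
    // below it over white tiles.
    static float weight(Tile tile, int lightValue) {
        switch (tile) {
            case BLACK: return lightValue > BLACK_WHITE_THRESHOLD ? 0.9f : 0.1f;
            case WHITE: return lightValue > BLACK_WHITE_THRESHOLD ? 0.1f : 0.9f;
            default:    return 0.0f; // particle outside the map
        }
    }

    public static void main(String[] args) {
        System.out.println(weight(Tile.BLACK, 60));   // reading agrees with map
        System.out.println(weight(Tile.WHITE, 60));   // reading disagrees
        System.out.println(weight(Tile.OUTSIDE, 60)); // particle off the map
    }
}
```

Because the weight ratio between an agreeing and a disagreeing particle is 9:1, repeated sensor updates and resampling quickly eliminate particles whose hypothesized tile colours do not match the readings.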
The initial set of particles has been generated by the method:
  private Particle generateParticle(Map m) {
    int sizeX = m.getDimX() * m.getWidth();
    int sizeY = m.getDimY() * m.getWidth();
    // Generate a particle with a location (x,y) randomly chosen within the
    // 2D area of the map. The heading can be chosen as suggested in several
    // different ways, e.g. uniformly at random as here.
    Particle p = new Particle(new Pose(
         (float) (Math.random() * sizeX), (float) (Math.random() * sizeY),
         (float) (Math.random() * 360)));
    return p;
  }
Now we are going to investigate whether the localization algorithm implemented in the simulation can be used in a physical 1D black/white tile world similar to the one in Figure 7. This can be done in two steps:
  • Make a program RobotController, similar to the program PilotController of Lesson 10, that reports the moves and sensor readings of the robot to the program RobotMonitor, and implement a method go in RobotMonitor similar to the go of the PilotMonitor of Lesson 10. The go method should receive moves and sensor readings from the robot and use them to update the particles.

  • Make simple 1D black/white tile worlds to test the localization of the robot in such a physical world.
Maybe another light sensor should be added to align the robot with a black/white edge to keep the heading as either pointing in the positive or negative x direction, i.e. close to 0 or 180 degrees.
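One possible wire format for the first step (an assumption; the lesson does not prescribe a protocol): after each travel or rotate, RobotController sends the distance travelled, the angle turned, and the current light reading, and go in RobotMonitor reads one such record per step and feeds it to motionUpdate and sensorUpdate. A minimal record class usable on both sides of the connection:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical record format for RobotController -> RobotMonitor:
// one float distance, one float angle, one int light reading per move.
public class MoveRecord {
    public final float distance, angle;
    public final int lightValue;

    public MoveRecord(float distance, float angle, int lightValue) {
        this.distance = distance;
        this.angle = angle;
        this.lightValue = lightValue;
    }

    // Called on the NXT after each travel or rotate.
    public void write(DataOutputStream out) throws IOException {
        out.writeFloat(distance);
        out.writeFloat(angle);
        out.writeInt(lightValue);
        out.flush();
    }

    // Called in the go loop of RobotMonitor on the PC.
    public static MoveRecord read(DataInputStream in) throws IOException {
        return new MoveRecord(in.readFloat(), in.readFloat(), in.readInt());
    }
}
```

The go method would then loop: read a MoveRecord from the stream, apply the move to all particles, weight them with the light reading, and resample.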

Localization while avoiding the edge and objects

How can you localize a robot moving randomly in a 1D black/white tile world by means of travel and rotate steps while the robot avoids the edges and objects in front of it? Mount an ultrasonic sensor on the front of the robot to detect objects.


Last update: 26-5-15