Extra – Line Following Using Two Color Sensors

Why use two Color Sensors?

If you have followed the tutorials so far, you will have added a Color Sensor to your robot and successfully programmed it to follow a line – good work! This would enable you to enter a robot in the Tasmanian RoboCup Junior Rescue Championships, which are held around August each year.
Runs in the Rescue event are timed, with faster robots having an advantage over slower ones. Some people think that line-following robots using two Color Sensors are a little faster than robots that follow a line with one Color Sensor. Let us experiment and see whether that idea is true!

NOTE: LEGO only includes one Color Sensor in each Education or Home EV3 set. When we are in a classroom, we can “borrow” the Color Sensors from other sets to allow us to experiment with two Color Sensors attached to one robot. If you need to purchase additional Color Sensors, we suggest you approach the LEGO suppliers in your country, or, failing that, try one of the international web sites such as https://www.brickowl.com/catalog/lego-parts or https://www.bricklink.com/v2/main.page (we have no connection with these sites and receive no financial benefit from them).

Programming your robot when it has two Color Sensors

When you have added a second Color Sensor, your robot may look something like the robots shown below.

Photo 1. Both Color Sensors on WHITE.

Photo 2. LEFT Sensor on BLACK, RIGHT on WHITE.

Photo 3. RIGHT Sensor on BLACK, LEFT on WHITE.

Photo 4. Both Color Sensors on BLACK.

Programming Method 1

It has been suggested that we could program a robot so that each of the Color Sensors works to keep the robot away from the line, by simply using the one-Color-Sensor program twice in a row with the motors reversed, as shown below:-


Try replacing our previous line-following code (see diagram below) with the code above.
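If you would like to trace the logic in text form, below is a minimal sketch of Programming Method 1 in EV3 MicroPython (Pybricks). This translation is our assumption – the tutorial itself uses EV3-G blocks – and the port letters, wheel measurements, threshold and speeds are placeholders to replace with your own values.

```python
#!/usr/bin/env pybricks-micropython
from pybricks.ev3devices import Motor, ColorSensor
from pybricks.parameters import Port
from pybricks.robotics import DriveBase

# Placeholder ports and dimensions - match them to your own robot.
left_motor = Motor(Port.B)
right_motor = Motor(Port.C)
left_eye = ColorSensor(Port.S1)   # sensor over the LEFT side of the line
right_eye = ColorSensor(Port.S4)  # sensor over the RIGHT side of the line
robot = DriveBase(left_motor, right_motor, wheel_diameter=56, axle_track=114)

THRESHOLD = 50  # roughly halfway between your black and white readings

while True:
    # First copy of the one-sensor program: steer using the LEFT sensor.
    if left_eye.reflection() < THRESHOLD:   # left sensor sees black...
        robot.drive(100, 80)                # ...turn right, away from the line
    else:
        robot.drive(100, -80)               # on white, curve back toward the line
    # Second copy "with the motors reversed": steer using the RIGHT sensor.
    if right_eye.reflection() < THRESHOLD:  # right sensor sees black...
        robot.drive(100, -80)               # ...turn left, away from the line
    else:
        robot.drive(100, 80)                # on white, curve back toward the line
```

Note that robot.drive() does not wait: each new call immediately replaces the previous steering command. Keep that in mind as you trace what happens in the cases of Photos 1, 2 and 3.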


Does using this two-sensor code allow our robot to follow a line more quickly than our previous robot, which used one Color Sensor? If it is better, how much faster is it? If it is not better, why is this not a useful program?

HINT: To work out why the robot behaves the way it does, trace through the code to see what happens in the cases of Photos 1, 2 and 3 above (we can ignore Photo 4 for a while). When you feel you have tested Programming Method 1 and understand how it works, experiment by replacing its code with the Programming Method 2 code below. Is the result better or worse?

Programming Method 2

In this case we should think carefully about what we want our robot to do when it comes across the circumstances shown in Photos 1, 2 and 3 (ignore Photo 4 at this stage).


One way of coding this diagram is shown below. Try replacing your previous line-following code with your own version, which could be similar to the code shown below.
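If it helps, here is the same decision table as a minimal EV3 MicroPython (Pybricks) sketch – again an assumed text translation of the EV3-G blocks, with placeholder ports, measurements, threshold and speeds:

```python
#!/usr/bin/env pybricks-micropython
from pybricks.ev3devices import Motor, ColorSensor
from pybricks.parameters import Port
from pybricks.robotics import DriveBase

left_motor = Motor(Port.B)
right_motor = Motor(Port.C)
left_eye = ColorSensor(Port.S1)
right_eye = ColorSensor(Port.S4)
robot = DriveBase(left_motor, right_motor, wheel_diameter=56, axle_track=114)

THRESHOLD = 50  # halfway between your black and white readings

while True:
    left_black = left_eye.reflection() < THRESHOLD
    right_black = right_eye.reflection() < THRESHOLD
    if not left_black and not right_black:  # Photo 1: both sensors on white
        robot.drive(150, 0)                 # line is between the sensors - go straight
    elif left_black and not right_black:    # Photo 2: LEFT sensor on black
        robot.drive(80, -90)                # turn left, back over the line
    elif right_black and not left_black:    # Photo 3: RIGHT sensor on black
        robot.drive(80, 90)                 # turn right, back over the line
    # Photo 4 (both sensors on black) is deliberately not handled here -
    # work out for yourself what the robot does in that case!
```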


Is your robot better or worse? Can you work out what the program above does if both sensors see black (Photo 4)? Also, this program is not the only way to represent the red-outlined diagram above – can you think of a different way? How would your program differ from the program above?

Extension tasks:-

• You could try varying the width between the Color Sensors. What difference does this make? Does your robot go around corners more easily, or does it have more problems?
• You could try varying how far ahead of your driving wheels the Color Sensors sit. What difference does it make if the sensors are a long way ahead, or even closer than you originally had them? Does your robot go around corners more easily, or does it have more problems?
• The mat used in the Australian RoboCup Junior Rescue competition has small green squares as well as black and white. What do you want your robot to do when it finds these green squares? How could you teach your robot to do what you want it to do? (One rough starting point is sketched after this list.)
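As a rough starting point for the green-square task, the sketch below tries the Color Sensor's built-in colour mode via Pybricks. As the comments below this tutorial explain, the built-in colour mode often struggles to recognise the RoboCup green, so treat this as an experiment rather than a solution; the ports, creep distance and turn directions are placeholder assumptions.

```python
#!/usr/bin/env pybricks-micropython
from pybricks.ev3devices import Motor, ColorSensor
from pybricks.parameters import Port, Color
from pybricks.robotics import DriveBase

left_motor = Motor(Port.B)
right_motor = Motor(Port.C)
left_eye = ColorSensor(Port.S1)
right_eye = ColorSensor(Port.S4)
robot = DriveBase(left_motor, right_motor, wheel_diameter=56, axle_track=114)

while True:
    # Check for green markers before doing normal line following.
    if left_eye.color() == Color.GREEN:
        robot.straight(30)   # creep forward onto the intersection
        robot.turn(-90)      # green on the left: turn left
    elif right_eye.color() == Color.GREEN:
        robot.straight(30)
        robot.turn(90)       # green on the right: turn right
    # ...otherwise run your usual two-sensor line-following logic here.
```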

Lots of alternatives to have fun with – experiment, and see what you can find out! 🙂

6 thoughts on “Extra – Line Following Using Two Color Sensors”

  1. Hi,
    My student had an amazing line-following program when she competed in RoboCup – it was very smooth. However, she came undone when she had to navigate up the bridge: her sensors were too low, so she could not get up. She has since modified the robot, but now her sensors are out of whack and the robot has difficulty navigating sharp corners. I think it is because she has had to mount the sensors quite high above the ground. Is there any way to combat this? Any change that she makes to the values does not seem to affect it.
    Kind regards,
    Viola

    1. Hi Viola, thank you for your query. This is a difficult question to answer quickly – an adequate answer really deserves a complete tutorial (which I must produce at some stage in the future). However, for a quick answer: yes, your student’s experiences echo our students’ experiences. The height of the Color Sensor is critical for reliable and fast line following. Stage one of a robot build is to get this right, and it sounds as if your student has done this well – congratulate her!

       Stage two is to find out how to help your robot cope with hazards like the bridge, the cattle grid, or a course tile that has been set at an angle, e.g. see some sample courses here. In many of these cases, the hazard changes the distance between the sensor and the course surface – bother! There are two main options I have seen students take.

       The first is to mount the sensors near the centre-line of the front motors, a bit further back than shown in the 2nd and 23rd robots from the top here. With this type of mounting, when the wheels find a bridge, the sensors rise just as the wheels do, which has the advantage of keeping a reasonably unchanged distance between the sensor and the course surface. The disadvantage is that students have found their robots tend to “oscillate” from side to side more during line following, resulting in slower line following.

       In the second option, rather than having the sensors firmly attached to the front of the robot, they are mounted on some sort of swivel or lever which is kept at a preferred height above the RoboCup course by a slide or wheel – e.g. see the Tasmanian robot “Clumsy”, the 7th robot from the top of the page here. The advantage is that, theoretically, the sensors can be kept at a reasonably constant height above the course surface, thus coping with height variations in the course. The disadvantage is that this type of sensor mounting usually means an almost complete re-build of a student’s robot.

       If you look at the approximately 4 dozen robots on this page, you can see that students have achieved many remarkably good robots using lots of different approaches. Which of these successful approaches you and your student choose is up to the two of you. Hope this helps you and your student, Viola. Have fun! Graeme.

  2. Thanks for the tutorial.

    Would using the custom MyBlock from MindCub3r to determine RGB values be allowed in RoboCup? Apparently that functionality is available in EV3 Basic. If it is allowed, would the students be given the sample green to calibrate their programs on the day?

    1. Hi Sam, in answer to the second part of your question: at every RoboCup Rescue event that I have attended, there have been “practice courses” available for student calibration of sensors on the day. At one event where there was an unavoidable variation in illumination between competition courses, the officials were very fair and even allowed students to calibrate on the actual course on which they were about to compete – but this is unusual, as it delays events.

       In relation to determining RGB values, you would be best to ask the RoboCup officials (e.g. at https://www.robocupjunior.org.au/contact ) whether they would regard this as legal. Usually items that are available to all student competitors are allowable, although I did once ask for clarification about a particularly effective sensor that was generally available – and they banned it!

       Regarding whether it is useful to separate RGB values in a Color Sensor, I have not tried that, so I cannot comment from personal experience. However, some years ago I did investigate using differently coloured LEDs, and while I found differing reflection from differing materials, the gain was small because the sensors responded over a wide range of colours. I’m guessing that you would need a sensor fairly precisely “tuned” to the RoboCup green colour to gain an advantage (this is what some claim LEGO has done with their Color Sensors – tuning them to respond to the colours of LEGO pieces). The LEGO “green” is not the same as the RoboCup “green”, so that is one of the challenges of RoboCup Rescue! 🙂 Hope this helps, Graeme.

  3. Turning at intersections on the green squares is really tricky, as the colour sensors find it challenging to differentiate between the black and the green. We are trying to get ready for RoboCup in NZ, but my students have found the intersection challenge really hard to solve. We have looked at calibrating the white and black to 0 and 100 to create a bigger differentiation, but have not really made a solution that works 100% of the time.
    A video on how to do this successfully would be really helpful. If you do it, please let me know.

    1. Hi Carol,
      You have a lovely spot in the world there – I was lucky enough to spend two years teaching computing at Massey University in delightful Palmerston North, and toured a lot. Beautiful country!

      Regarding green squares, you are correct, Carol – in our experience most students find “turning reliably on the green squares” the most difficult of the software-related problems in RoboCup Rescue. None of the students I have mentored have found a “100% reliable” method. In the past the colour “green” has varied in different parts of the Rescue field, but I note that rule 2.3.4 of the RoboCup Junior Australia 2018 rules now states that “The colour of markers will be consistent across a single field.” I’m guessing this may now make it worthwhile to see whether the colour sensors can be used in 2018.

      The big “Rescue problem” with colour sensors is that LEGO’s colour sensors seem to be “tuned” to the colours of LEGO’s plastic pieces – and the colour of the RoboCup Rescue green squares is not very close to “green LEGO piece colour”, and hence seems to be near the edge of detectability for these sensors, leading to sometimes unreliable green detection. A partial “solution” that I publicised a few years ago involved tilting the colour sensors a little (e.g. see the fourth robot from the top of the web page here). Tilting the sensor took advantage of a “manufacturing imperfection” in the colour sensors sold at that time to increase their sensitivity to green. I don’t have any recently-purchased EV3 colour sensors, so I’m not sure whether this “imperfection” still exists in currently sold sensors – perhaps your class could experiment to find out?

      The other option is to use the colour sensors in black-white mode, as you suggested, Carol. This is the option that most of the students I have mentored have preferred. Greg Tardiani has an NXT-G Programming Guide at https://bit.ly/2zUrKkI that on page 3 shows a programming methodology he suggests for separating the white, green and black areas of the Rescue mat. I am not certain whether you are using NXT or EV3 based robots; Greg’s method would be useful on NXT robots, and could be re-coded in EV3-G.

      To my (admittedly limited) knowledge, no student has so far found a 100% reliable way of dealing with the “RoboCup Rescue green square problem” using LEGO sensors; if you can get about 80% or 90% reliability, you are doing well! All we can suggest is that your students try all of these methods, or any others they can think of, and use the best of these ideas in your local RoboCup competitions. Sorry I don’t have a complete solution, but I hope this discussion has been useful. Wish your students good luck from me, Carol. Graeme.
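For readers who would like to experiment with the calibration and “separate white, green and black by reflected light” ideas discussed in the comments above, here is a rough sketch in EV3 MicroPython (Pybricks). All the numbers are placeholder assumptions – measure your own black and white readings and choose your own band boundaries, ideally on a practice course on the day.

```python
#!/usr/bin/env pybricks-micropython
from pybricks.ev3devices import ColorSensor
from pybricks.parameters import Port

sensor = ColorSensor(Port.S1)  # assumed port

# Placeholder raw readings - measure your own black and white first.
RAW_BLACK = 8
RAW_WHITE = 85

def calibrated(raw):
    """Re-scale a raw reflected-light reading so black reads ~0 and white ~100."""
    return (raw - RAW_BLACK) * 100 / (RAW_WHITE - RAW_BLACK)

# Placeholder band boundaries on the calibrated 0-100 scale.
BLACK_MAX = 25  # calibrated readings below this count as black
GREEN_MAX = 55  # readings between BLACK_MAX and this count as green

def classify(raw):
    """Sort a reading into 'black', 'green' or 'white' bands."""
    value = calibrated(raw)
    if value < BLACK_MAX:
        return "black"
    if value < GREEN_MAX:
        return "green"
    return "white"

# Print the band the sensor currently sees, for testing on the mat.
while True:
    print(classify(sensor.reflection()))
```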
