Saturday, June 20, 2009

20-06-09

1st pic: Omnicam WSM without rejection filter; the route runs from the middle of the maze to the exit
2nd pic: Official ground-truth dead-reckoning laser map of the same route
3rd pic: Omnicam QWSM with the rejection filter set at 1.25 meters
4th pic: Laser QWSM on the same route as the 3rd pic

I've created the outlier rejection filter and set it to 1.25 meters.
The rejection filter works as follows:
- Compute the average distance over all omnicam ranges.
- The max and min boundaries are set at the average distance +/- the rejection distance (1.25 m).
- Go through the omnicam ranges; every distance above the max boundary is set to MaxRange, and every distance below the min boundary is set to MinRange.

The rejection filter is there to make the scan-matching algorithms work better, because they are sensitive to outliers. I've tried WSM + rejection filter, but that didn't seem to work. QWSM + rejection filter does work, as you can see in the 3rd pic.
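In code the filter looks roughly like this. A minimal sketch with names of my own, not the actual UsarCommander code; the 0.7 m and 4 m limits are the omnicam MinRange and range mentioned elsewhere in this log:

Module RejectionFilterSketch
    Const RejectionDistance As Double = 1.25 ' meters
    Const MinRange As Double = 0.7           ' roughly the omnicam MinRange
    Const MaxRange As Double = 4.0           ' the omnicam range limit

    Function RejectOutliers(ByVal ranges() As Double) As Double()
        ' Average distance over all omnicam ranges
        Dim sum As Double = 0
        For Each r As Double In ranges
            sum += r
        Next
        Dim avg As Double = sum / ranges.Length

        ' Boundaries at the average +/- the rejection distance
        Dim maxBoundary As Double = avg + RejectionDistance
        Dim minBoundary As Double = avg - RejectionDistance

        ' Clamp every outlier to MaxRange or MinRange
        Dim filtered(ranges.Length - 1) As Double
        For i As Integer = 0 To ranges.Length - 1
            If ranges(i) > maxBoundary Then
                filtered(i) = MaxRange
            ElseIf ranges(i) < minBoundary Then
                filtered(i) = MinRange
            Else
                filtered(i) = ranges(i)
            End If
        Next
        Return filtered
    End Function
End Module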

Friday, June 19, 2009

19-06-09

I've done an experiment run on the factory map; I had already done one for the grass maze and those results were quite OK. The factory map works really well with the omnicam.

1st pic: Original image of an object that you can look through but can't drive through.
2nd pic: The ranges from the omnicam; the omnicam detects the object, but the laser doesn't because it looks right through it.
3rd pic: How the omnicam performs on other, solid objects; the laser can pick these up too, though.
4th pic: The laser map of the factory route I drove; as you can see, I did a small circle.
5th pic: The omnicam map of the factory; the omnicam does a pretty good job on the same route. In the bottom-left part of the map you can see a difference with the laser map: it is the object from the 1st and 2nd pic. The omnicam detected it and drew it into its map, but the laser did not. Apart from some small noise, the omnicam handles this situation better.

Tuesday, June 16, 2009

16-06-09

I've tried to implement a double histogram method: one histogram identifies free space and the other identifies objects. However, this method doesn't seem to be working. The first picture shows that the object histogram is identifying the grass and not the walls?! A very weird result, even though I think I trained it the right way: I collected the data the same way as I collected the free-space training data. The 2nd picture shows this. It is a free-space detector created with the laser range data, and the yellow lines are the collected training data for the object histogram. As you can see, these yellow lines lie at the edges of the laser scan, so the histogram is trained to detect the walls, but it doesn't do that.
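For the record, the rule I want the two histograms to implement is simple. A minimal sketch, assuming each trained histogram maps a color bin to a probability (placeholder names; the real histograms come from Gideon's .hist files):

Imports System.Collections.Generic

Module DoubleHistogramSketch
    ' A pixel counts as free space when the free-space histogram gives its
    ' color bin a higher probability than the object histogram does.
    Function IsFreeSpace(ByVal freeHist As Dictionary(Of Integer, Double), _
                         ByVal objectHist As Dictionary(Of Integer, Double), _
                         ByVal colorBin As Integer) As Boolean
        Dim pFree As Double = 0
        Dim pObject As Double = 0
        freeHist.TryGetValue(colorBin, pFree)
        objectHist.TryGetValue(colorBin, pObject)
        Return pFree > pObject
    End Function
End Module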

I can try to figure this out, but I'm afraid it might take a lot of time. I think I'm going to start doing experiments and getting results, because otherwise I'm afraid I won't have enough time for the project. While I'm doing the experiments, I'll try to tweak my existing project to detect some objects.

Monday, June 15, 2009

15-06-09


I've tried the free-space detector on two other levels. The first pic shows DM-Factory_250; here free space was detected on a non-normalised image, so it couldn't handle lighting changes. The 2nd pic shows the histogram in DM-plywood_corner_maze_250: it can detect the maze, but it couldn't detect the outer wall of the level.

This shouldn't be a problem, because the objective is to detect the maze; the walls around the map should not be counted.

I also have a picture that shows DM-Factory_250 can be used for free-space detection; it worked (upload ASAP).

Even though it can detect the floor in the factory map, it couldn't detect the objects (e.g. tables, ladders, ventilators). A fix might be to create another color histogram and train it to detect objects. For each pixel you can then compare the probability that it is free space with the probability that it is an object.

What I need to do:
1) Change the wall texture of DM-plywood_corner_maze_250 with the Unreal editor
2) Implement an object-detecting histogram and rewrite the code to use it (the new histogram will be trained like the old one, but sort of inverted: only take pixels of objects, not of free space)

Friday, June 12, 2009

12-06-09

I've finished setting up the experiments for next week. I've also tried to train the office/compworldday2 histogram with more training data, but that didn't seem to work. The environment seems too dark for the color histogram to work; the colors are nearly the same, and that is the problem.

I'm going to use this weekend to think about the problem.

Thursday, June 11, 2009

11-06-09

I've spent my day trying to understand Gideon's code for creating histograms of new levels. I've finally managed to find out how to create training data, how to train the histogram, and how to output the histogram as a .hist file. I can now experiment with different environments to see if the omnicam is effective there.

I haven't done any real testing yet, only a first result. The first picture shows the original image of the office. The 2nd picture shows that when the image is normalized it loses a lot of detail, which makes this environment hard to classify. The 3rd picture (ignore the yellow scanline) shows the initial result: it classifies the wall as free space... I then tried raising the probability threshold; at 0.38 I got the 4th picture. It now classifies the floor, but loses a big part of it, which is not what we want. 0.37 gives the 3rd picture again. It might be that the training data isn't good; I haven't spent much time checking that. It could also be that the environment is all roughly the same color, which seems like a good explanation. I will need to run some more tests to figure it out.

Wednesday, June 10, 2009

10-06-09

I've been trying to find a solution for the MinRange problem of the omnicam: the omnicam can only work with a MinRange of about 70 cm. A solution would be to mount the camera higher, but that isn't achievable.

I've also been setting up the experiments in the code. Experiment 1, the accuracy test, is running: it outputs the laser ranges compared to the omnicam ranges to D:\QuangResults\, together with the average absolute error and the percentage error. It would be best to display the error rate as a Gaussian function.
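The two statistics are computed along these lines. A minimal sketch with names of my own, leaving out the actual writing to D:\QuangResults\ and treating the laser ranges as ground truth:

Module AccuracyTestSketch
    ' Compare omnicam ranges against laser ranges (the ground truth)
    Sub ComputeErrors(ByVal laser() As Double, ByVal omnicam() As Double)
        Dim absSum As Double = 0
        Dim pctSum As Double = 0
        For i As Integer = 0 To laser.Length - 1
            Dim absErr As Double = Math.Abs(omnicam(i) - laser(i))
            absSum += absErr
            pctSum += 100.0 * absErr / laser(i)
        Next
        Console.WriteLine("Average absolute error: {0:F3} m", absSum / laser.Length)
        Console.WriteLine("Average percentage error: {0:F1} %", pctSum / laser.Length)
    End Sub
End Module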

Experiment 2, the mapping experiment, is set up too. I'm going to drive around with the omnicam and make a log, then rerun the log with a laser range scanner to make a map of the same route, so I can compare both maps. The log is output to D:\...\Usar\UsarCommander\bin\debug\Logs

I've set up some booleans for experiments 1 and 2 in OmnicamRangefinder.vb. However, to let a laser map the environment you will need to popdata in ManifoldSlam.vb for the laser.

Usaragent.vb is where I can mount a Hokuyo or SICK laser on the OmniP2DX.

Tuesday, June 9, 2009

09-06-2009


The problem with the formula is that it works well at close range but very badly at longer ranges. Yesterday Arnoud gave me a better image server that can deliver image resolutions of up to 1600x1200 pixels.

This has its advantages: when there's an off-by-one pixel error, which can certainly be expected, the error in the resulting distance is not too damaging. The distance error can then be reduced to, for example, 10 cm instead of 40-50 cm on a low-resolution image.
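In other words: whatever the exact pixels-to-meters formula is, the damage of an off-by-one pixel error is roughly the difference between the distances of two neighbouring pixels, and that difference shrinks as the resolution goes up. A hypothetical sketch, where distanceFromPixel stands in for the calibration:

Module PixelErrorSketch
    ' distanceFromPixel stands in for whatever calibration maps a pixel
    ' radius to meters; the damage of an off-by-one pixel error is the
    ' finite difference of that mapping at the measured pixel.
    Function OnePixelError(ByVal distanceFromPixel As Func(Of Integer, Double), _
                           ByVal p As Integer) As Double
        Return Math.Abs(distanceFromPixel(p + 1) - distanceFromPixel(p))
    End Function
End Module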

The computer isn't fast enough to run at 1600x1200 pixels, so I've decided to run at 1024x768, which it can handle. I also implemented error propagation yesterday, but it doesn't seem to be very helpful.

I've been experimenting with the distance formula and found that it works accurately up to 4 meters with the omnicam I am using. Therefore the range of the omnicam will be 4 meters. Arnoud told me that is OK, because the average laser sensor also has a 4-meter range. I've tested the omnicam at a 1024x768 resolution with a 4-meter range while drawing only point correspondences; the result is the picture above.

The robot started in the bottom-right corner, and I decided to head south first. You can see that it works pretty well, but at the end it kind of broke down: it lost its self-localization. This must be because I was driving through pretty narrow passages at that time. I have to set a good minimum range to avoid this problem.

I've cleaned up the code; it is revision 1881 at the moment.

I will now start to set up my experiments. Here is a list of what I can test:

1) Evaluate the measured distances of the omnicam
1.1) Laser scanner measurements will be used as ground truth
1.2) Report how far off the omnicam was, as a percentage for example; at least something statistical

2) Evaluate a created map of an omnicam against a created map of a laser
2.1) Use the real map to evaluate by hand (or automatically, if such a tool exists)
2.2) Find the differences in the maps of the sensors.
Optional 2.3) Use another test environment

3) Show the advantages of the omnicam compared to a laser
3.1) Find situations where the omnicam is better (Examples: Holes in the floor)
Optional 3.2) Test the hypothesis in USARSim
Optional 3.3) Evaluate how well the omnicam is dealing with the situation

Optional 4) Let a real robot with an omnicam drive around at Science Park

To test the robot in different environments I will need to know how to create new color histograms, which I hope to find out tomorrow.

Friday, June 5, 2009

05-06-2009

I've been working on cleaning up the omnicam map. I've noticed something very important about the pixel-to-meters formula: it works very accurately at small distances. I've found that it can create great maps when the maximum distance it can measure is set to 2.5 meters. I haven't tried higher limits yet, but the higher the limit gets, the noisier the map becomes.
The two maps above were created by the omnicam and the laser respectively.
The omnicam doesn't create any new patches when it rotates (it can already look 360 degrees), and I've set the maxerrormeasurement distance a bit higher (a factor of 1.5) than for the laser scanner.

As you can see, the omnicam can make a pretty accurate map if it only measures small distances. However, you can also see that it classifies less free space; this is a side effect of the approach. The laser scanner is also a bit inaccurate, mostly at the long distances, so it suffers a bit from the same problem. However, it classifies more free space than the omnicam.

I don't know if it is valid to limit the measured-distance range of the omnicam like this; it has its advantages, but the disadvantage is of course that it cannot look far. I should find other techniques instead of this one, though I can keep this one as a backup.

Thursday, June 4, 2009

04-06-09

I've merged my work with revision 1323 of sroebert. This is an upgrade, because sroebert created an omnimap layer for USARSim. I've committed my work fused with sroebert's version as revision 1879. I've done some tweaking and tried to come up with new filter methods, because the techniques I've used so far aren't working. I've made a filter that only considers something free space if the N pixels after it are also free space. This didn't give a cleaner view of the map. I'm going to come up with some new techniques tomorrow.
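A minimal sketch of that filter, with placeholder names; isFree would come from the free-space classifier, walking along one scanline:

Module FreeSpaceFilterSketch
    ' Keep a pixel as free space only if the n pixels after it along the
    ' scanline are also classified as free space.
    Function FilterScanline(ByVal isFree() As Boolean, ByVal n As Integer) As Boolean()
        Dim filtered(isFree.Length - 1) As Boolean
        For i As Integer = 0 To isFree.Length - 1
            Dim allFree As Boolean = isFree(i)
            For j As Integer = i + 1 To Math.Min(i + n, isFree.Length - 1)
                allFree = allFree AndAlso isFree(j)
            Next
            filtered(i) = allFree
        Next
        Return filtered
    End Function
End Module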

Wednesday, June 3, 2009

03-06-09

The omnicam rangefinder is working, but it's not very accurate at drawing maps. That is why I tried some "map cleaning techniques" today. I've tried not drawing the uncorresponded points of the scan matcher; you can see this in the 1st picture. It's a little bit cleaner, but you can see that the map is missing some spots.

I've also tried increasing the maxtranslation ranges in ManifoldSlam.vb; a higher value means that the error distance compared to an older patch may be larger. It therefore draws fewer patches, and thus the map is updated less. The 2nd picture is the result of that combined with the technique from the first paragraph: you can see that after a while the robot can't localize itself anymore. The maxtranslation parameter is thus too high, and you can clearly see holes in the map caused by the previous technique.

I've also noticed that the points drawn far away are not very accurate; I think this is because of the distance formula. I'm still stuck on that, and I guess I have to solve it as soon as possible.

So no real improvements today, but at least I now know what not to do, which in turn is a small improvement.

Tuesday, June 2, 2009

02-06-2009

I've fixed the mirrored distances today, and suddenly the drawing of the omnicam rangefinder works a lot better. I think this is because scan matching can match the distances much better now. The 1st picture uses a laser rangefinder; as you can see it works pretty accurately. The 2nd picture uses the omnicam rangefinder; you can see that there is a path, but it's not very clean. You can clearly see thick concentrations of black dots, and black dots that shoot out. The 3rd picture shows that the shoot-outs might be caused by filtering that isn't strict enough, for example only classifying free space when the probability is high.

I might look at the distance formula again to try and clean up the concentrations of black dots, but I still don't have a clue how to solve that. The rangefinder now works with an estimated formula: I estimated it by manually measuring some real distances in meters against their distances in pixels. I've also tried other estimations of the distance formula to see what would happen, and I noticed that the map drawing really depends on the distance formula: I changed some variables and the outcome was horrible. The 4th picture shows this outcome.
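To illustrate what I mean by an estimated formula: measure a few (pixel distance, real distance) pairs by hand and interpolate between them. A minimal sketch with made-up calibration values; the real measurements and the formula I actually use are different:

Module DistanceFormulaSketch
    ' Manually measured calibration pairs (made-up values for illustration)
    Private ReadOnly pixelRadii() As Double = New Double() {40, 80, 120, 160}
    Private ReadOnly realMeters() As Double = New Double() {0.5, 1.0, 2.0, 4.0}

    ' Linearly interpolate between the measured pairs
    Function PixelsToMeters(ByVal p As Double) As Double
        If p <= pixelRadii(0) Then Return realMeters(0)
        For i As Integer = 1 To pixelRadii.Length - 1
            If p <= pixelRadii(i) Then
                Dim t As Double = (p - pixelRadii(i - 1)) / (pixelRadii(i) - pixelRadii(i - 1))
                Return realMeters(i - 1) + t * (realMeters(i) - realMeters(i - 1))
            End If
        Next
        Return realMeters(realMeters.Length - 1) ' clamp beyond the last measurement
    End Function
End Module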

I can also try some error propagation, like Scaramuzza describes it. Arnoud might have some ideas too; maybe it's because of the synchronization problem?! I'll figure this out tomorrow.