Saturday 20 June 2009

20-06-09

1st pic: Omnicam WSM without rejection filter; the route runs from the middle to the exit of the maze
2nd pic: Official ground-truth dead-reckoning laser map of the same route
3rd pic: Omnicam QWSM with the rejection filter set at 1.25 meters
4th pic: Laser QWSM on the same route as the 3rd pic

I've created the outlier rejection filter and set it to 1.25 meters.
The rejection filter works as follows:
- Compute the average distance over all omnicam ranges
- Set MaxBoundary and MinBoundary to the average distance +/- the rejection distance (1.25 m)
- Go through the omnicam ranges; every distance above MaxBoundary gets its value set to MaxRange, and every distance below MinBoundary gets set to MinRange.
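The steps above can be sketched in Python (the project code itself is VB, in OmnicamRangefinder.vb; the function name and the MinRange/MaxRange constants below are my assumptions for illustration):

```python
# Sketch of the outlier rejection filter described above.
MIN_RANGE = 0.7    # assumed omnicam minimum range in meters
MAX_RANGE = 10.0   # assumed sensor maximum range in meters

def reject_outliers(ranges, rejection_distance=1.25):
    """Clamp ranges that deviate more than rejection_distance from the average."""
    average = sum(ranges) / len(ranges)
    max_boundary = average + rejection_distance
    min_boundary = average - rejection_distance
    filtered = []
    for r in ranges:
        if r > max_boundary:
            filtered.append(MAX_RANGE)   # outlier above: saturate to MaxRange
        elif r < min_boundary:
            filtered.append(MIN_RANGE)   # outlier below: saturate to MinRange
        else:
            filtered.append(r)           # within the boundaries: keep as-is
    return filtered
```

Note that because the boundaries are centred on the mean, a single very large outlier shifts the mean itself, which can push valid short ranges below MinBoundary.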

The rejection filter should make the scan-matching algorithms work better, because they are sensitive to outliers. I've tried WSM + rejection filter, but that didn't seem to work. QWSM + rejection filter does work, as you can see in the 3rd pic.

Friday 19 June 2009

19-06-09

I've done an experiment run on the factory map; I had already done one for the grass maze, and those results were quite OK. The factory map works really well with the omnicam.

1st pic: Original image of an object that you can look through but can't drive through.
2nd pic: The ranges from the omnicam; the omnicam detects the object, but the laser doesn't, because the laser looks right through it.
3rd pic: This is how the omnicam performs on other solid objects; the laser can pick these up too, though.
4th pic: The laser map of the factory route I have driven; I've done a small circle, as you can see.
5th pic: The omnicam map of the factory; you can see that the omnicam does a pretty good job with the same route. At the bottom-left part of the map you can see a difference with the laser: it is the object from the 1st and 2nd pic. The omnicam detected it and drew it in its map, but the laser did not. Apart from a small amount of noise, the omnicam does a better job in this situation.

Tuesday 16 June 2009

16-06-09

I've tried to implement a double histogram method: one histogram to identify free space, and the other to identify objects. However, this method doesn't seem to be working. The first picture shows that the object histogram is identifying the grass and not the walls?! A very weird result, but I think I trained it the right way: I collected data the same way as I collected the free-space training data. The 2nd picture shows this. It is a free-space detector created with the laser range data, and the yellow lines are the collected training data for the object histogram. As you can see, these yellow lines sit at the edges of the laser scan, so the histogram is trained to detect the walls, but it doesn't do that.

I can try to figure this out, but I'm afraid it might take a lot of time. I think I'm going to start doing experiments and getting results, because otherwise I'm afraid I won't have enough time for the project. While doing the experiments, I'll try to tweak my existing code to detect some objects.

Monday 15 June 2009

15-06-09

I've tried the free-space detector on two other levels. The first pic shows DM-Factory_250: the free-space detector is run on a non-normalised image, and it couldn't cope with the lighting changes. The 2nd pic shows the histogram in DM-plywood_corner_maze_250; you can see it can detect the maze, but it couldn't detect the outer wall of the level.

This shouldn't be a problem, because the objective is to detect the maze, so the walls around the map don't count.

I also have a picture which shows that DM-Factory_250 can be used for free-space detection; it worked (upload ASAP).

Even though it can detect the floor in the factory map, it couldn't detect the objects (i.e. tables, ladders, ventilators). A fix might be to create another color histogram and train it to detect objects. For each pixel you can then see the probability that it is free space and the probability that it is an object.
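The per-pixel comparison could look roughly like this Python sketch (the real histograms come out of Gideon's trainer as .hist files; the dict representation, bin count, and function names here are my assumptions):

```python
# Two-histogram idea: look up each pixel's color in both a free-space
# histogram and an object histogram, and label the pixel by whichever
# gives the higher probability. A histogram is modelled as a dict mapping
# a quantized (r, g, b) bin to a probability.
BINS = 8  # quantization levels per color channel (assumed)

def quantize(pixel):
    """Map an 8-bit RGB pixel to a coarse histogram bin."""
    r, g, b = pixel
    step = 256 // BINS
    return (r // step, g // step, b // step)

def classify_pixel(pixel, free_hist, object_hist):
    """Return 'free' or 'object' depending on which histogram scores higher."""
    key = quantize(pixel)
    p_free = free_hist.get(key, 0.0)
    p_obj = object_hist.get(key, 0.0)
    return "free" if p_free >= p_obj else "object"
```

A comparison like this avoids picking a single absolute threshold for the free-space probability, since the two histograms compete per pixel.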

What I need to do:
1) Change the wall texture of DM-plywood_corner_maze_250 with the Unreal editor
2) Implement an object-detecting histogram and rewrite the code to use it (the new histogram will be trained like the old one, but sort of inverted: only take pixels of objects, not of free space)
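The "inverted" training from step 2 boils down to feeding the same histogram trainer pixels sampled from objects instead of from free floor space. A minimal Python sketch of such a trainer (the actual trainer is Gideon's code, which writes a .hist file; this dict-based version and its names are my assumptions):

```python
# Train a color histogram from a list of sampled RGB pixels by counting
# quantized colors and normalizing the counts to probabilities. For the
# object histogram, the sample pixels come from objects/walls; for the
# free-space histogram, from the floor.
from collections import defaultdict

BINS = 8  # quantization levels per color channel (assumed)

def quantize(pixel):
    r, g, b = pixel
    step = 256 // BINS
    return (r // step, g // step, b // step)

def train_histogram(pixels):
    counts = defaultdict(int)
    for p in pixels:
        counts[quantize(p)] += 1
    total = len(pixels)
    # normalize counts so the histogram values sum to 1
    return {k: c / total for k, c in counts.items()}
```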

Friday 12 June 2009

12-06-09

I've finished setting up the experiments for next week. I also tried to train the office/compworldday2 histogram with more training data, but it didn't seem to work. The environment seems too dark for the color histogram to work: the colors are nearly the same, and that is the problem.

I'm going to use this weekend to think about the problem.

Thursday 11 June 2009

11-06-09

I've spent my day trying to understand Gideon's code for creating histograms of new levels. I've finally figured out how to create training data, how to train the histogram, and how to output the histogram as a .hist file. I can now experiment with different environments to see whether the omnicam is effective there.

I haven't done any real testing, but here is a first result. The first picture shows the original image of the office. The 2nd picture shows that when the image is normalized it loses a lot of detail, which makes this environment hard to classify. The 3rd picture (ignore the yellow scanline) shows the initial result: it classifies the wall as free space... I then tried raising the probability threshold; at 0.38 I got the 4th picture. It now rejects the wall, but it also loses a big part of the floor, which is not what we want. 0.37 gives the 3rd picture again. It might be that the training data isn't good; I haven't spent much time checking it. It could also be that the environment is largely the same color, which seems like a good explanation. I will need to run some more tests to figure it out.
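The threshold behaviour above can be illustrated with a tiny Python sketch: when much of the image sits near the decision boundary, a small bump in the threshold flips many pixels at once (the probability values below are made-up illustration data, not measurements from the office level):

```python
# Classify pixels as free space when their histogram probability clears a
# threshold, and look at how the accepted fraction changes with the threshold.
def free_fraction(probs, threshold):
    """Fraction of pixels whose free-space probability clears the threshold."""
    hits = sum(1 for p in probs if p >= threshold)
    return hits / len(probs)

# Illustration only: many pixels cluster just below 0.38.
probs = [0.375, 0.375, 0.40, 0.50, 0.20]
low = free_fraction(probs, 0.37)    # most pixels pass at 0.37
high = free_fraction(probs, 0.38)   # far fewer pass at 0.38
```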

Wednesday 10 June 2009

10-06-09

I've been trying to find a solution for the MinRange problem of the omnicam: the omnicam can only work with a MinRange of about 70 cm. A solution would be to mount the camera higher, but that isn't achievable.

I've also been setting up the experiments in the code. Experiment 1, the accuracy test, is running: it outputs the laser ranges compared to the omnicam ranges to D:\QuangResults\, with the average absolute error and the percentage error. It would be best to display the error distribution as a Gaussian.
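The two error metrics mentioned above could be computed as in this Python sketch (the experiment code itself is VB; the function names and the exact formulas, per-beam absolute error and error relative to the laser range as ground truth, are my assumptions):

```python
# Accuracy-test metrics: compare paired laser and omnicam range readings.
def average_absolute_error(laser, omni):
    """Mean of |laser - omnicam| over all paired beams, in meters."""
    return sum(abs(l - o) for l, o in zip(laser, omni)) / len(laser)

def percentage_error(laser, omni):
    """Mean per-beam |error| relative to the laser (ground-truth) range, in %."""
    return 100.0 * sum(abs(l - o) / l for l, o in zip(laser, omni)) / len(laser)
```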

I've got experiment 2 too: the mapping experiment. I'm going to drive around with the omnicam and make a log, then rerun the log with a laser range scanner to make a map of the same route, so that I can compare both maps. The log gets written to D:\...\Usar\UsarCommander\bin\debug\Logs

I've set up some booleans for experiments 1 and 2 in OmnicamRangefinder.vb; however, to let a laser map the environment you will need to popdata in Manifoldslam.vb for the laser.

Usaragent.vb is where I can mount a Hokuyo or SICK laser on the omnip2dx.