Saturday 20 June 2009

20-06-09





1st pic: Omnicam WSM without rejection filter; the route runs from the middle to the exit of the maze
2nd pic: Official ground truth dead reckoning laser map of the same route
3rd pic: Omnicam QWSM with the rejection filter set at 1.25 meters
4th pic: Laser QWSM, same route as the 3rd pic

I've created the outlier rejection filter and set it to 1.25 meters.
The rejection filter works as follows (a sketch follows below):
- Compute the average distance over all omnicam ranges.
- Set the max and min boundaries at the average distance +/- the rejection distance (1.25 m).
- Go through the omnicam ranges; every distance above the max boundary is set to MaxRange, and every distance below the min boundary is set to MinRange.
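
Here is a minimal sketch of the filter in VB.NET (the parameter names are placeholders for the real fields in OmnicamRangefinder.vb):

' Sketch of the outlier rejection filter described above. ranges() holds the
' omnicam distances in meters; minRange/maxRange are the sensor limits that
' out-of-bound values get clamped to.
Public Sub RejectOutliers(ByVal ranges() As Double, ByVal rejectionDistance As Double, ByVal minRange As Double, ByVal maxRange As Double)
    ' 1) Compute the average distance over all omnicam ranges.
    Dim sum As Double = 0
    For Each r As Double In ranges
        sum += r
    Next
    Dim average As Double = sum / ranges.Length

    ' 2) Boundaries at the average +/- the rejection distance (1.25 m).
    Dim maxBoundary As Double = average + rejectionDistance
    Dim minBoundary As Double = average - rejectionDistance

    ' 3) Clamp every outlier to MaxRange or MinRange respectively.
    For i As Integer = 0 To ranges.Length - 1
        If ranges(i) > maxBoundary Then
            ranges(i) = maxRange
        ElseIf ranges(i) < minBoundary Then
            ranges(i) = minRange
        End If
    Next
End Sub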

The rejection filter is meant to make the scan matching algorithms work better, because they are sensitive to outliers. I've tried WSM + rejection filter, but that didn't seem to work. QWSM + rejection filter does work, as you can see in the 3rd pic.

Friday 19 June 2009

19-06-09






I've done an experiment run on the factory map; I had already done one for the grass maze and those results were quite OK. The factory map works really well with the omnicam.

1st pic: Original image of an object that you can look through but can't drive through.
2nd pic: The ranges from the omnicam; the omnicam detects the object, but the laser doesn't, because the laser looks right through it.
3rd pic: How the omnicam performs on other, solid objects; the laser can pick these up too.
4th pic: The laser map of the factory route that I drove; as you can see, I did a small circle.
5th pic: The omnicam map of the factory; you can see that the omnicam does a pretty good job on the same route. At the bottom left of the map you can see a difference with the laser map: it is the object from the 1st and 2nd pic. The omnicam detected it and drew it in its map, but the laser did not. Apart from some small noise, the omnicam does a better job in this situation.

Tuesday 16 June 2009

16-06-09



I've tried to implement a double histogram method: one histogram identifies free space and the other identifies objects. However, this method doesn't seem to be working. The first picture shows that the object histogram is identifying the grass and not the walls?! A very weird result, because I think I trained it the right way: I collected the data the same way as I collected the free space training data (see the sketch below). The 2nd picture shows this; it is a free space detector created with the laser range data, and the yellow lines are the collected training data for the object histogram. As you can see, these yellow lines are the edges of the laser ranges, so the histogram is trained to detect the walls, but it doesn't do that.
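
Collecting the object training data roughly comes down to this (a sketch; LaserRangeToPixel and the AddColor method are hypothetical stand-ins for the real helpers in Gideon's code):

' Sketch: for every laser beam, the pixel where the beam ends should lie on an
' object boundary (the yellow lines in the 2nd picture), so its color is added
' to the object histogram instead of the free space histogram.
For angle As Integer = 0 To laserRanges.Length - 1
    Dim edge As Point = LaserRangeToPixel(angle, laserRanges(angle))   ' hypothetical helper
    objectHistogram.AddColor(image.GetPixel(edge.X, edge.Y))           ' hypothetical method
Next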

I can try to figure this out, but I'm afraid it might take a lot of time. I think I'm going to start doing experiments and getting results, because otherwise I'm afraid I won't have enough time for the project. While doing the experiments, I'll try to tweak my existing code to detect some objects.

Monday 15 June 2009

15-06-09


I've tried the free space detector on two other levels. The first pic shows DM-Factory_250; the free space detection runs on a non-normalized image there, and it couldn't cope with the lighting changes. The 2nd pic shows the histogram in DM-plywood_corner_maze_250; you can see it can detect the maze, but it couldn't detect the outer wall of the level.

This shouldn't be a problem, because the objective is to detect the maze, so the walls around the map should not count.

I also have a picture which shows that DM-Factory_250 can be used for free space detection; it worked (upload ASAP).

Even though it can detect the floor in the factory map, it couldn't detect the objects (i.e. tables, ladders, ventilators). A fix might be to create another color histogram and train it to detect objects. For each pixel you can then compare the probability that it is free space with the probability that it is an object (see the sketch below).
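
Per pixel, that comparison could look something like this (a sketch; ColorHistogram and its Probability lookup are hypothetical stand-ins for the real histogram code):

' Sketch: a pixel counts as free space only when the free space hypothesis
' beats the object hypothesis.
Function IsFreeSpace(ByVal pixel As Color, ByVal freeHist As ColorHistogram, ByVal objectHist As ColorHistogram) As Boolean
    Dim pFree As Double = freeHist.Probability(pixel)       ' hypothetical lookup
    Dim pObject As Double = objectHist.Probability(pixel)   ' hypothetical lookup
    Return pFree > pObject
End Function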

What I need to do:
1) Change the wall texture of DM-plywood_corner_maze_250 with the Unreal editor
2) Implement an object-detecting histogram and rewrite the code to use it (the new histogram is trained like the old one, but sort of inverted: only take pixels of objects, not of free space)

Friday 12 June 2009

12-06-09

I've finished setting up the experiments for next week. I've also tried to train the office/compworldday2 histogram with more training data, but it didn't seem to work. The environment seems too dark for the color histogram to work; the problem is that the colors are nearly all the same.

I'm going to use this weekend to think about the problem.

Thursday 11 June 2009

11-06-09








I've spent my day trying to understand Gideon's code for creating histograms of new levels. I've finally managed to find out how to create training data, how to train the histogram, and how to output the histogram as a .hist file. I can now experiment with different environments to see if the omnicam is effective there.

I haven't done any real testing yet, but here is a first result. The first picture shows the original image of the office. The 2nd picture shows that when it is normalized, it loses a lot of detail, which makes this environment hard to classify. The 3rd picture (don't look at the yellow scanline) shows the initial result: it classifies the wall as free space... I then tried to raise the probability threshold; at 0.38 I got the 4th picture. It now classifies the floor, but it also loses a big part of the floor, which is not what we want. 0.37 gives the 3rd picture again. It might be that the training data isn't good; I haven't spent much time checking that. It could also be that the whole environment is roughly the same color, which seems like a good explanation. I will need to run some more tests to figure it out.

Wednesday 10 June 2009

10-06-09

I've been trying to find a solution for the MinRange problem of the omnicam: the omnicam can only work with a MinRange of about 70 cm. A solution would be to mount the camera higher, but that isn't achievable.

I've also been setting up the experiments in the code. Experiment 1, the accuracy test, is running: it outputs the laser ranges compared to the omnicam ranges to D:\QuangResults\, with the average absolute error and the percentage error (a sketch of these measures is below). It would be best to display the error distribution as a Gaussian.
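
The two error measures are computed in the obvious way (a sketch with made-up names; the laser is treated as ground truth):

' Sketch: compare every omnicam range against the laser range for the same beam.
Public Sub ComputeErrors(ByVal laser() As Double, ByVal omnicam() As Double, ByRef avgAbsError As Double, ByRef avgPctError As Double)
    Dim absSum As Double = 0
    Dim pctSum As Double = 0
    For i As Integer = 0 To laser.Length - 1
        Dim diff As Double = Math.Abs(omnicam(i) - laser(i))
        absSum += diff
        pctSum += diff / laser(i) * 100.0   ' laser is the ground truth
    Next
    avgAbsError = absSum / laser.Length     ' in meters
    avgPctError = pctSum / laser.Length     ' in percent
End Sub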

Experiment 2, the mapping experiment, is set up too. I'm going to drive around with the omnicam and make a log, then rerun the log with a laser range scanner to make a map of the same route, so that I can compare both maps. The log gets written to D:\...\Usar\UsarCommander\bin\debug\Logs

I've set up some booleans for experiments 1 and 2 in OmnicamRangefinder.vb; however, to let the laser map the environment you need to pop the laser's data in ManifoldSlam.vb.

Usaragent.vb is where I can mount a Hokuyo or SICK laser on the OmniP2DX.

Tuesday 9 June 2009

09-06-2009


The problem with the formula is that it works well at close range but very badly at longer ranges. Yesterday Arnoud gave me a better image server that can produce image resolutions of up to 1600x1200 pixels.

This has its advantages: when there's an off-by-one-pixel error, which can certainly be expected, the error in the resulting distance is not too damaging. The distance error can be reduced to, for example, 10 cm instead of 40-50 cm on a low resolution image.

The computer isn't fast enough to run at 1600x1200 pixels, so I've decided to run at 1024x768, which it can handle. I also implemented error propagation yesterday, but it doesn't seem to be very helpful.

I've been experimenting with the distance formula, and I've found that it works accurately up to 4 meters with the omnicam I am using. Therefore the range of the omnicam will be 4 meters. Arnoud told me that will be OK, because the average laser sensor also has a 4 meter range. I've tested the omnicam at 1024x768 resolution with a 4 meter range, drawing only point correspondences; the result is the picture above.

The robot started at the bottom right corner, and I decided to head south first. You can see that it works pretty well, but at the end it kind of broke down: it lost its self-localization. This must be because I was driving through pretty narrow passages at that time. I have to set a good minimum range to avoid this problem.

I've cleaned up the code; it is revision 1881 at the moment.

I will now start to setup my experiments, here is a list of what I can test:

1) Evaluate the measured distances of the omnicam
1.1) Laser scanner measurements will be used as ground truth
1.2) Report how far off the omnicam was, in percentages for example; at least something statistical

2) Evaluate a created map of an omnicam against a created map of a laser
2.1) Use the real map to evaluate by hand (or automatic if it exists)
2.2) Find the differences in the maps of the sensors.
Optional 2.3) Use another test environment

3) Show the advantages of the omnicam compared to a laser
3.1) Find situations where the omnicam is better (Examples: Holes in the floor)
Optional 3.2) Test the hypothesis in USARSim
Optional 3.3) Evaluate how well the omnicam is dealing with the situation

Optional 4) Let a real robot with an omnicam drive around at Science Park

To test the robot in different environments I need to know how to create new color histograms, which I hopefully will find out tomorrow.

Friday 5 June 2009

05-06-2009



I've been working on cleaning up the omnicam map. I've noticed something very important about the pixel-to-meters formula: it works very accurately at small distances. I've found that it can create great maps when the maximum distance it can measure is set to 2.5 meters. I haven't tried much higher distances yet, but the higher it gets, the noisier the map becomes.

The two maps above were created by the omnicam and the laser respectively.
The omnicam doesn't create any new patches when it rotates (it can already look 360 degrees). I've also set the maxerrormeasurement distance a bit higher (factor 1.5) than for the laser scanner.

As you can see, the omnicam can make a pretty accurate map if it only measures small distances. However, you can also see that it classifies hardly anything as free space; this is a side effect of this approach. You can also see that the laser scanner is a bit inaccurate, but this is mostly because of the long distances; it has a bit of the same problem. It does classify more free space than the omnicam, though.

I don't know if it is valid to limit the measured distance range of the omnicam; it has its advantages, but the disadvantage is of course that it cannot look far. I should find other techniques instead of this one, though I guess I can keep this one as a backup.

Thursday 4 June 2009

04-06-09

I've merged my work with revision 1323 of sroebert. This is an upgrade, because sroebert created an omnimap layer for USARSim. I've committed my work fused with sroebert's version as revision 1879. I've been tweaking and trying to come up with new filter methods, because the techniques I've used so far aren't working. I made a filter that only considers something free space if the N pixels after it are also free space (sketch below). This didn't give a cleaner map. I'm going to come up with some new techniques tomorrow.
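
Along one scanline the filter looks roughly like this (a sketch; freeFlags is a hypothetical per-pixel classification array):

' Sketch: a pixel only counts as free space if the n pixels after it along the
' scanline are classified as free space too.
Function IsRobustFreeSpace(ByVal freeFlags() As Boolean, ByVal index As Integer, ByVal n As Integer) As Boolean
    If Not freeFlags(index) Then Return False
    For j As Integer = index + 1 To Math.Min(index + n, freeFlags.Length - 1)
        If Not freeFlags(j) Then Return False
    Next
    Return True
End Function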

Wednesday 3 June 2009

03-06-09



The omnicam rangefinder is working, but it's not very accurate at drawing maps. This is why I've tried some "map cleaning techniques" today. I've tried not drawing the uncorresponded points of the scan matcher; you can see this in the 1st picture. It's a little bit cleaner, but you can see that the map is missing some spots.

I've also tried increasing the maxtranslation ranges in ManifoldSlam.vb; a higher value means that the error distance relative to an older patch may be larger, so fewer patches are drawn and the map is updated less often. The 2nd picture is the result of that plus the technique from the first paragraph; you can see that the robot can't localize itself anymore after a while. The maxtranslation parameter is thus too high, and you can clearly see holes in the map due to the previous technique.

I've also noticed that the points drawn far away are not very accurate; I think this is because of the distance formula. I'm still stuck on that, and I guess I have to solve it as soon as possible.

So no real improvements today, but at least I now know what not to do, which in turn is a small improvement.

Tuesday 2 June 2009

02-06-2009





I've fixed the mirrored distances today. Suddenly the drawing of the omnicam rangefinder works a lot better; I think this is because scan matching can match the distances much better now. The 1st picture shows a laser rangefinder; as you can see, it is pretty accurate. The 2nd picture uses the omnicam rangefinder; you can see there is a path, but it's not very clean. You can clearly see thick concentrations of black dots, and black dots that shoot out. The 3rd picture suggests that the shoot-outs might be caused by the filtering not being strict enough, e.g. only classifying free space when the probability is high.

I might look at the distance formula again to try to clean up the concentrations of black dots; however, I still don't have a clue how to solve it. The rangefinder now works with an estimated formula: I estimated it by manually measuring real distances in meters against pixel distances. I've also tried other estimates of the distance formula to see what would happen. I noticed that the map drawing really depends on the distance formula, because when I changed some variables the outcome was horrible. The 4th picture shows this outcome.

I could also try some error propagation like Scaramuzza described. Arnoud might have some ideas too; maybe it's because of the synchronization problem?! I'll figure this out tomorrow.

Thursday 28 May 2009

28-05-2009

I haven't figured out the pixel-to-meters formula yet, but I've done something else: I measured the longest distance in meters with the laser rangefinder, measured the corresponding longest pixel distance, and used 1.20 meters for the height to get the formula to work. It is still an approximation, but it seems to work well enough for now (a sketch of the idea is below).
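
One way to anchor such an approximation (only an illustration of the idea, not necessarily the exact formula in my code) is to assume the viewing angle grows linearly with the pixel radius and scale it so the measured maximums line up:

' Sketch: maxPixels and maxMeters are the measured longest pixel distance and
' the longest laser distance; height is the assumed 1.20 m camera height.
Function PixelsToMeters(ByVal pixelRadius As Double, ByVal maxPixels As Double, ByVal maxMeters As Double) As Double
    Const height As Double = 1.2
    Dim maxAngle As Double = Math.Atan(maxMeters / height)      ' angle of the farthest point
    Dim angle As Double = (pixelRadius / maxPixels) * maxAngle  ' assumed linear in pixel radius
    Return height * Math.Tan(angle)
End Function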

However, I now have sync problems, according to Arnoud's hypothesis: the ground truth sensor runs too slowly compared with the omnicam rangefinder, so the two sensors aren't in sync, and that is why the SLAM methods do not work. I will need to look at Steven Roebert's and Bas Terwijn's logs to solve this problem, because they had it last year too.

I've also noticed that I've been measuring the distances from a mirror image, because the camera is looking at a mirror. I now need to unmirror these distances, which should not be that hard.

Wednesday 27 May 2009

27-05-2009


I've got the normalization working in Visual Basic. I had some difficulties, mainly because I had the wrong formulas... I've tested the normalized image with the current grassHistogram.hist, and it seems to work a lot better than yesterday (look at the picture: red is free space and the white dots are the detected edges). I've also tried to get the newest version of the grassHistogram, and I've created a few methods in Gideon's free space detection software to do this. I've emailed Tijn about how to create the histogram file, because Gideon didn't know. The current histogram works pretty well with the normalized image, so getting the updated version is not my priority right now.
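
For reference, the normalization comes down to something like this (a sketch assuming the standard normalized-rgb formula, r = R/(R+G+B) and so on):

Imports System.Drawing

' Sketch: divide each channel by the pixel intensity so that only the color
' information remains (which is also why the normalized image loses so much detail).
Function NormalizePixel(ByVal c As Color) As Color
    Dim sum As Integer = CInt(c.R) + CInt(c.G) + CInt(c.B)
    If sum = 0 Then Return Color.Black   ' avoid division by zero on pure black
    Return Color.FromArgb(CInt(255.0 * c.R / sum), CInt(255.0 * c.G / sum), CInt(255.0 * c.B / sum))
End Function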

The rangefinder can now find the free space in a picture, which is what I want. However, it isn't very clean at the moment; it still has some measurement errors. I should find out whether this is a problem for the scan matching and SLAM algorithms. I can do that later, once I've got the full system operational.

I've still got 2 things to do right now:

1) Figure out the pixel-to-meters formula
2) Convert the omnicam ranges to LaserRangeData and notify the laser sensor

usarsim.ini tells me the specs of the OmniP2DX, and Arnoud says he estimates the height of the camera and image to be 1.20 meters.

I've implemented the conversion from omnicam ranges to laser range data. It seems to work for now, even though I've set all the ranges to 20 meters; that is just for testing, because I haven't figured out the pixel-to-meters formula yet. I've implemented it so that it calls Manifold.ProcessLaserRangeData directly. I've also disabled the laser range sensor for the SLAM methods: the laser range sensor still pops off its laser range data, but the data is not used for anything.
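
The conversion itself is only a few lines (a sketch; the LaserRangeData constructor arguments and the timeStamp variable are assumptions, the real signature is in LaserRangeData.vb):

' Sketch: fill a range array (everything at 20 m for now) and push it into the
' SLAM pipeline as if it came from a laser.
Dim ranges(179) As Single   ' 180 points, like a laser scan
For i As Integer = 0 To ranges.Length - 1
    ranges(i) = 20.0F       ' placeholder until the pixel-to-meters formula is done
Next
Dim data As New LaserRangeData(ranges, timeStamp)   ' assumed constructor
Manifold.ProcessLaserRangeData(data)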

To fully test whether my implementation works I really need to figure out the pixel-to-meters formula, because that is the only thing left to do right now.

Tuesday 26 May 2009

26-05-2009



I've been busy figuring out how to get the color histogram to work. I have to put the grassHistogram.hist file in
D:\...\Usar\UsarCommander\bin\Debug\

The grassHistogram.hist uses a bin size of 20; I found that number in "D:\Doas Gideon\Usar gideon working version", in the file SkinDetector.vb.

The pictures above are some results I've gotten with the grassHistogram.hist. They are quite bad, though.

The red dots are free space detected by the histogram, and the white dots in the 2nd picture are what the rangefinder detects as the boundaries of an object; as I said, it works really badly.

I am going to normalize the image to make the color histogram work better. I am also going to email Gideon about what his newest grass histogram is; hopefully he has a more up-to-date version of the histogram, because the results are quite bad now. I am going to try to implement the normalization of RGB in Visual Basic tonight.

Wednesday 20 May 2009

20-05-2009

I've tried to implement the piece of the program that is supposed to turn the omnicam rangefinder distances into LaserRangeData. It isn't done yet.
I've still got three things to do:

1) Get the color histogram to work
2) Figure out the pixel-to-meters formula
3) Convert the omnicam ranges to LaserRangeData and notify the laser sensor

Tuesday 19 May 2009

19-05-2009


Here is the result of the fixed scanlines; all the blue lines are the scanlines of the rangefinder. The program will detect objects along these scanlines.

These are the scan ranges of the omnicam; they avoid considering the black parts of the image (the corners and the center):
A minimum pixel radius of 362-365 seems good with a 320 pixel center.
The maximum pixel radius is 547, so 544 is good enough with a 320 pixel center.
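
A polar scanline then walks outward between these two radii (a sketch; IsFreeSpace stands in for the histogram test on a pixel):

Imports System.Drawing

' Sketch: walk outward from the minimum to the maximum pixel radius along one
' scanline and stop at the first pixel that is not free space.
Function FindObjectRadius(ByVal image As Bitmap, ByVal angleDeg As Double, ByVal centerX As Integer, ByVal centerY As Integer, ByVal minRadius As Integer, ByVal maxRadius As Integer) As Integer
    Dim angle As Double = angleDeg * Math.PI / 180.0
    For radius As Integer = minRadius To maxRadius
        Dim x As Integer = centerX + CInt(radius * Math.Cos(angle))
        Dim y As Integer = centerY + CInt(radius * Math.Sin(angle))
        If Not IsFreeSpace(image.GetPixel(x, y)) Then Return radius   ' hypothetical histogram test
    Next
    Return maxRadius   ' no object found on this scanline
End Function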

These are some values for the pixel-to-meters formula; I have no idea what to do with these right now:
Formula: r^2 / 0.2333 - z^2 / 1.1357 = -1
corresponds to
k = 11.546, c = 2.321
z-values have to be corrected by 1.161

Monday 18 May 2009

18-05-2009

I've tested my code at the UvA. It at least ran, and my omnicam rangefinder kept getting new images, so I think I've implemented it in the right spot. There is still a bug in drawing the scanlines: it draws them in the wrong directions, which is weird because I really think I used the right formulas for this. The lines do come from the center of the image, which is a good thing. I'm already using a histogram, but I don't think it works yet, because it doesn't detect anything along the scanlines. I'm going to look at that once I have fixed the scanlines. I've also emailed Arnoud about questions 3 and 4 from 17-05-2009.

Sunday 17 May 2009

17-05-2009

I've finally got the time to work more on this project (no more classes). I've tried to implement the omnicam code into the system; I can only test this at the UvA, where I will be heading tomorrow. It is not fully done yet, because I need to find out some constants of the OmniP2DX. This is a list of what I need to do for now:

1) Find the grassmodelhistogram for testing

2) Find out the radius of the omnicam image and the minimum radius, because the center is black

3) Find out the mirror height and the alpha of the OmniP2DX, according to Scaramuzza's formula for distance calculation

4) Decide what distance to use when no object has been found on a scanline

Tuesday 12 May 2009

12-05-2009

Arnoud has helped me understand how the code works. He has also shown me the place where I can put in my code. I have also been given an older revision (rev 1251 merged with Dialogs from rev 1323) of the code, where I can easily pop omnicam images from the sensor.

Arnoud has also shown me how to mask the omnicam rangefinder data as LaserRangeData.vb objects, which is very helpful. "D:\Doas\Gideon\Usar" shows me how Gideon used the color histogram for free space detection.

The code line "observers.notifysensorupdate" in agent.vb/camerasensor.vb updates all sensor observers with a new omnicam image. The skin detector currently pops the images, but I can throw the skin detector out and put in the omnicam rangefinder, so that my program gets the images.

Monday 4 May 2009

04-05-2009

After emailing Arnoud about how to scan an omnicam image, I've decided to use polar scanning instead of unwrapping the image into a rectangular image. Polar scanning is much faster, and it is much easier once you have worked out the math.

I've already written some code for the omnicam rangefinder, but the problem is implementing it in the right files. I am really going to need Arnoud's help on this one, because I don't have the foggiest idea where to put it. All my attempts at doing this are failing.

Tuesday 28 April 2009

28-04-2009

My approach to implementing the omnicam rangefinder isn't really working. I've tried to implement it in the Agent and in Imageanalysis, but I get stuck all the time, mostly, I guess, because I do not fully understand the code yet.

I would also need to create an entire subsystem just for the omnicam rangefinder, which is going to take a lot of time. I really need to research masking the omnicam as a LaserRangeData object. Therefore I will now try to find out where LaserRangeData.vb gets its data from and where it saves it.

Friday 24 April 2009

24-04-2009

Patch.vb, #Region "Omnicam Observations", method ProcessCameraData determines WHEN (e.g. how many times per second) the omnicam image is processed.
ManifoldSlam.vb, method ProcessOmnicamData, processes the camera data (no images are produced, only processing) and it processes the currentPose too.
ManifoldSlam.vb, method ProcessSensorUpdate, forwards any sensor update for SLAM, so this is kind of the controller method.

Find out what one LaserRangeData scan is: is it only 1 point, or really 180 points as one object?

Find out whether the camera data is in meters or something else, because scan matching works in millimeters.

I think I have to replace ScanObservation.vb with something that works for the omnicam; it uses laser range data as input right now. I can also mask the omnicam output as laser range data again, which is actually still the easiest way to tackle this problem. The only problem is that laser range data has a maximum of 180 points, while the omnicam should use at least 360 points (1 radial line per degree); will that work in the current SLAM system?? I haven't checked this yet. I could also divide an image into 2 parts (180 points each) and push them both through the system (see the sketch below).
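
Splitting would be trivial (a sketch; omnicamRanges is a hypothetical array holding the 360 omnicam distances):

' Sketch: split the 360 omnicam ranges into two halves of 180 points each, so
' each half can be masked as a normal 180 point laser scan.
Dim front(179) As Single
Dim back(179) As Single
For i As Integer = 0 To 179
    front(i) = omnicamRanges(i)
    back(i) = omnicamRanges(i + 180)
Next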

Thursday 23 April 2009

23-04-2009

I have to find the file calibrate_camera.m from Scaramuzza for the omnicam rangefinder. This file tells me how to calibrate the omnicam.
OmnicamRangeSensor.vb is a file I created; I am going to try to implement the entire omnicam rangefinder in this file. I will still need a separate file for the color histogram.

Thursday 9 April 2009

09-04-2009

There are now 2 robots that I can use to test my rangefinder ("OmniP2AT" and "OmniP2DX"). Arnoud has fused rev. 1323 with all the new techniques from rev. 1799. It is now a kind of experimental/custom revision that has some bugs, but the omnicam part of the program works, which is of course the most important part.

Both of the MAZE coordinates that I found on 08-04-2009 are correct; they are just different locations in the maze.
I have also found out how to retrieve omnicam images: \Agents\Sensors\Camerasensor.vb is used to retrieve them.

Patch.vb is used a lot throughout the program. It is used for the localization of the robot: it takes a picture at a certain location, and the robot then drives away from that location. While driving, the robot takes new pictures; if the picture match drops below 90%, the robot makes a new patch.

I have also looked at the laser rangefinder part of the program. An omnicam rangefinder will be some kind of copy of the laser rangefinder, which is why it is important to look at the structure of the laser rangefinder. The method "ProcessLaserRangeData" in ManifoldSlam.vb is used for the rangefinder.
I can create my omnicam rangefinder system by following the structure of the "ProcessLaserRangeData" method. I think I know enough to start writing the code. I am going to do this next week.

Wednesday 8 April 2009

08-04-2009

Arnoud helped me start up USARSim today. The problem was that I had not put in the rotation coordinates. I will also use the "extended image server" program for USARSim. This program can create camera images with a resolution of 640x480.
We will also make the omnicam rangefinder process 1 image per second, because finding free space in an image can take some time. 1 image per second should give the system sufficient time to keep up.

I also know the coordinates of the "DM-compWorldDay1_250" maze; this is the environment where I am going to test my rangefinder:
(-35.64, -11.1, -4.0; according to Arnoud), found at this link:

http://staff.science.uva.nl/~arnoud/research/roboresc/March2007.html

Arnoud gave me sroebert's revision 1323 to work with. I am going to find the files that are used to get an omnicam image.

Images are stored in OmnicamObservation.class, and OmniMaplayer.class makes use of the images. OmniMaplayer.class makes a lot of references to Patch.class; I will find out what that class does tomorrow.

I have also found other coordinates for the maze in the sroebert program.

MAZE coordinates (-54.51, -6.97, -4.0; according to SRoebert) come from the file compday-maze.cfg in sroebert rev. 1323; I am going to find out where these coordinates lie the next time I use USARSim.

Tuesday 7 April 2009

07-04-2009

I turned in my time schedule yesterday. The time schedule did not really help me, but it gave me a good overview of what to do. I went to the robolab today and tried to start up USARSim. USARSim gave me a "wrong coordinates" error, but I knew for sure that I had put in the right starting position coordinates. I will have to ask Arnoud (my supervisor) about this.

Saturday 4 April 2009

04-04-2009

I still have to get used to USARSim (the program that I am going to use for my research). Unfortunately I did not have time to work on my research this week. I am going to use this weekend to create a time schedule, which should be done by tomorrow. I will use next week to get used to USARSim. I also turned in my project proposal last week.

About this blog

Hi,

I am a student at the UvA, studying 'kunstmatige intelligentie' ('artificial intelligence'). My bachelor thesis is about the creation and performance testing of an omnicam rangefinder. I will develop and test the rangefinder in a program called USARSim, which is a simulation of the real world.