Results of the ImageCLEF 2005 Automatic Annotation Task

The ImageCLEF 2005 Automatic Annotation Task is part of the Cross Language Evaluation Forum (CLEF), a benchmarking event for multilingual information retrieval held annually since 2000. CLEF began as a track in the Text Retrieval Conference (TREC, trec.nist.gov).

The Automatic Annotation Task uses no textual information, only image content. The objective is to classify 1000 previously unseen images; 9000 classified training images are provided and may be used in any way to train a classifier.

The task website can be found here.

12 groups participated in this year's evaluation; they appear in the result table below under the identifiers cea, cindi, geneva-gift, infocomm, miracle, montreal, mtholyoke, nctu-dblab, ntu, rwth-i6, rwth-mi, and ulg.ac.be.
If I misspelled the name of your group or you want a link to your website, please contact me.

Each of these groups was allowed to hand in several submissions. The results are given in the table below, ranked by error rate; since the test set contains 1000 images, each 0.1% of error rate corresponds to one misclassified image.

For a baseline nearest-neighbor classifier that compares the images down-scaled to 32x32 pixels using Euclidean distance, the error rate is 36.8%, i.e. 368 images are misclassified.
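This baseline is easy to reproduce. The sketch below shows one way to do it in Python, assuming NumPy and Pillow are available and that image paths and labels have already been read from the training and test data; the helper names and file handling are illustrative and not part of the official task scripts.

    # Minimal sketch of the 32x32 Euclidean-distance 1-NN baseline.
    # NumPy and Pillow are assumed; paths, labels and the function names
    # are illustrative, not part of the official task scripts.
    import numpy as np
    from PIL import Image

    def to_vector(path, size=(32, 32)):
        # Load the image, convert to greyscale, down-scale and flatten.
        img = Image.open(path).convert("L").resize(size)
        return np.asarray(img, dtype=np.float32).ravel()

    def nn_classify(train_vectors, train_labels, test_vector):
        # Label of the training image with the smallest Euclidean distance.
        dists = np.linalg.norm(train_vectors - test_vector, axis=1)
        return train_labels[int(np.argmin(dists))]

    def baseline_error_rate(train_files, train_labels, test_files, test_labels):
        train_vectors = np.stack([to_vector(p) for p in train_files])
        errors = sum(
            nn_classify(train_vectors, train_labels, to_vector(p)) != label
            for p, label in zip(test_files, test_labels)
        )
        return 100.0 * errors / len(test_files)

Computing all 9000 distances per test image is slow but straightforward; the point of the baseline is only to give a reference error rate.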

submission                                                 error rate [%]
rwth-i6/IDMSUBMISSION: 12.6
rwth-mi/rwth_mi-ccf_idm.03.tamura.06.confidence 13.3
rwth-i6/MESUBMISSION: 13.9
ulg.ac.be/maree-random-subwindows-tree-boosting.res 14.1
rwth-mi/rwth_mi1.confidence 14.6
ulg.ac.be/maree-random-subwindows-extra-trees.res 14.7
geneva-gift/GIFT5NN_8g.txt 20.6
infocomm/Annotation_result4_I2R_sg.dat 20.6
geneva-gift/GIFT5NN_16g.txt 20.9
infocomm/Annotation_result1_I2R_sg.dat 20.9
infocomm/Annotation_result2_I2R_sg.dat 21.0
geneva-gift/GIFT1NN_8g.txt 21.2
geneva-gift/GIFT10NN_16g.txt 21.3
miracle/mira20relp57.txt 21.4
geneva-gift/GIFT1NN_16g.txt 21.7
infocomm/Annotation_result3_I2R_sg.dat 21.7
ntu/NTU-annotate05-1NN.result 21.7
ntu/NTU-annotate05-Top2.result 21.7
geneva-gift/GIFT1NN.txt 21.8
geneva-gift/GIFT5NN.txt 22.1
miracle/mira20relp58IB8.txt 22.3
ntu/NTU-annotate05-SC.result 22.5
nctu-dblab/nctu_mc_result_1.txt 24.7
nctu-dblab/nctu_mc_result_2.txt 24.9
nctu-dblab/nctu_mc_result_4.txt 28.5
nctu-dblab/nctu_mc_result_3.txt 31.8
nctu-dblab/nctu_mc_result_5.txt 33.8
Euclidean Distance, 32x32 images, 1-Nearest-Neighbor 36.8
cea/pj-3.txt 36.9
mtholyoke/MHC_CQL.RESULTS 37.8
mtholyoke/MHC_CBDM.RESULTS 40.3
cea/tlep-9.txt 42.5
cindi/Result-IRMA-format.txt 43.3
cea/cime-9.txt 46.0
montreal/UMontreal_combination.txt 55.7
montreal/UMontreal_texture_coarsness_dir.txt 60.3
nctu-dblab/nctu_mc_result_gp2.txt 61.5
montreal/UMontreal_contours.txt 66.6
montreal/UMontreal_shape.txt 67.0
montreal/UMontreal_contours_centred.txt 67.3
montreal/UMontreal_shape_fourier.txt 67.4
montreal/UMontreal_texture_directionality.txt 73.3
If you would like one of these lines to link to a description of the corresponding method, feel free to contact me.

The correct classification of the test data is available here. Together with the readconfidencefile.py script, it can be used to obtain the error rate from a confidence file in the format required for submission. If you run experiments on these data, take care not to use any information from the test data, which would lead to an optimistic evaluation: create a development set from the original training data, tune your parameters on that set only, and classify the test data only afterwards.
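For this recommended setup, a small helper like the following can hold out a development set from the 9000 training images and compute an error rate on it. The plain "filename label" file layout, the file name, and the function names are assumptions for illustration; the official readconfidencefile.py script defines the actual confidence-file format used for scoring submissions.

    # Sketch of the recommended workflow: hold out a development set from
    # the 9000 training images, tune on it, and use the test data only for
    # the final run. The 'filename label' file layout is an assumption for
    # illustration, not the official confidence-file format.
    import random

    def read_labels(path):
        # Read 'filename label' pairs from a plain-text file.
        with open(path) as f:
            return [tuple(line.split()) for line in f if line.strip()]

    def split_dev(pairs, dev_fraction=0.1, seed=0):
        # Randomly split the training data into training and development parts.
        rng = random.Random(seed)
        shuffled = list(pairs)
        rng.shuffle(shuffled)
        n_dev = int(len(shuffled) * dev_fraction)
        return shuffled[n_dev:], shuffled[:n_dev]

    def error_rate(predictions, truth):
        # Error rate in percent; both arguments map filename -> class label.
        wrong = sum(1 for name, label in truth.items() if predictions.get(name) != label)
        return 100.0 * wrong / len(truth)

    train_pairs, dev_pairs = split_dev(read_labels("train_labels.txt"))
    # ...train on train_pairs, predict labels for dev_pairs, then call
    # error_rate(predicted, dict(dev_pairs)) to tune parameters.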

Papers using these data

If you have published work using these data, or know of papers using them, I would be pleased if you could inform me.
Thomas Deselaers
Last modified: Tue Oct 25 17:51:54 CEST 2005
