ImageCLEF 2006 - Medical Automatic Annotation Task
The ImageCLEF 2006 Medical Automatic Annotation Task
is part of the Cross Language Evaluation Forum (CLEF), a
benchmarking event for multilingual information retrieval held
annually since 2000. CLEF began as a track in the Text REtrieval
Conference (TREC, trec.nist.gov).
Retrieval tasks
In ImageCLEFmed 2006, there are two medical image retrieval tasks.
Both tasks will likely require image retrieval techniques for best
results. The automatic image annotation task provides no text as
input and is aimed at image analysis research groups.
On request, we will try to make results of GIFT and FIRE available
to participants who do not have access to a CBIR system of their own.
This page is concerned with the Medical Automatic Annotation Task.
Automatic image annotation
Automatic image annotation, or image classification, can be an
important step when searching for images in a database. Based on
the IRMA project, a database of 10,000 fully classified radiographs
taken randomly from medical routine is made available and can be
used to train a classification system. 1,000 radiographs for which
the classification labels are withheld from the participants have
to be classified. The aim is to find out how well current
techniques can identify image modality, body orientation, body
region, and the biological system examined based on the images
alone. The results of the classification step can be used for
multilingual image annotations as well as for DICOM header
corrections.
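For illustration only (the task does not prescribe any method), a
minimal nearest-neighbor baseline in Python: each test image,
represented as a feature vector (e.g. downscaled pixel values),
receives the label of its closest training image. All data below
are synthetic stand-ins.

import numpy as np

def nearest_neighbor_labels(train_x, train_y, test_x):
    # Assign each test vector the label of its closest training vector.
    labels = np.empty(len(test_x), dtype=train_y.dtype)
    for i, x in enumerate(test_x):
        dists = np.linalg.norm(train_x - x, axis=1)  # Euclidean distances
        labels[i] = train_y[np.argmin(dists)]
    return labels

# Synthetic stand-in for two well-separated classes.
rng = np.random.default_rng(0)
train_x = np.vstack([rng.normal(0, 1, (20, 16)), rng.normal(5, 1, (20, 16))])
train_y = np.array([1] * 20 + [2] * 20)
test_x = np.vstack([rng.normal(0, 1, (3, 16)), rng.normal(5, 1, (3, 16))])
print(nearest_neighbor_labels(train_x, train_y, test_x))  # [1 1 1 2 2 2]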
Although only approximately 120 simple class numbers will be
provided for ImageCLEFmed 2006, the images are annotated with the
complete IRMA code, a multi-axial code for image annotation. The
code is currently available in English and German. For ImageCLEF
2007, it is planned to use the complete code and to let the
participants decide to what level of detail they can classify an
image. It is also planned to use the results of such automatic
image annotation tasks for further, textual image retrieval tasks
in the future.
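To make "multi-axial" concrete, here is a hypothetical Python
helper splitting an IRMA code into its axes, assuming the four-axis
layout TTTT-DDD-AAA-BBB (technique, direction, anatomy, biological
system) described in the IRMA publications; the sample code string
is made up.

def split_irma_code(code):
    # Assumed layout: TTTT-DDD-AAA-BBB, one field per axis.
    technique, direction, anatomy, biosystem = code.split("-")
    return {"technique": technique, "direction": direction,
            "anatomy": anatomy, "biosystem": biosystem}

print(split_irma_code("1121-127-720-500"))
# {'technique': '1121', 'direction': '127', 'anatomy': '720', 'biosystem': '500'}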
Database & Download
NEW DATA AVAILABLE
The training data is available for download. It consists of two
image sets and one metadata archive:
- a training set consisting of 9,000 images from 116 classes,
- a development set consisting of 1,000 images, subdivided into the
same 116 classes and meant for tuning your system,
- a textual annotation archive containing the textual description
of the classes in German and English.
The idea is that participants optimize their systems using these
data, i.e. training on the 9,000 images and testing on the 1,000
images. Then, all data are put together to train a system with the
parameters determined above, and this system classifies the 2006
test data, which will be released at a later date.
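A minimal sketch of this tune-then-retrain protocol, using a
k-nearest-neighbor classifier from scikit-learn as a stand-in for a
participant's actual system; the feature vectors and labels below
are synthetic.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-ins: in practice these would be feature vectors
# extracted from the 9,000 training and 1,000 development images,
# with class labels from 1 to 116.
rng = np.random.default_rng(0)
train_x, train_y = rng.normal(size=(9000, 64)), rng.integers(1, 117, 9000)
dev_x, dev_y = rng.normal(size=(1000, 64)), rng.integers(1, 117, 1000)

# Step 1: optimize on the given split (train on 9,000, test on 1,000).
def dev_error(k):
    clf = KNeighborsClassifier(n_neighbors=k).fit(train_x, train_y)
    return np.mean(clf.predict(dev_x) != dev_y)

best_k = min([1, 3, 5, 9], key=dev_error)

# Step 2: retrain on all 10,000 images with the parameter chosen
# above; this final system is the one that classifies the test data.
final = KNeighborsClassifier(n_neighbors=best_k).fit(
    np.vstack([train_x, dev_x]), np.concatenate([train_y, dev_y]))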
Test data
The test data is now available for download.
The login information is available from Carol Peters after registration for CLEF/ImageCLEF 2006.
Submission of Results
Results have to be submitted by June 9, 2006.
The submission website is now online: http://www-i6.informatik.rwth-aachen.de/~deselaers/imageclef06/medaat-submission/ic06submit.py.
In case you experience problems with submitting via this site,
please contact me.
You have to specify the following information:
- Contact person
  - name, email, contact address
- Group
  - group identifier (for presentation of results)
  - complete name of the group
  - address
- Run
  - run identifier (for presentation of results)
  - description (200-500 words)
  - file with classification results following the format explained below
  - whether you consider this run your primary run
Submission format
- comment lines start with #; be sure to put contact information
into a comment
- all other lines are of the format
<imageno> <confidence for class 1> <confidence for class 2> ... <confidence for class 115> <confidence for class 116>
- the class with the highest confidence is considered to be the
class of the image
- for each of the images to be classified there has to be exactly one line
- if you have several submissions, you can submit several files
- a program to check the submission format and a list of the files
which have to be classified are available. Together, these can be
used to check whether you have a valid submission; for a valid
submission, a run of this program will look like this:
# ./check_submission.python -c filenames [submissionfile]
conffile=../MYSUBMISSIONS/idmsubmission classfile=filenames
2034 is classified as X
2229 is classified as Y
2630 is classified as Z
....
373446 is classified as XX
373447 is classified as YY
373450 is classified as ZZ
classified: 1000 wrong: 0 correct: 0 illegal: 0 missing: 0
If the output looks different, e.g. if images are reported as
illegal or missing, you should check your submission file.
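For illustration, a minimal Python sketch that writes a submission
file in the format described above; the image numbers, confidence
values, contact line, and output file name are all made up.

# predictions maps image number -> list of 116 confidences (classes 1..116).
predictions = {2034: [0.0] * 116, 2229: [0.0] * 116}
predictions[2034][41] = 1.0  # index 41 -> class 42 gets the highest confidence
predictions[2229][7] = 1.0   # index 7  -> class 8

with open("idmsubmission", "w") as f:
    f.write("# contact: Jane Doe <jane.doe@example.org>\n")
    for imageno in sorted(predictions):
        confidences = " ".join(f"{c:g}" for c in predictions[imageno])
        f.write(f"{imageno} {confidences}\n")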
Questions & Comments
If you have any questions or comments on this information, feel
free to contact us:
- Thomas Deselaers for technical questions concerning the data
transfer and evaluation.
- Thomas Lehmann for general questions concerning the IRMA code
and/or the IRMA database.
Thomas Deselaers
Last modified: Wed Jun 7 18:11:50 CEST 2006