ImageCLEF 2007 - Medical Automatic Annotation Task

The ImageCLEF 2007 Medical Automatic Annotation Task is part of the Cross Language Evaluation Forum (CLEF), a benchmarking event for multilingual information retrieval held annually since 2000. CLEF first began as a track in the Text Retrieval Conference (TREC, trec.nist.gov).

Retrieval tasks

In ImageCLEFmed 2007, there are two medical image retrieval tasks. Both tasks will likely require the use of image retrieval techniques for best results. The automatic image annotation task does not contain any text as input and is aimed at image analysis research groups. On request, we will try to make results of GIFT and FIRE available to participants without access to a CBIR system of their own. This page is concerned with the Medical Automatic Annotation Task.


Automatic image annotation

Automatic image annotation or image classification can be an important step when searching for images in a database. Based on the IRMA project, a database of 11,000 fully classified radiographs, taken randomly from medical routine, is made available and can be used to train a classification system. 1,000 radiographs for which classification labels are not available to the participants have to be classified. The aim is to find out how well current techniques can identify image modality, body orientation, body region, and the biological system examined, based on the images alone. The results of the classification step can be used for multilingual image annotations as well as for DICOM header corrections.

The classification task this year will consider the complete IRMA code, and it will be up to the groups to decide to what level of detail the images will be annotated. Accordingly, errors in the annotation will be counted depending on the depth in the code hierarchy and on the difficulty of the choice.
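To make the code structure concrete: a full IRMA code consists of four hierarchical axes separated by dashes, matching the four aspects listed above (modality, body orientation, body region, biological system). The following is a minimal parsing sketch in Python; the 4-3-3-3 field widths are taken from the example codes further down this page, and the axis attribute names are ours, not official identifiers.

from typing import NamedTuple

class IrmaCode(NamedTuple):
    technique: str   # imaging modality, e.g. "1121"
    direction: str   # body orientation, e.g. "127"
    anatomy: str     # body region, e.g. "700"
    biosystem: str   # biological system examined, e.g. "500"

def parse_irma(code: str) -> IrmaCode:
    """Split a full IRMA code into its four hierarchical axes."""
    parts = code.split("-")
    if len(parts) != 4 or [len(p) for p in parts] != [4, 3, 3, 3]:
        raise ValueError(f"not a full IRMA code: {code!r}")
    return IrmaCode(*parts)

print(parse_irma("1121-127-700-500"))
# -> IrmaCode(technique='1121', direction='127', anatomy='700', biosystem='500')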

Database & Download

Training set and extended training set now available:

The training data and the extended training data (a development set for tuning parameters) are now available.

Downloads (you will need the username/password that is provided when registering for ImageCLEF 2007):

Note: if you cannot download the data using the links directly, go to the "Information on the datasets" page, log in using the credentials provided upon registration for ImageCLEF, and then retry the links or navigate to the downloads on the IRMA page.

Results

Error counting scheme

In ImageCLEF 2007, the medical automatic annotation task considers the complete IRMA code and penalizes misclassifications at different levels of the code differently.

A detailed description of the error counting scheme is available here.
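The sketch below illustrates the general idea in Python: mismatches are weighted by their depth in the code, so an error near the root of the hierarchy costs more, and once a position is wrong, everything below it counts as wrong too. This is only an illustration of the principle stated above, not the official evaluation code; in particular, the half-penalty for a '*' ("not specified further") character and the omission of a per-node branching-factor weight (the "difficulty of the choice") are simplifications of ours.

def axis_error(truth: str, guess: str) -> float:
    """Depth-weighted error along one IRMA code axis.

    A mismatch at position i (1-based) costs 1/i; once a position is
    wrong, every deeper position counts as wrong as well.
    """
    error, wrong_above = 0.0, False
    for i, (t, g) in enumerate(zip(truth, guess), start=1):
        if wrong_above or (g != t and g != "*"):
            wrong_above = True
            error += 1.0 / i
        elif g == "*":              # assumed: a wildcard costs half a miss
            error += 0.5 / i
    return error

def code_error(truth: str, guess: str) -> float:
    """Sum the per-axis errors over all four axes of two full IRMA codes."""
    return sum(axis_error(t, g)
               for t, g in zip(truth.split("-"), guess.split("-")))

print(code_error("1121-127-700-500", "1121-127-700-500"))  # 0.0, exact match
print(code_error("1121-127-700-500", "1121-127-710-500"))  # ~0.83: position 2 of the
# anatomy axis is wrong (cost 1/2), so position 3 counts as wrong too (cost 1/3)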

The submission format will be:

<imageno> <imagecode>
<imageno> <imagecode>
...
e.g.
2034 1121-127-700-500
2229 1121-110-411-700
2630 1121-120-942-700
2633 1121-120-951-700
2711 1121-120-921-700
...
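Until the syntax checker tool mentioned below is available, a run file in this format can be sanity-checked with a few lines of Python. A minimal sketch, assuming one <imageno> <imagecode> pair per line; the 4-3-3-3 field widths are inferred from the example codes above, and allowing letters and '*' in code positions is an assumption of ours.

import re
import sys

# one "<imageno> <imagecode>" pair per line; field widths inferred from
# the examples above; alphanumeric characters and '*' allowed per position
LINE = re.compile(r"^(\d+) [0-9a-z*]{4}-[0-9a-z*]{3}-[0-9a-z*]{3}-[0-9a-z*]{3}$")

def check_run_file(path: str) -> bool:
    """Report malformed lines and duplicate image numbers; return True if clean."""
    ok, seen = True, set()
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            m = LINE.match(line.strip())
            if m is None:
                print(f"{path}:{lineno}: malformed line: {line.strip()!r}")
                ok = False
                continue
            imageno = m.group(1)
            if imageno in seen:
                print(f"{path}:{lineno}: duplicate image number {imageno}")
                ok = False
            seen.add(imageno)
    return ok

if __name__ == "__main__":
    sys.exit(0 if check_run_file(sys.argv[1]) else 1)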


Schedule

Submission of Results

For the submission of the results, we will provide a web interface in which you will have to specify some information about your runs.

Submission format

The submission file format is shown above under "Error counting scheme"; we will additionally provide a syntax checker tool in advance.

Publications

The most interesting and successful submissions to this task will be invited to contribute a paper to a special issue of Pattern Recognition Letters:

Automatic annotation of medical images for image retrieval -- ImageCLEF 2007
Guest Editors: Thomas Deselaers, Henning Müller, and Thomas M. Lehmann

Questions & Comments

If you have any questions or comments on this information, feel free to contact us:
Thomas Deselaers
Last modified: Thu Aug 16 08:43:40 CEST 2007
