[Corpora-List] Final CFP - MSQA-Eval 2004 to be held with IJCNLP-04

From: Chin-Yew Lin (cyl@ISI.EDU)
Date: Wed Dec 10 2003 - 06:00:01 MET

    CALL FOR PAPERS

    WORKSHOP ON MULTILINGUAL SUMMARIZATION AND QUESTION ANSWERING 2004

    Workshop on
    Multilingual Summarization and Question Answering (2004)
    - Towards Systematizing and Automatic Evaluations

    (post-conference workshop in conjunction with IJCNLP-04)

    March 25, 2004
    Hainan Island, China

    WEB SITE: http://www.isi.edu/~cyl/msqa-eval-ijcnlp04

    [INTRODUCTION]
    Automatic summarization and question answering (QA) are now enjoying a
    period of revival and are advancing at a much quicker pace than before.
    In the United States, TREC started an English QA track in 1999, and DUC,
    sponsored by NIST, started a new English summarization evaluation series
    in 2001. In Japan, the NTCIR project included a Japanese text
    summarization task in 2000 and a QA task in 2001.

    One major challenge for these large-scale evaluation efforts is how to
    evaluate summarization and QA systems systematically and automatically. In
    other words, is there a consistent and principled way to estimate the
    quality of summarization and QA systems accurately, and can we automate
    the evaluation process? The release of the "Framework for Machine
    Translation Evaluation in ISLE (FEMTI)" and the recent adoption of the
    automatic evaluation metrics BLEU and NIST in the machine translation
    community are good examples that we may be able to leverage and extend to
    summarization and QA evaluation. A good example of automatic evaluation of
    summaries is the ROUGE method developed at the Information Sciences
    Institute, University of Southern California.
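
    To make the notion of automatic evaluation concrete, the following is a
    minimal sketch of a recall-oriented n-gram overlap score in the spirit of
    ROUGE-N. It is not the official ROUGE implementation; the function names,
    the whitespace tokenization, and the single-reference setup are
    simplifying assumptions made only for illustration.

    # Minimal ROUGE-N-style recall sketch (illustrative only, not the
    # official ROUGE package). Assumes one reference and whitespace tokens.
    from collections import Counter

    def ngram_counts(tokens, n):
        # Count the n-grams (as tuples) occurring in a token sequence.
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))

    def ngram_recall(candidate, reference, n=1):
        # Fraction of reference n-grams also found in the candidate, using
        # clipped (min) counts, i.e. a recall-oriented overlap score.
        cand = ngram_counts(candidate.lower().split(), n)
        ref = ngram_counts(reference.lower().split(), n)
        if not ref:
            return 0.0
        overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
        return overlap / sum(ref.values())

    # Example: unigram recall is 5/6, bigram recall is 3/5.
    print(ngram_recall("the cat was on the mat", "the cat sat on the mat", 1))
    print(ngram_recall("the cat was on the mat", "the cat sat on the mat", 2))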

    This workshop focuses on automatic summarization and QA. It will enable
    participants to discuss the integration of multiple languages and multiple
    functions and, most importantly, how to robustly estimate the quality of
    summarization and QA. We also welcome submissions related to any aspect of
    summarization and QA, with the main sessions dedicated to evaluation.

    [FORMAT FOR SUBMISSIONS]
          Submissions are limited to original, unpublished work. Submissions
    must use the IJCNLP LaTeX style files or the Microsoft Word style files
    tailored for IJCNLP; the IJCNLP style files are available from the
    conference web site. Paper submissions should consist of a full paper
    (5000 words or less, exclusive of title page and references). Papers
    outside the specified length may be rejected without review. Papers must
    be written in English.

    [SUBMISSION QUESTIONS]
          Please send submission questions to Chin-Yew Lin [cyl at isi.edu].

    [SUBMISSION PROCEDURE]
          Electronic submission only: send the PDF (preferred), PostScript, or
    MS Word version of your submission to Chin-Yew Lin [cyl at isi.edu]. The
    subject line should be "IJCNLP-04 WORKSHOP PAPER SUBMISSION". Because
    reviewing is blind, no author information should be included in the
    paper. An identification page must be sent in a separate email with the
    subject line "IJCNLP-04 WORKSHOP ID PAGE" and must include the title, all
    authors, the theme area (i.e., summarization, QA, or both), keywords, the
    word count, and an abstract of no more than 5 lines. Late submissions will
    not be accepted. Notification of receipt will be e-mailed to the first
    author shortly after receipt.

    [DEADLINES (Tentative)]
          Paper submission deadline: Dec 12, 2003
          Notification of acceptance for papers: January 10, 2004
          Camera-ready papers due: January 24, 2004
          Workshop date: March 25, 2004

    [PROGRAM CHAIRS]
          Hang Li, Microsoft Research Asia, China
          Chin-Yew Lin, USC/ISI, USA

    [PROGRAM COMMITTEE]
          Hsin-Hsi Chen, National Taiwan University, Taiwan
          Tat-Seng Chua, National University of Singapore, Singapore
          Junichi Fukumoto, Ritsumeikan University, Japan
          Takahiro Fukusima, Otemon Gakuin University, Japan
          Donna Harman, NIST, USA
          Hongyan Jing, IBM Research, USA
          Tsuneaki Kato, University of Tokyo, Japan
          Gary Geunbae Lee, Postech, South Korea
          Bernardo Magnini, Istituto Trentino di Cultura (ITC)/IRST, Italy
          Tadashi Nomoto, National Institute of Japanese Literature, Japan
          Manabu Okumura, Tokyo Institute of Technology, Japan
          John Prager, IBM Research, USA
          Dragomir Radev, University of Michigan, USA
          Karen Spärck Jones, Cambridge University, UK
          Simone Teufel, Cambridge University, UK
          Benjamin K Tsou, City University of Hong Kong, China


