[Corpora-List] CFP: Summ & QA Evaluation Workshop, IJCNLP'04 Deadline Extended

From: Chin-Yew Lin (cyl@ISI.EDU)
Date: Sat Jan 10 2004 - 03:24:10 MET

    CALL FOR PAPERS (Deadline Extended: Jan 19, 2004)

    WORKSHOP ON MULTILINGUAL SUMMARIZATION AND QUESTION ANSWERING 2004
    - Towards Systematizing and Automatic Evaluations

    (post-conference workshop in conjunction with IJCNLP-04)

    March 25, 2004
    Hainan Island, China

    WEB SITE: http://www.isi.edu/~cyl/msqa-eval-ijcnlp04

    [INTRODUCTION]
    Automatic summarization and question answering (QA) are enjoying a
    period of revival and are advancing at a much quicker pace than
    before. In the United States, TREC started an English QA track in
    1999, and DUC, sponsored by NIST, began a new English summarization
    evaluation series in 2001. In Japan, the NTCIR project included a
    Japanese text summarization task in 2000 and a QA task in 2001.

    One major challenge for these large-scale evaluation efforts is how
    to evaluate summarization and QA systems systematically and
    automatically. In other words, is there a consistent and principled
    way of estimating the quality of any summarization or QA system
    accurately, and can the evaluation process be automated? The release
    of the "Framework for Machine Translation Evaluation in ISLE (FEMTI)"
    and the recent adoption of the automatic evaluation metrics BLEU and
    NIST in the machine translation community are good examples that we
    might draw on and extend to summarization and QA evaluation. A good
    example of automatic evaluation of summaries is the ROUGE method
    developed at the Information Sciences Institute, University of
    Southern California.

    This workshop focuses on automatic summarization and QA. It will
    enable participants to discuss the integration of multiple languages
    and multiple functions and, most importantly, how to robustly
    estimate the quality of summarization and QA systems. We also welcome
    submissions on any aspect of summarization and QA, with main sessions
    dedicated to evaluation.

    [FORMAT FOR SUBMISSIONS]
       Submissions are limited to original, unpublished work. Submissions
       must use the IJCNLP LaTeX style files or the Microsoft Word style
       files tailored for IJCNLP. The IJCNLP style files can be found here.
       Paper submissions should consist of a full paper (5000 words or
       less, exclusive of title page and references). Papers outside the
       specified length may be rejected without review. Papers must be
       written in English.

    [SUBMISSION QUESTIONS]
       Please send submission questions to Chin-Yew Lin [cyl at isi.edu].

    [SUBMISSION PROCEDURE]
       Electronic submission only: send the PDF (preferred), PostScript,
       or MS Word version of your submission to: Chin-Yew Lin [cyl at
       isi.edu]. The subject line should be "IJCNLP-04 WORKSHOP PAPER
       SUBMISSION". Because reviewing is blind, no author information
       should be included in the paper itself. An identification page
       must be sent in a separate email with the subject line
       "IJCNLP-04 WORKSHOP ID PAGE" and must include the title, all
       authors, theme area (i.e., summarization, QA, or both), keywords,
       word count, and an abstract of no more than 5 lines. Late
       submissions will not be accepted. Notification of receipt will be
       e-mailed to the first author shortly after receipt.

    [DEADLINES (Tentative)]
       Paper submission deadline: Jan 19, 2004
       Notification of acceptance for papers: Feb 12, 2004
       Camera ready papers due: Feb 26, 2004
       Workshop date: March 25, 2004

    [PROGRAM CHAIRS]
       Hang Li, Microsoft Research Asia, China
       Chin-Yew Lin, USC/ISI, USA

    [PROGRAM COMMITTEE]
       Hsin-Hsi Chen, National Taiwan University, Taiwan
       Tat-Seng Chua, National University of Singapore, Singapore
       Junichi Fukumoto, Ritsumeikan University, Japan
       Takahiro Fukusima, Otemon Gakuin University, Japan
       Donna Harman, NIST, USA
       Hongyan Jing, IBM Research, USA
       Tsuneaki Kato, University of Tokyo, Japan
       Gary Geunbae Lee, Postech, South Korea
       Bernardo Magnini, Istituto Trentino di Cultura (ITC)/IRST, Italy
       Tadashi Nomoto, National Institute of Japanese Literature, Japan
       Manabu Okumura, Tokyo Institute of Technology, Japan
       John Prager, IBM Research, USA
       Dragomir Radev, University of Michigan, USA
       Karen Spärck Jones, Cambridge University, UK
       Simone Teufel, Cambridge University, UK
       Benjamin K Tsou, City University of Hong Kong, China
