Corpora: Extended Deadline LREC Workshop "Distributing and Accessing Linguistic Resources"

Wim Peters (W.Peters@dcs.shef.ac.uk)
Wed, 11 Feb 1998 15:13:12 GMT

***************
Last Call for papers

EXTENDED DEADLINE : March 15, 1998

***************
Distributing and Accessing Linguistic Resources
***********************************************

Workshop immediately before the First International Conference on
Language Resources and Evaluation (LREC),
May 27 1998
Granada, Spain
http://www.icp.grenet.fr/ELRA/conflre.html

Short description:

This workshop will discuss ways to increase the efficacy of linguistic
resource distribution and programmatic access, and work towards the
definition of a new method for these tasks based on distributed processing
and object-oriented modelling with deployment on the WWW.

Organizers: Yorick Wilks, Hamish Cunningham, Wim Peters, Remi Zajac

Workshop Scope and Aims
-----------------------

In general, the reuse of NLP data resources (such as lexicons or corpora)
has exceeded that of algorithmic resources (such as lemmatisers or parsers).
However, there are still two barriers to data resource reuse:

1) each resource has its own representation syntax and corresponding
programmatic access mode (e.g. SQL for CELEX, C or Prolog for WordNet,
SGML for the BNC);

2) resources must generally be installed locally to be usable (and of
course precisely how this happens, what operating systems are supported
etc. varies from case to case).

The consequences of 1) are twofold. First, although resources share some
structure in common (lexicons are organised around words, for example), this
commonality is wasted when it comes to using a new resource: the developer
has to learn everything afresh each time. Second, work which seeks to
investigate or exploit commonalities between resources (e.g. to link several
lexicons to an ontology) must first build a layer of access routines on top
of each resource. So, for example, if we wish to do task-based evaluation of
lexicons by measuring the relative performance of an information extraction
system with different instantiations of the lexical resource, we might end up
writing code to translate several different resources into SQL or SGML.
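
As a purely illustrative sketch (the names below are invented for this note
and are not part of GATE, CREOLE or any existing resource interface), the
kind of access layer we have in mind might look roughly as follows in Java:
each resource is wrapped once behind a common interface, after which an
information extraction system can be evaluated with either lexicon simply by
being handed a different instance.

  // Illustrative sketch only; the names are hypothetical, not an existing API.
  interface LexicalResource {
      // Uniform lookup, whatever the underlying storage (SQL, Prolog, SGML...).
      String[] sensesOf(String lemma);
  }

  // One thin wrapper per resource hides its native access mode.
  class CelexLexicon implements LexicalResource {
      public String[] sensesOf(String lemma) {
          // ... issue an SQL query against the CELEX tables here ...
          return new String[0];
      }
  }

  class WordNetLexicon implements LexicalResource {
      public String[] sensesOf(String lemma) {
          // ... delegate to the native WordNet C or Prolog interface here ...
          return new String[0];
      }
  }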

The consequence of 2) is that there is no way to "try before you buy": no
way to examine a data resource for its suitability for your needs before
licensing it. Correspondingly, there is no way for a resource provider to
expose limited access to their products for advertising purposes, or to gain
revenue through piecemeal supply of sections of a resource.

This workshop will discuss ways to overcome these barriers. The proposers
will discuss a new method for distributing and accessing language resources
involving the development of a common programmatic model of the various
resource types, implemented in CORBA IDL and/or Java, along with a
distributed server for non-local access. This model is being designed as
part of the GATE project (General Architecture for Text Engineering:
http://www.dcs.shef.ac.uk/research/groups/nlp/gate/) and goes under the
provisional title of an Active CREOLE Server. (CREOLE: Collection of REusable
Objects for Language Engineering. Currently CREOLE supports only algorithmic
objects, but will be extended to data objects.)
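
As a purely hypothetical illustration of what non-local access might look
like (the Active CREOLE Server design is still provisional, and none of the
names below are drawn from it), a remote interface along these lines could be
written in Java RMI, or expressed equivalently in CORBA IDL:

  // Hypothetical sketch of a remote resource server interface; not the
  // Active CREOLE Server design itself.
  import java.rmi.Remote;
  import java.rmi.RemoteException;

  public interface ResourceServer extends Remote {
      // Resources this server exposes, e.g. "WordNet", "CELEX".
      String[] availableResources() throws RemoteException;

      // Query a resource remotely, without installing it locally -- the basis
      // for "try before you buy" or piecemeal, pay-per-use access.
      String[] lookup(String resourceName, String lemma) throws RemoteException;
  }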

A common model of language data resources would be a set of inheritance
hierarchies making up a forest or set of graphs. At the top of the hierarchies
would be very general abstractions from resources (e.g. lexicons are about
words); at the leaves would be data items that were specific to individual
resources. Programmatic access would be available at all levels, allowing
the developer to select an appropriate level of commonality for each
application.
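
Purely as an illustration of this layered structure (the class names below
are invented for this sketch), the forest might be rendered in Java with a
very general root, intermediate abstractions shared by whole resource types,
and resource-specific leaves:

  // Illustrative sketch only; these classes do not exist in GATE/CREOLE.
  abstract class LanguageResource {                  // top: very general abstraction
      abstract String name();
  }

  abstract class Lexicon extends LanguageResource {  // lexicons are about words
      abstract String[] entriesFor(String word);
  }

  abstract class Corpus extends LanguageResource {   // corpora are about texts
      abstract String[] documents();
  }

  // Leaves hold what is specific to an individual resource; an application
  // can program against Lexicon or Corpus for maximum generality, or against
  // a leaf class when it needs resource-specific detail.
  class BncCorpus extends Corpus {
      String name() { return "BNC"; }
      String[] documents() {
          // ... SGML-specific handling of the BNC lives here, at the leaf ...
          return new String[0];
      }
  }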

Note that although an exciting element of the work could be to provide
algorithms to dynamically merge common resources (e.g. connect WordNet to
CELEX), what we're suggesting initially is not to develop anything
substantively new, but simply to improve access to existing resources. This
is NOT a new standards initiative, but a way to build on previous initiatives.

Of course, the production of a common model that fully expressed all the
subtleties of all resources would be a large undertaking, but we believe
that it can be done incrementally, with useful results at each stage. Early
versions will stop decomposing the object structure of resources at a fairly
high level, leaving the developer to handle the data structures native to
the resources at the leaves of the forest. There should still be a
substantial benefit in uniform access to higher-level structures.

Draft Program Committee
-----------------------

Yorick Wilks
Hamish Cunningham
Wim Peters
Remi Zajac
Roberta Catizone
Paola Velardi
Maria Teresa Pazienza
Louise Guthrie
Roberto Basili
Bran Boguraev
Sergei Nirenburg
James Pustejovsky
Ralph Grishman
Christiane Fellbaum

Paper Submission
----------------

FORMATTING GUIDELINES:

Papers should not exceed 4000 words or 10 pages.

HARD COPIES:

Three hard copies should be sent to:

Gill Callaghan, FAO Yorick Wilks
Dept. Computer Science
University of Sheffield
Regent Court
211 Portobello St.,
Sheffield S1 4DP
UK

ELECTRONIC SUBMISSION:

Electronic submission will be allowed in PostScript or HTML.
An FTP site will be available on demand.
Authors should send an information email to Hamish Cunningham
(hamish@dcs.shef.ac.uk) even if they submit in paper form. An electronic
submission should be accompanied by a plain ASCII text header of the
following form:

# NAME : Name of first author
# TITLE: Title of the paper
# PAGES: Number of pages
# FILES: Name of file (if also submitted electronically)
# NOTE : Anything you'd like to add
# KEYS : Keywords
# EMAIL: Email of the first author
# ABSTR: Abstract of the paper
# . . . . . .

IMPORTANT DATES

Paper Submission Deadline (Hard Copy/Electronic)   March 15th 1998
Paper Notification                                 April 1st 1998
Camera-Ready Papers Due                            May 1st 1998
DALR Workshop                                      May 27th 1998