Participation

Registration & Communication

We ask all interested parties, including prospective participants, to subscribe to our spam-protected mailing list, where we will post updates a little more frequently than on the general task web site.  Access to the task data requires prior registration with this list, through which we will make available a licensing template and download instructions.

Schedule (Revised in December 2014)

  • Sunday, June 1, 2014: trial data available
  • Tuesday, August 5, 2014: training data available
  • Thursday, December 18, 2014: test data available
  • Monday, December 22, 2014: last date to schedule the testing period
  • Thursday, January 15, 2015: submission of system results

Access to the test data is limited to a period of six (6) days prior to the upload of system results.  We ask each team to let us know in advance when they want to start their individual six-day test period.  In other words, please send us a date between December 18, 2014 and January 10, 2015 on which you would like to gain access to the test data.

System Submissions

Submissions of system outputs must be made to the ‘official’ SemEval submission system (which will be opened to participants in December 2014).  This server is accessed through an individual, per-team upload address, which you will receive by email from the task organizers towards the end of the six-day evaluation period for your team (note that the latest possible submission deadline for this task is Thursday, January 15, 2015).

Our task has three target representations (DM, PAS, PSD), three tracks (closed, open, and gold), and two optional sub-tasks (predicate disambiguation and cross-linguistic variations).  Participants are expected to submit results for all three target representations, in-domain and out-of-domain, for English, but are free to submit to any of the three tracks, or to all of them, depending on whether or not any data or tools were used in addition to the training semantic dependency graphs provided for this task.  Furthermore, participants are allowed to submit up to two runs for each target representation, language, and track, for example reflecting different parameterizations of their system.  For details on the differences between the three tracks and the definition of runs, please see the evaluation page.

Given the above parameters, each submission can contain between six and forty-four result files, all in the official tab-separated SDP file format as documented on the data page.  So that we do not have to deal with a large number of individual files, we ask that the complete submission be ‘packaged up’ into a single compressed zip archive before upload to the SemEval server.
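
For illustration, a minimal sketch (in Python) of how such an archive could be assembled follows; the directory name ‘results/’ and the archive name ‘submission.zip’ are hypothetical placeholders, not part of the task requirements:

    # Sketch: bundle all '.sdp' result files into one compressed zip archive.
    # The names 'results/' and 'submission.zip' are hypothetical; adapt them
    # to your team's own layout before uploading to the SemEval server.
    import glob
    import os
    import zipfile

    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as archive:
        for path in sorted(glob.glob("results/*.sdp")):
            archive.write(path, arcname=os.path.basename(path))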

We ask that you name result files uniformly using the scheme ‘language.domain.track.format.run.sdp’, e.g. ‘en.ood.closed.dm.2.sdp’ for the second run using the DM target representation in the closed track for the English out-of-domain test data, or ‘cz.id.gold.psd.1.sdp’ for the first run with gold-track Czech in-domain PSD outputs.
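
To guard against naming mistakes before packaging, file names can be checked programmatically; the following sketch validates names against the scheme above (the value sets reflect the task parameters on this page, except that the language code for Chinese, ‘zh’, is our assumption):

    # Sketch: validate result file names of the form
    # language.domain.track.format.run.sdp (e.g. 'en.ood.closed.dm.2.sdp').
    import re

    LANGUAGES = {"en", "cz", "zh"}  # 'zh' for Chinese is an assumption
    DOMAINS = {"id", "ood"}
    TRACKS = {"closed", "open", "gold"}
    FORMATS = {"dm", "pas", "psd"}
    PATTERN = re.compile(r"^([a-z]+)\.([a-z]+)\.([a-z]+)\.([a-z]+)\.([12])\.sdp$")

    def is_valid(name):
        match = PATTERN.match(name)
        if match is None:
            return False
        language, domain, track, fmt, _run = match.groups()
        return (language in LANGUAGES and domain in DOMAINS
                and track in TRACKS and fmt in FORMATS)

    assert is_valid("en.ood.closed.dm.2.sdp")
    assert is_valid("cz.id.gold.psd.1.sdp")
    assert not is_valid("en.id.closed.amr.1.sdp")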

In addition to the ‘.sdp’ result files, each submission archive must include a README file that provides the following information (a sample skeleton follows the list):

  • Team identifier (provided by SemEval organizers);
  • Team member name(s) and affiliation(s);
  • Designated contact person and email address;
  • Inventory of results files included in the archive;
  • System characteristics, including (if applicable):
    • core approach;
    • important features;
    • critical tools used;
    • data pre- or post-processing;
    • additional data used.
  • Bibliographic references (if applicable).
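
For concreteness, a README along the following lines would satisfy these requirements; this is a purely hypothetical sketch, and all names and values are placeholders:

    Team:       <team identifier, as provided by the SemEval organizers>
    Members:    Jane Doe (University A), John Smith (University B)
    Contact:    Jane Doe <jane.doe@example.edu>
    Files:      en.id.closed.dm.1.sdp
                en.ood.closed.dm.1.sdp
                (one line per result file in the archive)
    Approach:   <core approach>
    Features:   <important features>
    Tools:      <critical tools used>
    Processing: <data pre- or post-processing>
    Data:       <additional data used>
    References: <bibliographic references>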

To make our job of receiving and evaluating submissions as smooth as possible, please be careful in implementing the above requirements and naming schemes.  The submission system makes it possible to re-submit results, and only the most recent file (submitted within the six-day evaluation period for each team) will be considered for evaluation.  As always, please do not hesitate to contact the task organizers (at the email address given under Contact Info below) in case you require additional information or clarification.


Contact Info

Organizers

  • Dan Flickinger
  • Jan Hajič
  • Angelina Ivanova
  • Marco Kuhlmann
  • Yusuke Miyao
  • Stephan Oepen
  • Daniel Zeman

sdp-organizers@emmtee.net

Other Info

Announcements

[06-feb-15] Final evaluation results for the task are now available; we are grateful to all (six) participating teams.

[08-jan-15] The evaluation period is nearing completion; we have purged inactive subscribers from the task-specific mailing list and sent out important information on the submission of system outputs for evaluation to the list.  If you have not received this email but are preparing a system submission, please contact the organizers immediately.

[17-dec-14] We are about to enter the evaluation phase, but recall that the closing date has been extended to Thursday, January 15, 2015.  We have sent important instructions on how to participate in the evaluation to the task-specific mailing list; if you plan on submitting system results to this task but have not seen these instructions, please contact the organizers immediately.

[22-nov-14] English ‘companion’ syntactic analyses in various dependency formats are now available, for use in the open and gold tracks.

[20-nov-14] We have completed the production of cross-lingual training data: some 31,000 PAS graphs for Chinese and some 42,000 PSD graphs for Czech. At the same time, we have prepared an update of the English training data, with somewhat better coverage and a few improved analyses in DM, as well as with additional re-entrancies (corresponding to grammatical control relations) in PSD. The data is available for download as Version 1.1 from the LDC. Owing to the delayed availability of the cross-lingual data, we have moved the closing date for the evaluation period to mid-January 2015.

[14-nov-14] An update to the SDP toolkit (now hosted at GitHub) is available, implementing the additional evaluation metrics ‘complete predicates’ and ‘semantic frames’.

[05-aug-14] We are (finally) ready to officially ‘launch’ SDP 2015: the training data is now available for distribution through the LDC; please register for SemEval 2015 Task 18, and within a day (or so) we will be in touch about data licensing and access information.

[03-aug-14] Regrettably, we are running late in making available the training data and technical details of the 2015 task setup; please watch this page for updates over the next couple of days!

[01-jun-14] We have started to populate the task web pages, including some speculative information on extensions (compared to the 2014 variant of the task) that we are still discussing. A first sample of trial data is available for public download.