The Taboo Challenge Handbook

Under construction...


    How the game is played

    A single game consists of an exchange of request-response message pairs, initiated by a START request sent by the guesser agent to the describer agent. The describer's first response returns the first hint. Each subsequent request sent by the guesser is a guess of a city. Each response to an incorrect guess contains the answer "no." and a new hint. The response to a correct guess is a simple "yes"; this is the last message exchanged in the game.

    An example gameplay is provided below:

                                Guesser: START
    Describer: sea  
                                Guesser: Sydney
    Describer: no. yearly festival
                                Guesser: Rio de Janeiro
    Describer: no. bridges
                                Guesser: Amsterdam
    Describer: no. renaissance art  
                                Guesser: Venice
    Describer: yes
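
    For local testing it can help to replay such an exchange programmatically. The Python sketch below is purely illustrative: the class, target city, and hint list are ours and are not part of the official API; it only mimics the request/response behaviour described above.

        # A minimal mock describer for local testing of a guesser.
        # The target city and hints are the ones from the example game above.
        class MockDescriber:
            def __init__(self, target, hints):
                self.target = target        # correct city name, e.g. "Venice"
                self.hints = list(hints)    # one hint is revealed per wrong guess
                self.next_hint = 0

            def start(self):
                """Answer the initial START request with the first hint."""
                return self._emit_hint()

            def guess(self, city):
                """Answer a guess with 'yes', or with 'no. ' plus the next hint."""
                if city.strip().lower() == self.target.lower():
                    return "yes"
                return "no. " + self._emit_hint()

            def _emit_hint(self):
                hint = self.hints[self.next_hint] if self.next_hint < len(self.hints) else ""
                self.next_hint += 1
                return hint

        # Replaying the example game:
        d = MockDescriber("Venice", ["sea", "yearly festival", "bridges", "renaissance art"])
        print(d.start())                  # sea
        print(d.guess("Sydney"))          # no. yearly festival
        print(d.guess("Rio de Janeiro"))  # no. bridges
        print(d.guess("Amsterdam"))       # no. renaissance art
        print(d.guess("Venice"))          # yes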
    
    The hints

    Each hint is a simple textual phrase in English. More specifically, a hint is always a simple noun phrase consisting of one to three words that are common nouns, adjectives, or connectors (such as ‘and’). The words may be inflected (e.g., in the plural). Guesser agents are expected to include some logic capable of understanding these hints (or at least matching them against, or looking them up in, a resource).

    In response to an incorrect guess, the hint is preceded by the string prefix "no. ".
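
    As a rough sketch of how such matching could look, the Python snippet below normalises hints (stripping the "no. " prefix, lower-casing, dropping connectors, crude de-pluralisation) and ranks candidate cities by keyword overlap. The tiny keyword index is a hypothetical stand-in for whatever resource your agent actually uses.

        # Hypothetical city-to-keywords index; a real agent would use a proper resource.
        CITY_KEYWORDS = {
            "Venice":         {"sea", "festival", "bridge", "renaissance", "art", "canal"},
            "Rio de Janeiro": {"sea", "festival", "carnival", "beach"},
            "Amsterdam":      {"bridge", "canal", "bicycle"},
        }

        def normalise(hint):
            """Strip the 'no. ' prefix, lower-case, drop connectors, de-pluralise crudely."""
            words = hint.removeprefix("no. ").lower().split()
            return {w.rstrip("s") for w in words if w not in {"and", "of", "the"}}

        def rank_cities(hints_so_far):
            """Order candidate cities by how many hint words their keywords cover."""
            seen = set().union(*(normalise(h) for h in hints_so_far))
            return sorted(CITY_KEYWORDS, key=lambda city: -len(seen & CITY_KEYWORDS[city]))

        print(rank_cities(["sea", "no. yearly festival", "no. bridges"]))  # 'Venice' first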

    The guesses

    Guesses should consist of one or both of the following:

    • a single city name (composed of one or more words) in English, e.g., "Rio de Janeiro" or "Venice" but not "Venezia";
    • latitude-longitude coordinates for the city guessed, as integer numbers, e.g., 48,5 but not 48.1224,5.7783.

    A guess is evaluated as correct if at least one of the two types of answer above is correct.
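
    The snippet below sketches a check of the two accepted formats. The exact syntax accepted by the official test suite is authoritative; this only mirrors the examples above.

        import re

        # Integer latitude,longitude such as "48,5"; decimals like "48.1224,5.7783" are rejected.
        COORD_RE = re.compile(r"^-?\d+,-?\d+$")
        # City names in English: letters plus spaces, dots, apostrophes, and hyphens.
        NAME_RE = re.compile(r"^[A-Za-z][A-Za-z .'-]*$")

        def classify_guess(guess):
            guess = guess.strip()
            if COORD_RE.match(guess):
                return "coordinates"
            if NAME_RE.match(guess):
                return "city name"
            return "invalid"

        print(classify_guess("Rio de Janeiro"))  # city name
        print(classify_guess("48,5"))            # coordinates
        print(classify_guess("48.1224,5.7783"))  # invalid (not integers)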

     

    Evaluation

    Evaluation consists of running the guesser agents a predefined number of times, each time playing a different game. The set of games to be used in the evaluation is predefined, is the same for all participants, and is naturally kept secret by the organisers until the end of the evaluations.

    The guessers are scored by the number of guesses they emit, up to and including the correct one. For example, the example game above is scored 4. If a guesser has not found the answer by the time the hints run out, the game is scored number_of_hints+10. The total score for a guesser is the sum of the scores of the individual games it played. The winner is the guesser with the lowest total score.
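
    A small worked example of the scoring rule, with hypothetical per-game results (the first entry corresponds to the example game above):

        def game_score(guesses, hints, solved):
            """Number of guesses emitted if solved, otherwise number_of_hints + 10."""
            return guesses if solved else hints + 10

        games = [
            {"guesses": 4, "hints": 4, "solved": True},   # the example game above -> 4
            {"guesses": 2, "hints": 6, "solved": True},   # solved on the second guess -> 2
            {"guesses": 7, "hints": 7, "solved": False},  # never solved -> 7 + 10 = 17
        ]

        total = sum(game_score(**g) for g in games)
        print(total)  # 23; the guesser with the lowest total score wins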

     

    Specifications for the agent to be submitted

    Input and Output

    Your agent implementation is not required to access the API directly. Instead, we provide a complete test suite (which will also be used for the evaluation, so please double-check that you are compliant with it!) that takes care of packing and unpacking the API requests and responses, passes the hints to your agent, and collects its guesses.

    To do so, your agent has to read the hints from STDIN and write the guesses to STDOUT. The test infrastructure takes care of delays and of waiting for responses to arrive. If you encounter any issues, we will be happy to help you work through them.
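
    As a concrete starting point, here is a minimal guesser skeleton in Python. It assumes a plain line-based framing (one hint per line on STDIN, one guess per line on STDOUT, game over on "yes"); the framing actually used is defined by the test suite, and the candidate list and strategy are placeholders for your own logic.

        #!/usr/bin/env python3
        import sys

        CANDIDATES = ["Venice", "Rio de Janeiro", "Amsterdam", "Sydney"]  # illustrative only

        def next_guess(hints, already_guessed):
            """Pick the next city to guess; replace with real hint-matching logic."""
            for city in CANDIDATES:
                if city not in already_guessed:
                    return city
            return "London"  # fallback once every candidate has been tried

        def main():
            hints, guessed = [], []
            for line in sys.stdin:
                line = line.strip()
                if line == "yes":        # correct guess: the game is over
                    break
                hints.append(line.removeprefix("no. "))
                guess = next_guess(hints, guessed)
                guessed.append(guess)
                print(guess, flush=True)  # flush so the test harness sees the guess immediately

        if __name__ == "__main__":
            main()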

    Programming languages and environment

    We do not want to hinder your creativity by imposing any programming language on you. What we expect, however, is that your implementation can run within the virtualisation environment we will use to run and evaluate submissions. We are going to set up two different VMs, one Windows and one Linux (information about the versions will be provided soon), and we will give you access to them for testing purposes. You will soon find here a detailed description of the requirements and a step-by-step tutorial.

     

    The online describer

    We provide an online describer agent for the purpose of testing your solution. The games (hints) provided by this describer are similar to those in the evaluation. (Note that for the evaluation we will use a different instance of this service, with the same API specification.)

     

    Resources available to guessers

    We

     

    The pilot competition

     

    Submitting the workshop paper

     

    Questions

    You can ask any questions regarding the Challenge specification by dropping an email to challenge@essence-network.com, and we will try to answer as soon as possible!

