Installing and configuring Judge

The judge system reads user submissions and test submissions and grades them. It can be run with different configurations (called "environments") depending on the situation: for example, during an exam the judge typically grades with the example test cases, while during final grading it grades with a different set of test cases. These multiple configurations are realized as separate environments, each configured so that the judge looks for a different set of test data.

Basic directory structure

/[judge-root]
  /ev                     (containing grading information for each problem)
    /problem_1
     ...
    /problem_n
    /test_request         (containing TEST interface template for each problem)
      /problem_1
      ...
      /problem_n
  /result                 (containing grading results)
    /user_1
      /problem_1
        /submission_1     
    ...
    /user_n
  /scripts                (where all scripts are)
    /config               (where all config files are)
    /lib         
    /std-script           (grading scripts)
    /templates            (used for importing scripts)
    /test
  /log

Judge environments

Currently there are three environments: exam, grading, and test. Only the first two are relevant to normal usage; the test environment is used only when the system runs its unit test scripts.

The main difference between the exam environment and the grading environment, other than the different locations of the test cases, is how the grading output is shown to the user. In the exam environment, the system is configured to report only 'passed' or 'failed', whereas in the grading environment the result of each test case is shown.

How to install and use

It is as easy as this: check out the scripts, edit the config files, put in the test data, and go!

Check out the scripts directory from the SVN
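
For example, assuming the placeholder URL below stands for your actual SVN repository, the checkout might look like this:

  svn checkout svn://example.org/judge/scripts (judge-home)/scripts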

Edit config files

Config files are in (judge-home)/scripts/config. There you will find sample config files (named *.SAMPLE).
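
A typical way to start, assuming the samples follow the *.SAMPLE naming pattern for each config file described below, is to copy each sample and then edit the copy:

  cd (judge-home)/scripts/config
  cp environment.rb.SAMPLE environment.rb
  cp env_grading.rb.SAMPLE env_grading.rb
  cp env_exam.rb.SAMPLE env_exam.rb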

  • First you have to copy and edit environment.rb.
    • RAILS_ROOT --- The judge accesses submissions through Rails ActiveRecord; therefore, it has to load the Rails environment. Set RAILS_ROOT to the root of the Rails application for the web interface. (There is a drawback to this design: you have to install and configure the web interface even when you just want to run the judge system.)
    • GRADER_ROOT --- This is the directory where the scripts are. It should be (judge-home)/scripts/config. (Note: this should actually read JUDGE_SCRIPT_ROOT; will fix it later ---Jittat 17:35, 16 March 2008 (ICT))
  • For each environment, you will have to edit its configuration. The configuration file for environment (ENV) is env_(ENV).rb. Most of the configuration should work as is (except that currently both the grading and exam environments are configured to share the same ev directory). You configure the system through Ruby statements inside a Grader::Initializer.run do |config| ... end block; for each configuration parameter, you set the corresponding attribute of the config variable, as in the sketch after this list.
    • Basic attributes
      • config.problems_dir --- This is where the test data are. Usually it is (judge-home)/ev, but you may want to configure it differently for the exam and grading environments.
      • config.user_result_dir --- This is where the grading results are stored. Again, as with problems_dir, you may want to set it differently for different environments.
    • Other attributes (to be documented later --- Jittat 18:03, 16 March 2008 (ICT))
      • Locations
        • config.problems_dir
        • config.user_result_dir
        • config.test_request_input_base_dir
        • config.test_request_output_base_dir
        • config.test_request_problem_templates_dir
      • Logging and reporting status of the judge
        • config.talkative
        • config.logging
        • config.log_dir
        • config.report_grader
      • Reporting result
        • config.comment_report_style
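
The following is a minimal sketch of the two files, assuming the directory layout above; the concrete paths and attribute values are illustrative assumptions, not documented defaults:

  # environment.rb (sketch; /judge stands for (judge-home))
  RAILS_ROOT  = '/path/to/web-interface'   # root of the Rails application (assumed path)
  GRADER_ROOT = '/judge/scripts/config'    # per the GRADER_ROOT note above (assumed path)

  # env_grading.rb (sketch)
  Grader::Initializer.run do |config|
    config.problems_dir    = '/judge/ev'       # test data for this environment
    config.user_result_dir = '/judge/result'   # where grading results are stored
    config.talkative       = true              # assumed value: print status while running
    config.logging         = true
    config.log_dir         = '/judge/log'
    # config.comment_report_style and the remaining attributes are set the same way.
  end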

Test data

The judge system keeps the grading information for each problem in a directory of the same name inside config.problems_dir (usually (judge-home)/ev or (judge-home)/ev-exam). This article documents what each problem directory looks like.

Importing test data

In normal usage, where every test case has the same constraints and each corresponds to a unique test run, you can use a script to import problem test data in a simpler format: each test case i consists of two files, i.in and i.sol, all stored in some directory. For example, you may have 1.in, 1.sol, 2.in, 2.sol, and so on.
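
For instance, a problem with three test cases kept in a hypothetical directory raw/mproblem would look like this:

  raw/mproblem/
    1.in   1.sol
    2.in   2.sol
    3.in   3.sol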

Then you invoke the script called import_problem in the test data directory (e.g., (judge-home)/ev). The following is the script's usage message:

using: import_problem name dir num check [options]
   where: name = problem_name
          dir = importing testcase directory
          num = number of testcases
          check = check script, which can be 
                   'integer', 'text' (for standard script), 
                   path_to_your_script, or
                   'wrapper:(path_to_your_wrapped_script)'
   options: -t time-limit (in seconds)
            -m memory-limit (in megabytes)
What it does:
  * creates a directory for a problem in the current directory,
  * copies testdata in the old format and creates a standard testcase config file
  * copies a check script for grading
  * creates a test_request template in the current directory + '/test_request'
For wrapped check scripts, see the comment in templates/check_wrapper for information.
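
For example, to import a problem named mproblem with 10 test cases from the hypothetical raw/mproblem directory above, using the standard integer checker with a 1-second time limit and a 16-megabyte memory limit, the invocation might look like this (the location of import_problem in the scripts directory is an assumption):

  cd (judge-home)/ev
  (judge-home)/scripts/import_problem mproblem ../raw/mproblem 10 integer -t 1 -m 16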

Note that you have to specify how the submission's output will be graded by specifying a check script (see this for the specification). Standard checkers are available: integer and text. To decouple the development of a check script from the grading system, one can develop a checker with no direct access to the problem configuration and then wrap it using a wrapper check script. The wrapper check script calls the real script with the following arguments

 (real-script) <lang> <test-num> <in-file> <out-file> <ans-file> <full-score>

and the wrapped script should report to the standard output in the format specified in the spec above.
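
As an illustration, here is a minimal sketch of a wrapped check script in Ruby. It assumes a token-by-token comparison that awards either the full score or zero; the exact report format expected on standard output is defined in the check script specification, so the output lines below are placeholders only:

  #!/usr/bin/env ruby
  # Invoked by the wrapper as:
  #   real-script <lang> <test-num> <in-file> <out-file> <ans-file> <full-score>
  lang, test_num, in_file, out_file, ans_file, full_score = ARGV

  # Compare the submission's output to the expected answer, token by token.
  out_tokens = File.read(out_file).split
  ans_tokens = File.read(ans_file).split

  if out_tokens == ans_tokens
    puts "Correct"     # placeholder verdict; see the spec for the actual format
    puts full_score
  else
    puts "Incorrect"   # placeholder verdict
    puts 0
  end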