oioioi.programs.handlers

Module Contents

Functions

_make_filename(env, base_name)

Create a filename in the filetracker for storing outputs from filetracker jobs.

_skip_on_compilation_error(fn)

A decorator which skips the decorated function if the compilation fails.

compile(env, **kwargs)

Compiles the source file on the remote machine and returns the name of the executable.

compile_end(env, **kwargs)

_override_tests_limits(language, tests)

Given a language and a list of Test objects, returns a dictionary of memory and time limits.

collect_tests(env, **kwargs)

Collects tests from the database and converts them to evaluation environments.

run_tests(env[, kind])

Runs tests and saves their results into the environment

run_tests_end(env, **kwargs)

grade_tests(env, **kwargs)

Grades tests using a scoring function.

grade_groups(env, **kwargs)

Grades ungraded groups using an aggregating function.

grade_submission(env[, kind])

Grades submission with specified kind of tests on a Job layer.

_make_base_report(env, submission, kind)

Helper function making: SubmissionReport, ScoreReport, CompilationReport.

make_report(env[, kind, save_scores])

Builds entities for tests results in a database.

delete_executable(env, **kwargs)

fill_outfile_in_existing_test_reports(env, **kwargs)

Fill output files into existing test reports that are not directly related to the present submission.

insert_existing_submission_link(env, src_submission, ...)

Add a comment to an existing submission with a link to the submission view of the present submission.

Attributes

oioioi.programs.handlers.logger
oioioi.programs.handlers.COMPILE_TASK_PRIORITY = 200
oioioi.programs.handlers.EXAMPLE_TEST_TASK_PRIORITY = 300
oioioi.programs.handlers.TESTRUN_TEST_TASK_PRIORITY = 300
oioioi.programs.handlers.DEFAULT_TEST_TASK_PRIORITY = 100
oioioi.programs.handlers._make_filename(env, base_name)

Create a filename in the filetracker for storing outputs from filetracker jobs.

By default the path is of the form /eval/<contest_id>/<submission_id>/<job_id>-<base_name>, with fields absent from env skipped. The folder can also be specified in env['eval_dir'].
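
A minimal sketch of how such a path could be assembled (illustrative only; the helper name and exact handling of missing keys are assumptions, not the module's actual code):

    def make_output_path(env, base_name):
        # Illustrative sketch: build
        # /eval/<contest_id>/<submission_id>/<job_id>-<base_name>,
        # skipping fields that are absent from env.
        folder = env.get('eval_dir')
        if folder is None:
            present = [str(env[key]) for key in ('contest_id', 'submission_id') if key in env]
            folder = '/'.join(['/eval'] + present)
        if 'job_id' in env:
            base_name = '%s-%s' % (env['job_id'], base_name)
        return '%s/%s' % (folder, base_name)

    print(make_output_path(
        {'contest_id': 'c1', 'submission_id': 42, 'job_id': 7},
        'compile-out',
    ))  # prints: /eval/c1/42/7-compile-out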

oioioi.programs.handlers._skip_on_compilation_error(fn)

A decorator which skips the decorated function if the compilation fails.

This is checked by looking for OK in env['compilation_result']. If the key is not present, it is assumed that the compilation succeeded.
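
A minimal sketch of a decorator implementing this contract (assuming handlers take env as their first argument, as they do throughout this module):

    import functools

    def skip_on_compilation_error(fn):
        # Run fn only when compilation succeeded; a missing
        # 'compilation_result' key is treated as success.
        @functools.wraps(fn)
        def wrapped(env, **kwargs):
            if env.get('compilation_result', 'OK') != 'OK':
                return env
            return fn(env, **kwargs)
        return wrapped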

oioioi.programs.handlers.compile(env, **kwargs)

Compiles the source file on the remote machine and returns the name of the executable that may be run.

USES
  • env['source_file'] - source file name

  • env['language'] - if env['compiler'] is not set and env['language'] is, the compiler is set to 'default-' + env['language'].

  • the entire env is also passed to the compile job

PRODUCES
  • env['compilation_result'] - may be OK if the file compiled successfully or CE otherwise.

  • env['compiled_file'] - exists if and only if env['compilation_result'] is set to OK and contains the path to the compiled binary

  • env['compilation_message'] - contains compiler stdout and stderr

  • env['exec_info'] - information on how to execute the compiled file
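
For example, a downstream handler could branch on these keys as follows (a hypothetical snippet, not part of the module):

    def after_compile(env, **kwargs):
        if env['compilation_result'] != 'OK':
            # Compilation failed; surface the compiler output.
            print('CE:', env['compilation_message'])
            return env
        # On OK, 'compiled_file' is guaranteed to be present.
        print('compiled binary at', env['compiled_file'])
        return env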

oioioi.programs.handlers.compile_end(env, **kwargs)
oioioi.programs.handlers._override_tests_limits(language, tests)

Given a language and a list of Test objects, returns a dictionary of memory and time limits, keyed by each test's pk. If a limits override for the given language is defined in the database, the override's values are used; otherwise, the limits are the same as the test's initial limits.
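
The returned mapping might look roughly like this (the inner key names and units are assumptions for illustration; only the keying by test pk is stated above):

    limits = {
        # test pk -> limits, overridden for the language if an
        # override exists in the database (values illustrative)
        17: {'time_limit': 1000, 'memory_limit': 65536},
        18: {'time_limit': 2000, 'memory_limit': 131072},
    }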

oioioi.programs.handlers.collect_tests(env, **kwargs)

Collects tests from the database and converts them to evaluation environments.

Used environ keys:
  • problem_instance_id

  • language

  • extra_args

  • is_rejudge

Produced environ keys:
  • tests: a dictionary mapping test names to test envs
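
The produced mapping might look roughly like this (the per-test keys shown are assumptions, loosely mirroring the kinds and limits discussed around run_tests below):

    env = {}
    env['tests'] = {
        # test name -> environment passed to the exec job
        '1a': {
            'kind': 'EXAMPLE',        # run_tests can filter on this
            'exec_time_limit': 1000,  # hypothetical key, ms
            'exec_mem_limit': 65536,  # hypothetical key, KiB
        },
        # ... one entry per collected test
    }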

oioioi.programs.handlers.run_tests(env, kind=None, **kwargs)

Runs tests and saves their results into the environment

If kind is specified, only tests with the given kind will be run.

Used environ keys:
  • tests: this should be a dictionary, mapping each test name to the environment to pass to the exec job

  • unsafe_exec: set to True if we want to use only ulimit() to limit the executable file resources, False otherwise (see the documentation for the unsafe-exec job for more information)

  • compiled_file: the compiled file which will be tested

  • exec_info: information on how to execute compiled_file

  • check_outputs: set to True if the output should be verified

  • checker: if present, it should be the filetracker path of the binary used as the output checker

  • save_outputs: set to True if and only if each test result should have its output file attached

  • sioworkers_extra_args: a dict mapping kinds to additional keyword arguments (kwargs) passed to oioioi.sioworkers.jobs.run_sioworkers_jobs

Produced environ keys:
  • test_results: a dictionary, mapping test names into dictionaries with the following keys:

    result_code - test status: OK, WA, RE, …

    result_string - detailed supervisor information (for example, where the required and returned outputs differ)

    time_used - total time used, in milliseconds

    mem_used - memory usage, in KiB

    num_syscalls - number of syscalls performed

    out_file - filetracker path to the output file (only if env['save_outputs'] was set)

    If the dictionary already exists, new test results are appended.
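
Putting these keys together, a single entry of env['test_results'] might look like this (values are illustrative):

    env = {}
    env['test_results'] = {
        '1a': {
            'result_code': 'OK',
            'result_string': '',
            'time_used': 312,     # milliseconds
            'mem_used': 2048,     # KiB
            'num_syscalls': 17,
            # 'out_file' appears only when env['save_outputs'] was set
        },
    }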

oioioi.programs.handlers.run_tests_end(env, **kwargs)
oioioi.programs.handlers.grade_tests(env, **kwargs)

Grades tests using a scoring function.

The env['test_scorer'], which is used by this Handler, should be a path to a function which gets a test definition (e.g. an env['tests'][test_name] dict) and a test run result (e.g. an env['test_results'][test_name] dict) and returns a score (an instance of some subclass of ScoreValue) and a status.
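
A minimal scorer matching this contract might look as follows (a sketch: IntegerScore is used as a concrete ScoreValue subclass, and the 'max_score' and 'result_code' keys are assumptions about the dicts' contents):

    from oioioi.contests.scores import IntegerScore  # a ScoreValue subclass

    def all_or_nothing_scorer(test, result):
        # Full score when the test passed, zero otherwise;
        # returns (score, status) as described above.
        if result['result_code'] == 'OK':
            return IntegerScore(test['max_score']), 'OK'
        return IntegerScore(0), result['result_code']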

Used environ keys:
  • tests

  • test_results

  • test_scorer

Produced environ keys:
  • score, max_score and status keys in every entry of env['test_results']

oioioi.programs.handlers.grade_groups(env, **kwargs)

Grades ungraded groups using an aggregating function.

The group_scorer key in env should contain the path to a function which gets a list of test results (without their names) and returns an aggregated score (an instance of some subclass of ScoreValue).
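
For example, a worst-case aggregator matching this contract could be (a sketch; it assumes each result carries the score key set by grade_tests and that ScoreValue instances are orderable):

    def min_group_scorer(test_results):
        # The group is only as good as its worst test: return
        # the minimum score among the group's test results.
        return min(result['score'] for result in test_results)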

Used environ keys:
  • tests

  • test_results

  • group_scorer

Produced environ keys:
  • score, max_score and status keys in env['group_results']

oioioi.programs.handlers.grade_submission(env, kind='NORMAL', **kwargs)

Grades submission with specified kind of tests on a Job layer.

If kind is None, all tests will be graded.

This Handler aggregates the score from graded groups and derives the submission status from the test results.
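
A score_aggregator consistent with the keys listed below might look like this (a hypothetical sketch: the exact signature is not documented here, and scores are treated as plain numbers rather than ScoreValue instances for brevity):

    def sum_score_aggregator(group_results):
        # Sum group scores; report OK only if every group passed.
        score = sum(g['score'] for g in group_results.values())
        max_score = sum(g['max_score'] for g in group_results.values())
        ok = all(g['status'] == 'OK' for g in group_results.values())
        return score, max_score, ('OK' if ok else 'WA')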

Used environ keys:
  • group_results

  • test_results

  • score_aggregator

Produced environ keys:
  • status

  • score

  • max_score

oioioi.programs.handlers._make_base_report(env, submission, kind)

Helper function making: SubmissionReport, ScoreReport, CompilationReport.

Used environ keys:
  • status

  • score

  • compilation_result

  • compilation_message

  • submission_id

  • max_score

Alters environ by adding:

Returns: tuple (submission, submission_report)

oioioi.programs.handlers.make_report(env, kind='NORMAL', save_scores=True, **kwargs)

Builds entities for test results in the database.

Used environ keys:
  • tests

  • test_results

  • group_results

  • status

  • score

  • compilation_result

  • compilation_message

  • submission_id

Produced environ keys:
oioioi.programs.handlers.delete_executable(env, **kwargs)
oioioi.programs.handlers.fill_outfile_in_existing_test_reports(env, **kwargs)

Fill output files into existing test reports that are not directly related to the present submission. Also change the status of the UserOutGenStatus object to finished.

Used environ keys:
  • extra_args: a dictionary with the submission_report object

  • test_results

oioioi.programs.handlers.insert_existing_submission_link(env, src_submission, ...)

Add a comment to an existing submission with a link to the submission view of the present submission.

Used environ keys:
  • extra_args: a dictionary with the submission_report object

  • contest_id

  • submission_id