:py:mod:`oioioi.programs.handlers`
==================================

.. py:module:: oioioi.programs.handlers


Module Contents
---------------


Functions
~~~~~~~~~

.. autoapisummary::

   oioioi.programs.handlers._make_filename
   oioioi.programs.handlers._skip_on_compilation_error
   oioioi.programs.handlers.compile
   oioioi.programs.handlers.compile_end
   oioioi.programs.handlers._override_tests_limits
   oioioi.programs.handlers.collect_tests
   oioioi.programs.handlers.run_tests
   oioioi.programs.handlers.run_tests_end
   oioioi.programs.handlers.grade_tests
   oioioi.programs.handlers.grade_groups
   oioioi.programs.handlers.grade_submission
   oioioi.programs.handlers._make_base_report
   oioioi.programs.handlers.make_report
   oioioi.programs.handlers.delete_executable
   oioioi.programs.handlers.fill_outfile_in_existing_test_reports
   oioioi.programs.handlers.insert_existing_submission_link


Attributes
~~~~~~~~~~

.. autoapisummary::

   oioioi.programs.handlers.logger
   oioioi.programs.handlers.COMPILE_TASK_PRIORITY
   oioioi.programs.handlers.EXAMPLE_TEST_TASK_PRIORITY
   oioioi.programs.handlers.TESTRUN_TEST_TASK_PRIORITY
   oioioi.programs.handlers.DEFAULT_TEST_TASK_PRIORITY


.. py:data:: logger


.. py:data:: COMPILE_TASK_PRIORITY
   :annotation: = 200


.. py:data:: EXAMPLE_TEST_TASK_PRIORITY
   :annotation: = 300


.. py:data:: TESTRUN_TEST_TASK_PRIORITY
   :annotation: = 300


.. py:data:: DEFAULT_TEST_TASK_PRIORITY
   :annotation: = 100


.. py:function:: _make_filename(env, base_name)

   Creates a filename in the filetracker for storing outputs from
   filetracker jobs.

   By default the path is of the form ``/eval///-`` with fields absent
   from ``env`` skipped. The folder can also be specified in
   ``env['eval_dir']``.


.. py:function:: _skip_on_compilation_error(fn)

   A decorator which skips the decorated function if the compilation failed.

   This is checked by looking for ``OK`` in ``env['compilation_result']``.
   If the key is not present, the compilation is assumed to have succeeded.


.. py:function:: compile(env, **kwargs)

   Compiles the source file on a remote machine and returns the name of the
   executable that may be run.

   USES
       * ``env['source_file']`` - source file name
       * ``env['language']`` - if ``env['compiler']`` is not set and
         ``env['language']`` is, the compiler is set to
         ``'default-' + env['language']``.
       * the entire ``env`` is also passed to the ``compile`` job

   PRODUCES
       * ``env['compilation_result']`` - ``OK`` if the file compiled
         successfully, ``CE`` otherwise
       * ``env['compiled_file']`` - exists if and only if
         ``env['compilation_result']`` is ``OK``; contains the path of the
         compiled binary
       * ``env['compilation_message']`` - contains the compiler's stdout
         and stderr
       * ``env['exec_info']`` - information on how to execute the compiled
         file


.. py:function:: compile_end(env, **kwargs)


.. py:function:: _override_tests_limits(language, tests)

   Given a language and a list of Test objects, returns a dictionary of
   memory and time limits keyed by each test's pk.

   If a limit override for the given language is defined in the database,
   the limits come from that override; otherwise the initial limits are
   kept.


.. py:function:: collect_tests(env, **kwargs)

   Collects tests from the database and converts them to evaluation
   environments.

   Used ``environ`` keys:
       * ``problem_instance_id``
       * ``language``
       * ``extra_args``
       * ``is_rejudge``

   Produced ``environ`` keys:
       * ``tests``: a dictionary mapping test names to test envs
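The ``_skip_on_compilation_error`` decorator documented above can be pictured
as a thin wrapper around a handler: it short-circuits the handler whenever an
earlier ``compile`` step reported anything other than ``OK``. The snippet
below is a minimal sketch of that pattern, not the actual implementation; it
only assumes the behaviour stated in the docstring (a missing
``compilation_result`` key counts as success).

.. code-block:: python

    import functools

    def skip_on_compilation_error(fn):
        """Return ``env`` untouched when compilation did not end with OK."""

        @functools.wraps(fn)
        def decorated(env, **kwargs):
            # A missing key is treated as a successful compilation.
            if env.get('compilation_result', 'OK') != 'OK':
                return env
            return fn(env, **kwargs)

        return decorated

Handlers that only make sense for a successfully compiled submission (for
example test collection or test execution) can then be decorated with it, so
a compilation error flows straight through to report generation.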
.. py:function:: run_tests(env, kind=None, **kwargs)

   Runs tests and saves their results into the environment.

   If ``kind`` is specified, only tests of the given kind will be run.

   Used ``environ`` keys:
       * ``tests``: a dictionary mapping each test name to the environment
         to pass to the ``exec`` job
       * ``unsafe_exec``: set to ``True`` if only ``ulimit()`` should be
         used to limit the executable's resources, ``False`` otherwise (see
         the documentation of the ``unsafe-exec`` job for more information),
       * ``compiled_file``: the compiled file which will be tested,
       * ``exec_info``: information on how to execute ``compiled_file``
       * ``check_outputs``: set to ``True`` if the output should be verified
       * ``checker``: if present, it should be the filetracker path of the
         binary used as the output checker,
       * ``save_outputs``: set to ``True`` if and only if each test result
         should have its output file attached.
       * ``sioworkers_extra_args``: dict mapping kinds to additional
         arguments passed to
         :func:`oioioi.sioworkers.jobs.run_sioworkers_jobs` (kwargs).

   Produced ``environ`` keys:
       * ``test_results``: a dictionary mapping test names to dictionaries
         with the following keys:

           ``result_code``
               test status: OK, WA, RE, ...
           ``result_string``
               detailed supervisor information (for example, where the
               required and returned outputs differ)
           ``time_used``
               total time used, in milliseconds
           ``mem_used``
               memory usage, in KiB
           ``num_syscalls``
               number of syscalls performed
           ``out_file``
               filetracker path to the output file (only if
               ``env['save_outputs']`` was set)

         If the dictionary already exists, new test results are appended.


.. py:function:: run_tests_end(env, **kwargs)


.. py:function:: grade_tests(env, **kwargs)

   Grades tests using a scoring function.

   The ``env['test_scorer']``, which is used by this ``Handler``, should be
   a path to a function which gets a test definition (e.g. an
   ``env['tests'][test_name]`` dict) and the test run result (e.g. an
   ``env['test_results'][test_name]`` dict) and returns a score (an instance
   of some subclass of :class:`~oioioi.contests.scores.ScoreValue`) and a
   status.

   Used ``environ`` keys:
       * ``tests``
       * ``test_results``
       * ``test_scorer``

   Produced ``environ`` keys:
       * ``score``, ``max_score`` and ``status`` keys in
         ``env['test_results']``
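To make the structure described above concrete, this is what a single entry
of ``env['test_results']`` could look like after ``run_tests`` and
``grade_tests`` have run. The keys follow the docstrings above; the test name
``'0a'`` and all values are invented purely for illustration.

.. code-block:: python

    env['test_results'] = {
        '0a': {
            'result_code': 'OK',     # test status: OK, WA, RE, ...
            'result_string': 'ok',   # detailed supervisor information
            'time_used': 153,        # total time used, in milliseconds
            'mem_used': 2048,        # memory usage, in KiB
            'num_syscalls': 42,      # number of syscalls performed
            # present only when env['save_outputs'] was set:
            'out_file': '/eval/.../0a.out',
            # added later by grade_tests; shown here as plain numbers,
            # while the scorer actually returns ScoreValue instances:
            'score': 10,
            'max_score': 10,
            'status': 'OK',
        },
    }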
.. py:function:: grade_groups(env, **kwargs)

   Grades ungraded groups using an aggregating function.

   The ``group_scorer`` key in ``env`` should contain the path to a function
   which gets a list of test results (without their names) and returns an
   aggregated score (an instance of some subclass of
   :class:`~oioioi.contests.scores.ScoreValue`).

   Used ``environ`` keys:
       * ``tests``
       * ``test_results``
       * ``group_scorer``

   Produced ``environ`` keys:
       * ``score``, ``max_score`` and ``status`` keys in
         ``env['group_results']``


.. py:function:: grade_submission(env, kind='NORMAL', **kwargs)

   Grades the submission on the `Job` layer, using tests of the specified
   kind.

   If ``kind`` is ``None``, all tests will be graded.

   This `Handler` aggregates the scores of the graded groups and determines
   the submission status from the test results.

   Used ``environ`` keys:
       * ``group_results``
       * ``test_results``
       * ``score_aggregator``

   Produced ``environ`` keys:
       * ``status``
       * ``score``
       * ``max_score``


.. py:function:: _make_base_report(env, submission, kind)

   Helper function making a SubmissionReport, a ScoreReport and a
   CompilationReport.

   Used ``environ`` keys:
       * ``status``
       * ``score``
       * ``compilation_result``
       * ``compilation_message``
       * ``submission_id``
       * ``max_score``

   Alters ``environ`` by adding:
       * ``report_id``: id of the produced
         :class:`~oioioi.contests.models.SubmissionReport`

   Returns: tuple (submission, submission_report)


.. py:function:: make_report(env, kind='NORMAL', save_scores=True, **kwargs)

   Builds entities for the test results in the database.

   Used ``environ`` keys:
       * ``tests``
       * ``test_results``
       * ``group_results``
       * ``status``
       * ``score``
       * ``compilation_result``
       * ``compilation_message``
       * ``submission_id``

   Produced ``environ`` keys:
       * ``report_id``: id of the produced
         :class:`~oioioi.contests.models.SubmissionReport`


.. py:function:: delete_executable(env, **kwargs)


.. py:function:: fill_outfile_in_existing_test_reports(env, **kwargs)

   Fills output files into existing test reports that are not directly
   related to the present submission. Also changes the status of the
   ``UserOutGenStatus`` object to finished.

   Used ``environ`` keys:
       * ``extra_args`` dictionary with the ``submission_report`` object
       * ``test_results``


.. py:function:: insert_existing_submission_link(env, src_submission, **kwargs)

   Adds a comment to an existing submission with a link to the submission
   view of the present submission.

   Used ``environ`` keys:
       * ``extra_args`` dictionary with the ``submission_report`` object
       * ``contest_id``
       * ``submission_id``
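All of the functions documented here share one calling convention: they take
the evaluation environment dictionary (plus optional keyword arguments),
read or extend it, and return it so the next handler in the chain can run.
Below is a hedged sketch of a custom handler written against that convention;
the name ``log_final_verdict`` is hypothetical and the only keys it relies on
are the ``status``, ``score`` and ``submission_id`` keys described above.

.. code-block:: python

    import logging

    logger = logging.getLogger(__name__)

    def log_final_verdict(env, **kwargs):
        """Hypothetical extra handler logging the final grading outcome.

        Like the handlers in this module, it receives the evaluation
        ``env`` dict and returns it unchanged for the next handler.
        """
        logger.info(
            'submission %s graded: status=%s score=%s',
            env.get('submission_id'),
            env.get('status'),
            env.get('score'),
        )
        return env

Such a function could, for example, be placed after ``make_report`` in an
evaluation chain; how handlers are actually registered is decided by the
contest and problem controllers and is outside the scope of this module.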