A task is an atomic piece of work executed by a worker. There are two basic types of tasks:
- **Perform internal operation**. Internal operations comprise commands typically related to file/directory maintenance and other evaluation management. A few important examples:
- Create/delete/move/rename file/directory
- (un)zip/tar/gzip/bzip file(s)
- fetch a file from the file repository (either from the worker cache or downloaded via HTTP GET).
Even though the internal operations may be handled by external executables (`mv`, `tar`, `pkzip`, `wget`, ...), it might be better to keep them inside the worker, as this simplifies the operations and improves their portability among platforms. Furthermore, it is quite easy to implement them using common libraries (e.g., _zlib_, _curl_).
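The fetch operation mentioned above can be sketched as follows. This is a minimal illustration of the cache-then-download idea, not the worker's actual implementation; the function name `fetch_file` and the cache layout are assumptions:

```python
import shutil
import urllib.request
from pathlib import Path

def fetch_file(name, cache_dir, base_url, dest):
    """Hypothetical sketch of the internal fetch operation: use the
    worker cache when possible, otherwise download via HTTP GET."""
    cached = Path(cache_dir) / name
    if cached.is_file():
        # Cache hit: just copy the file into the job's directory.
        shutil.copy(cached, dest)
    else:
        # Cache miss: download from the file repository and remember it.
        urllib.request.urlretrieve(f"{base_url}/{name}", dest)
        Path(cache_dir).mkdir(parents=True, exist_ok=True)
        shutil.copy(dest, cached)
    return dest
```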
A task may be marked as essential; in such a case, its failure immediately causes termination of the whole job. A typical example is a program compilation task: without a successful compilation the job is obviously broken and every test would fail.
### Internal tasks
- **Archivate task** can be used to pack and compress a directory. The calling command is `archivate`. It requires two arguments:
### External tasks
External tasks are arbitrary executables, typically run inside the isolate sandbox (with given parameters); the worker waits until they finish. The exit code determines whether the task succeeded (0) or failed (anything else).
- **stdin** -- can be configured to read from an existing file or from `/dev/null`.
- **stdout** and **stderr** -- can be individually redirected to a file or discarded. If these output options are specified, the output files can be uploaded with the results by copying them into the result directory.
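The behavior described above can be sketched like this. It is a simplified illustration without the sandbox; the function and parameter names are assumptions, not the worker's API:

```python
import subprocess

def run_external_task(cmd, stdin_file=None, stdout_file=None, stderr_file=None):
    """Run an external task, wait for it, and judge success by exit code.
    stdin defaults to the null device; stdout/stderr may be redirected
    to files or discarded."""
    stdin = open(stdin_file, "rb") if stdin_file else subprocess.DEVNULL
    stdout = open(stdout_file, "wb") if stdout_file else subprocess.DEVNULL
    stderr = open(stderr_file, "wb") if stderr_file else subprocess.DEVNULL
    try:
        result = subprocess.run(cmd, stdin=stdin, stdout=stdout, stderr=stderr)
    finally:
        for f in (stdin, stdout, stderr):
            if hasattr(f, "close"):
                f.close()
    # Exit code 0 means the task succeeded, anything else is a failure.
    return result.returncode == 0
```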
The task results (exit code, time and memory consumption, etc.) are saved into the results file.
Judges are treated as normal external commands, so there is no special task type for them. Binaries are installed alongside the worker executable in standard directories (on both Linux and Windows systems).
Judges should be used for comparison of output files from execution tasks with sample outputs fetched from the fileserver. The result of this comparison should be at least the information whether the files are the same or not; an extension of this is a percentage result based on the correctness of the given files. All judge results have to be printed to standard output.
All bundled judges are adopted from the old Codex with only very small modifications. The ReCodEx judges base directory is stored in the `${JUDGES_DIR}` variable, which can be used in the job configuration file.
#### Judges interface
For future extensibility it is critical that judges share a common interface of invocation and return values.
- Parameters: There are two mandatory positional parameters, which have to be the files for comparison
- Results:
- _comparison OK_
- exitcode: 0
- stdout: there is one line with a double value which should be the percentage of correctness of the two given files
- _error during execution_
- exitcode: 1
- stderr: there should be a description of the error
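A minimal judge honoring this interface could look like the following sketch. The strict-equality scoring is just an illustration, not one of the bundled judges:

```python
import sys

def judge(expected_path, actual_path):
    """Compare two files per the judge interface described above:
    exit code 0 plus a score on stdout on success, exit code 1 plus
    a message on stderr on error."""
    try:
        with open(expected_path) as a, open(actual_path) as b:
            same = a.read() == b.read()
    except OSError as e:
        print(f"error: {e}", file=sys.stderr)
        return 1                     # error during execution
    print("1.0" if same else "0.0")  # one line with a double value
    return 0                         # comparison finished OK

if __name__ == "__main__":
    # Two mandatory positional parameters: the files to compare.
    sys.exit(judge(sys.argv[1], sys.argv[2]))
```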
Below is a list of judges which are bundled with the ReCodEx project and comply with the above rules:
- if `outputFile` is omitted, standard output is used instead.
- if both files are omitted, the application uses standard input and output.
- **recodex-judge-shuffle** is for judging results with set semantics, where ordering is not important. This judge compares two text files and returns 0 if they match (and 1 otherwise). The two files are compared with no regard for whitespace (whitespace acts just as a token delimiter).
Here is the list with descriptions of allowed options. Mandatory items are bold, optional ones italic.
- **bin** -- the binary itself (full path of external command or name of internal task)
- _args_ -- list of arguments which will be sent into execution unit
- _test-id_ -- ID of the test this task is part of -- must be specified for tasks which the particular test's result depends on
- _type_ -- type of the task, can be omitted, default value is _inner_ -- possible values are: _inner_, _initiation_, _execution_, _evaluation_. Each logical test must contain zero or more _initiation_ tasks, at least one task of type _execution_ (exceeded time and memory limits are presented to the user) and exactly one of type _evaluation_ (typically a judge). The _inner_ task type is mainly for internal tasks, but can be used for external tasks which are not part of any test.
- _sandbox_ -- wrapper for external tasks which will run in a sandbox; if defined, the task is automatically external
- **name** -- name of used sandbox
- _stdin_ -- file to which standard input will be redirected, can be omitted
- _disk-files_ -- number of files which can be opened
- _environ-variable_ -- wrapper for a map of environment variables, union with the default worker configuration
- _chdir_ -- this will be the working directory of the executed application
- _bound-directories_ -- list of structures representing directories which will be visible inside the sandbox, union with the default worker configuration. Contains 3 suboptions: **src** -- source pointing to an actual system directory, **dst** -- destination inside the sandbox which can have its own filesystem binding, and **mode** -- determines the connection mode of the specified directory, one of the values: RW (allow read-write access), NOEXEC (disallow execution of binaries), FS (mount a device-less filesystem like `/proc`), MAYBE (silently ignore the rule if the bound directory does not exist), DEV (allow access to character and block devices).
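Putting the options above together, a single task entry might look like the following sketch. The values, the exact nesting of the `limits` section, and the `${EVAL_DIR}` variable are illustrative assumptions, not taken from a real job configuration:

```yaml
# Hypothetical task entry -- nesting and values are assumptions
- task-id: "run_test_01"
  test-id: "test_01"
  type: "execution"
  bin: "${EVAL_DIR}/solution"     # ${EVAL_DIR} is an assumed variable name
  args:
    - "input.txt"
  sandbox:
    name: "isolate"
    stdin: "${EVAL_DIR}/input.txt"
    limits:
      - chdir: "${EVAL_DIR}"
        disk-files: 16
        environ-variable:
          PATH: "/usr/bin"
        bound-directories:
          - src: "/tmp/data"
            dst: "/evaluate/data"
            mode: RW
```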
### Configuration example
## Directories and Files
For each job execution a unique directory structure is created. The job is not restricted to using only the specified directories (tasks can do whatever is allowed on the system), but it is advised to use them inside a job. The DEFAULT variable represents the root of the working directory for all workers. This directory is specified in the worker's configuration and can be the same for multiple instances on the same server. No variable of this name is defined for use in the job YAML configuration; it is used just for this example.
List of temporary files for job execution:
List of items from the results file. Mandatory items are bold, optional ones italic.
- _error_message_ -- present only if the whole execution failed and none of the tasks were executed
- **results** -- list of task results
- **task-id** -- unique identification of task in scope of this job
- **status** -- three states: OK (task execution was successful; the sandboxed program could have been killed, but the sandbox exited normally), FAILED (error while executing the task), SKIPPED (task execution was skipped)
- _error_message_ -- defined only in internal tasks on failure
- _sandbox_results_ -- if defined, then this task was external and was run in a sandbox
- **exitcode** -- integer which the executed program returned on exit
- **wall-time** -- wall time in seconds
- **memory** -- how much memory program used in kilobytes
- **max-rss** -- maximum resident set size used in kilobytes
- **status** -- two letter status code: OK (success), RE (runtime error), SG (program died on signal), TO (timed out), XX (internal error of the sandbox)
- **exitsig** -- description of exit signal
- **killed** -- boolean determining if program exited correctly or was killed
- **message** -- status message on failure
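The result items above can be consumed, for instance, by a small summarizer like this sketch (the field names follow the list above; `summarize` itself is a hypothetical helper, not part of ReCodEx):

```python
def summarize(results):
    """Count task results by status (OK / FAILED / SKIPPED) and collect
    the sandbox status (OK, RE, SG, TO, XX) of every sandboxed task."""
    counts = {"OK": 0, "FAILED": 0, "SKIPPED": 0}
    sandbox_statuses = {}
    for task in results:
        counts[task["status"]] += 1
        sandbox = task.get("sandbox_results")
        if sandbox is not None:  # present only when the task ran in a sandbox
            sandbox_statuses[task["task-id"]] = sandbox["status"]
    return {"counts": counts, "sandbox": sandbox_statuses}
```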
## Scoring
Every assignment consists of tasks. Only some tasks, however, are part of the evaluation; those tasks are grouped into **tests**. Each task may have an assigned _test-id_ parameter, as described above. Every test must consist of at least one task: _evaluation_ by a judge. There can be zero or more tasks of the _initiation_ and _execution_ types, while there is exactly one _evaluation_ task. _Execution_ tasks retrieve information about the execution, such as elapsed time and memory consumed; the _evaluation_ task provides a score -- a float between 0 and 1. _Initiation_ tasks are used, for example, for input data generation.
The total resulting score of the assignment submission is then calculated according to a supplied score config (described below). The total score is also a float between 0 and 1. This number is then multiplied by the maximum number of points awarded for the assignment by the teacher assigning the exercise -- not the exercise author.
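The final calculation is simple arithmetic. As a sketch, assuming a hypothetical uniform-average score config (real score configs are described below and may weight tests differently):

```python
def assignment_points(test_scores, max_points):
    """Average the per-test judge scores (each a float in [0, 1]) into a
    total score and scale it by the teacher-assigned point maximum.
    The uniform average is an assumed score config, not the only one."""
    total = sum(test_scores.values()) / len(test_scores)  # still in [0, 1]
    return total * max_points

# e.g. two perfect tests and one half-correct test, 10 points maximum:
# assignment_points({"t1": 1.0, "t2": 1.0, "t3": 0.5}, 10)
```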
checks if the student's implementation of core OS mechanisms is correct. These
tests are compiled into the kernel.
Each of these tests could be represented by a stage that compiles the kernel
with the test and then runs it against different configurations of `msim`.