Commit b25c4f5c6a (parent db30bf06e4) on branch master, by Teyras, 8 years ago

@@ -897,21 +897,21 @@ submission:
```
Basically, this means that the job _hello-world-job_ needs to be run on workers
that belong to the `group_1` hardware group. Reference files are downloaded
from the default location configured in the API (such as
`http://localhost:9999/exercises`) unless stated explicitly otherwise. The job
execution log will not be saved to the result archive.

Next, the tasks have to be constructed under the _tasks_ section. In this demo
job, every task depends only on the previous one. The first task already has
the input file _source.c_ (if submitted by the user) available in its working
directory, so it just calls GCC. Compilation is run in a sandbox like any other
external program and should have relaxed time and memory limits. In this
scenario, worker defaults are used. If compilation fails, the whole job is
immediately terminated (because the _fatal-failure_ bit is set). Because the
_bound-directories_ option in the sandbox limits section is mostly shared
between all tasks, it can be set in the worker configuration instead of the job
configuration (suppose this for the following tasks; a sketch follows the
snippet below). For the configuration of workers please contact your
administrator.

```{.yml}
- task-id: "compilation"
@@ -934,24 +934,24 @@ configuration of workers please contact your administrator.
mode: RW
```
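
As a purely illustrative sketch of the last point, a worker configuration
usually carries its own limits section where such directory bindings can be
declared once for all jobs. The key names below are assumptions that mirror the
job snippets in this walkthrough, not an authoritative worker configuration;
the actual file on your worker may look different.

```{.yml}
# Hypothetical excerpt of a worker configuration (assumed keys, illustrative
# values): a directory binding shared by every task run on this worker.
limits:
    bound-directories:
        - src: "/usr/local/share/testing-data"  # path on the worker machine
          dst: "data"                           # name visible inside the sandbox
          mode: RO                              # read-only for all tasks
```
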
The compiled program is executed with time and memory limits set and its
standard output redirected to a file. This task depends on the _compilation_
task, because the program cannot be executed without being compiled first. It
is important to mark this task with the _execution_ type, so that exceeded
limits will be reported in the frontend.

Time and memory limits set directly for a task have higher priority than worker
defaults. One important constraint is that these limits cannot exceed the
limits set by the workers. Worker defaults are present as a safety measure so
that a malformed job configuration cannot block the worker forever. Worker
default limits should be reasonably high, like a gigabyte of memory and several
hours of execution time. For exact numbers please contact your administrator.
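
To make the relationship concrete, the sketch below shows what generous worker
defaults might look like. The keys, units (seconds and kilobytes) and numbers
are assumptions chosen to match the prose above, not values taken from a real
worker configuration.

```{.yml}
# Hypothetical worker default limits (assumed keys and units): deliberately
# generous, acting only as a safety net against misbehaving jobs.
limits:
    time: 7200        # a couple of hours, in seconds
    memory: 1048576   # roughly a gigabyte, in kilobytes
```

A task may tighten these limits in its own sandbox limits section, as the
execution task below does, but it can never ask for more than the worker
allows.
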
It is important to know that if the output of a program (both standard and
error) is redirected to a file, the sandbox disk quotas apply to that file as
well as to the files created directly by the program. In case the outputs are
ignored, they are redirected to `/dev/null`, which means there is no limit on
the output length (as long as the printing fits in the time limit).

```{.yml}
- task-id: "execution_1"
@@ -971,8 +971,9 @@ written.
memory: 8192
```
Fetch the sample solution from the fileserver. The base URL of the fileserver
is given in the header of the job configuration, so only the name of the
required file (its `sha1sum` in our case) is necessary.

```{.yml}
- task-id: "fetch_solution_1"
@@ -986,9 +987,10 @@ so only the name of required file (`sha1sum` in our case) is necessary.
- "${SOURCE_DIR}/reference.txt"
```
Comparison of the results is quite straightforward. It is important to set the
task type to _evaluation_, so that the return code is set to 0 if the program
is correct and 1 otherwise. We do not set our own limits, so the worker
defaults are used.

```{.yml}
- task-id: "judge_1"
