Worker implementation - directories

master
Petr Stefan 8 years ago
parent 538959f2b0
commit f9dfefb021

@@ -2761,22 +2761,34 @@ it should always end itself in time.
### Directories and files
During a job execution the worker has to handle several files -- the input
archive with the submitted sources and the job configuration, temporary files
generated during the execution, and fetched testing inputs and outputs. A
separate directory structure is created for each job and removed after the job
finishes.

The files are stored in the local filesystem of the worker machine, in a
configurable location. A job is not restricted to the specified directories
(tasks can do whatever is allowed on the target system), but it is advised not
to write outside them. In addition, sandboxed tasks are usually restricted to a
single (evaluation) directory.

The following directory structure is used during execution. The worker's
working directory (the root of the paths below) is shared by multiple worker
instances on the same computer.

- `downloads/${WORKER_ID}/${JOB_ID}` -- place to store the downloaded archive
  with the submitted sources and the job configuration
- `submission/${WORKER_ID}/${JOB_ID}` -- place to store the decompressed
  submission archive
- `eval/${WORKER_ID}/${JOB_ID}` -- place where all the execution should happen
- `temp/${WORKER_ID}/${JOB_ID}` -- place for temporary files
- `results/${WORKER_ID}/${JOB_ID}` -- place to store all files which will be
  uploaded to the fileserver, usually only the YAML result file and optionally
  a log file; other files have to be copied here explicitly if requested

Some of the directories are accessible during job execution from within the
sandbox through predefined variables. A list of these is given in the job
configuration appendix.
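
To make the layout concrete, the following is a minimal sketch (not the actual
worker code) of how the per-job directories could be composed and created with
C++17 `std::filesystem`. The function name `make_job_dirs` and its parameter
names are illustrative assumptions only.

```cpp
// Illustrative sketch only -- not the actual worker implementation.
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Creates the five per-job directories described above under the worker's
// working directory and returns their paths.
std::vector<fs::path> make_job_dirs(const fs::path &working_dir,
	const std::string &worker_id, const std::string &job_id)
{
	std::vector<fs::path> dirs;
	for (const char *base : {"downloads", "submission", "eval", "temp", "results"}) {
		fs::path dir = working_dir / base / worker_id / job_id;
		fs::create_directories(dir); // also creates missing parent directories
		dirs.push_back(dir);
	}
	return dirs;
}
```

After the job finishes, the whole `${WORKER_ID}/${JOB_ID}` subtree can be
removed again (for example with `fs::remove_all`), which corresponds to the
cleanup behaviour described above.
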
### Judges
@@ -2796,11 +2808,11 @@ implemented:
files for comparison
- Results:
    - _comparison OK_
        - exitcode: 0
        - stdout: there is a single line with a double value which
          should be 1.0
    - _comparison BAD_
        - exitcode: 1
        - stdout: there is a single line with a double value which
          should be the quality percentage of the judged file
    - _error during execution_
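
As an illustration of this interface, below is a minimal sketch of a judge. It
is not the actual ReCodEx judge; it assumes the expected file and the judged
file are passed as the two command line arguments (the argument order and the
error exit code are assumptions) and compares them token by token, reporting
the result through the exit code and a single double on stdout.

```cpp
// Minimal judge sketch following the protocol above; not the real judge.
#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char **argv)
{
	if (argc != 3) {
		std::cerr << "usage: judge <expected> <judged>" << std::endl;
		return 2; // error during execution (exit code is an assumption)
	}

	std::ifstream expected(argv[1]), judged(argv[2]);
	if (!expected || !judged) {
		std::cerr << "cannot open input files" << std::endl;
		return 2;
	}

	// Simplified token-by-token comparison; extra trailing tokens in the
	// judged file are ignored in this sketch.
	std::size_t total = 0, matching = 0;
	std::string a, b;
	while (expected >> a) {
		++total;
		if (judged >> b && a == b) { ++matching; }
	}

	if (total == matching) {
		std::cout << "1.0" << std::endl; // comparison OK
		return 0;
	}
	double quality = total ? static_cast<double>(matching) / total : 0.0;
	std::cout << quality << std::endl; // comparison BAD, quality of the judged file
	return 1;
}
```
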
