@todo: why is there division to internal and external tasks and why it is needed
@todo: division to initiation, execution and evaluation why and what for
@todo: in what order should tasks be executed, how to sort them
@todo: how to solve problem with specific worker environment, mention internal job variables
At this point the worker consists of two internal parts: a listening one and an executing one. The implementation of the first one is quite straightforward and clear, so let us discuss what should happen in the execution subsystem. Jobs as units of work can vary greatly and do completely different things, which means that both the configuration and the worker have to be prepared for this kind of generality. The configuration and its solution were already discussed above, and the implementation in the worker is then quite straightforward. The worker has internal structures into which it loads and stores the metadata given in the configuration. The whole job is mapped to a job metadata structure, and tasks are mapped either to external ones or internal ones (internal commands have to be defined within the worker); the two kinds differ in whether they are executed in a sandbox or as internal worker commands.
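To illustrate this mapping, the following is a minimal Python sketch of what such internal structures could conceptually look like; the class and field names are invented for the example and do not reflect the real worker's schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TaskMeta:
    """One task of a job; names here are illustrative, not the real schema."""
    task_id: str
    command: str                      # e.g. "fetch", "cp", or a binary to run
    args: List[str] = field(default_factory=list)
    sandbox: Optional[str] = None     # set for external tasks, None for internal

    @property
    def is_external(self) -> bool:
        # external tasks run inside a sandbox; internal ones are worker built-ins
        return self.sandbox is not None

@dataclass
class JobMeta:
    """Whole job as loaded from the job configuration."""
    job_id: str
    tasks: List[TaskMeta] = field(default_factory=list)
```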
@todo: task types: initiation, execution, evaluation and inner... mainly describe what is happening if inner task fails
@todo: maybe describe folders within execution and what they can be used for?
After a job successfully arrives, the worker has to prepare a new execution environment; then the solution archive has to be downloaded from the fileserver and extracted. The job configuration is located within these files, so it is loaded into the internal structures and executed. After that, the results are uploaded back to the fileserver. These are the basic steps which are really necessary for the whole execution, and they have to be executed in this precise order.
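A minimal sketch of this fixed sequence follows; the helper names, the URL layout, and the configuration file name are all assumptions made for the example, not the worker's actual API.

```python
import tempfile
import urllib.request
import zipfile
from pathlib import Path

# The three helpers below stand for logic described elsewhere in this
# chapter; their names and signatures are invented for this sketch.
def load_job_config(path: Path) -> dict: ...
def execute_tasks(job: dict, workdir: Path) -> Path: ...
def upload_results(fileserver: str, job_id: str, results: Path) -> None: ...

def process_job(job_id: str, fileserver: str) -> None:
    """Run the mandatory steps in their required order."""
    workdir = Path(tempfile.mkdtemp(prefix=f"job-{job_id}-"))  # 1. new environment
    archive = workdir / "solution.zip"
    urllib.request.urlretrieve(f"{fileserver}/{job_id}", archive)  # 2. download
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(workdir)                                 # 3. extract
    job = load_job_config(workdir / "job-config.yml")          # 4. load configuration
    results = execute_tasks(job, workdir)                      # 5. execute tasks
    upload_results(fileserver, job_id, results)                # 6. upload results
```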
An interesting problem arises with supplementary files (inputs, sample outputs). Two approaches can be considered: supplementary files can be downloaded either at the start of the execution or during it. If the files are downloaded at the beginning, the execution has not really started at that point, so if there are network problems the worker finds out right away and can abort the execution without running a single task. A slight complication arises if some of the files need to have a particular name (e.g. the solution assumes that the input is named `input.txt`); in this scenario the downloaded files cannot be renamed at the beginning but only during execution, which is somewhat impractical and not easily observed. The second solution, downloading the files on the fly, has the opposite problem: if there are network problems, the worker discovers them during execution, possibly when almost the whole execution is done, which is not ideal either if we care about wasted hardware resources. On the other hand, with this approach users have quite fine-grained control of the execution flow and know exactly which files are available during execution, which is probably more appealing from the users' perspective. Based on that, downloading supplementary files using 'fetch' tasks during execution was chosen and implemented.
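To make the chosen approach concrete, a fetch task could conceptually be represented as follows, reusing the illustrative `TaskMeta` structure from the sketch above; the task id, the hashed file name, and the argument layout are invented for the example.

```python
fetch_input = TaskMeta(            # TaskMeta from the sketch above
    task_id="fetch_test1_input",
    command="fetch",               # internal worker command, no sandbox needed
    args=["3b5d8e",                # hashed file name on the fileserver (invented)
          "input.txt"],            # name under which the solution expects the file
)
```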
#### Caching mechanism
As described in the fileserver section, stored supplementary files have special filenames which reflect the hashes of their content; as such, no duplicates are stored on the fileserver. The worker can use this feature too and cache these files. Of course, this requires a system which can download a file, store it in the cache, and delete it after some period of inactivity. Because there can be multiple worker instances on a particular server, it is not efficient to have this system in every worker on its own, so it is preferable to have this feature shared among all workers on the same machine. One solution may again be a separate service connected to the workers through the network which would provide such functionality, but this would mean another component with its own communication for a purpose where it is not really needed. Mainly, though, it would be a single point of failure, and its outage would be quite a problem. Therefore another solution was chosen, which assumes that the worker has access to a specified cache folder into which it can download supplementary files and from which it can copy them. This means every worker is able to maintain downloads to the cache, but what a worker is not able to do properly is delete unused files after some time. For that, a single-purpose component called 'cleaner' is introduced. It is a simple script, executed via cron, which is able to delete files that have been unused for some time. Together with the worker's fetching feature, the cleaner completes the machine-specific caching system.
The cleaner, as mentioned, is a simple script which is executed regularly as a cron job. Given the caching system introduced above, there are few options for how the cleaner can be implemented. Common filesystems usually support two relevant timestamps: `last access time` and `last modification time`. Files in the cache are downloaded once and then only copied, which means that the last modification time is set only once, on creation of the file, while the last access time should be updated on every copy. This implies that the last access time is what is needed here. However, while the last modification time is widely used by operating systems, the last access time is often not enabled by default; more on this subject can be found [here](https://en.wikipedia.org/wiki/Stat_%28system_call%29#Criticism_of_atime). For proper cleaner functionality, the filesystem used by the worker for caching has to have last access time tracking enabled for files.
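The distinction between the two timestamps can be observed directly; a tiny Python sketch follows, with an illustrative cache path.

```python
import os
import time

st = os.stat("/var/cache/worker/3b5d8e")                  # illustrative cache entry
print("since last access:", time.time() - st.st_atime)    # refreshed by each copy
print("since last write: ", time.time() - st.st_mtime)    # set once at download
# Note: mounts using 'noatime' (or Linux's default 'relatime') update
# st_atime rarely or never, which would break atime-based eviction.
```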
Having the cleaner as a separate component, with the caching itself handled in the worker, is somewhat blurry, and it is not immediately obvious that it works without race conditions. The goal here is not to have a system without races, but a system which can recover from them. The implementation of the caching system is based upon atomic operations of the underlying filesystem. A description of one possible robust implementation follows, starting with the worker side (a code sketch follows after the list):
- the worker discovers a fetch task which should download a supplementary file
- the worker takes the name of the file and tries to copy it from the cache folder to its working folder
- if successful, the last access time is updated (by the filesystem itself) and the whole operation is done
- if not successful, the file has to be downloaded
- the file is downloaded from the fileserver to the working folder
- the downloaded file is then copied to the cache
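The following is a minimal Python sketch of these worker-side steps under the stated assumptions; the function name, the fileserver URL layout, and the use of a temp-file-plus-rename for atomic publishing are choices made for the example, not the worker's actual implementation.

```python
import os
import shutil
import tempfile
import urllib.request
from pathlib import Path

def fetch(hashed_name: str, dest: Path, cache_dir: Path, fileserver: str) -> None:
    """Fetch one supplementary file, preferring the machine-local cache."""
    cached = cache_dir / hashed_name
    try:
        shutil.copyfile(cached, dest)     # cache hit; the read refreshes atime
        return
    except FileNotFoundError:
        pass                              # cache miss, or cleaner removed it just now
    # Download straight into the working folder first, so the job can continue
    # even if publishing into the cache fails for any reason.
    urllib.request.urlretrieve(f"{fileserver}/{hashed_name}", dest)
    # Publish to the cache atomically: write a temporary file, then rename it,
    # so no other worker can ever observe a partially copied file.
    fd, tmp = tempfile.mkstemp(dir=cache_dir)
    os.close(fd)
    shutil.copyfile(dest, tmp)
    os.replace(tmp, cached)
```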
The previous implementation concerns only the worker; the cleaner can intervene at any time and delete files. The implementation of the cleaner follows (again with a code sketch after the list):
- on startup, the cleaner stores the current timestamp, which will be used as a reference for comparison, and loads the configuration values for the cache folder and the maximal file age
- a loop goes through all files and even directories in the specified cache folder
- the last access time of each file or folder is detected
- the last access time is subtracted from the reference timestamp, yielding the difference
- the difference is compared against the specified maximal file age; if the difference is greater, the file or folder is deleted
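A condensed Python sketch of this loop follows; the cache path and the maximal age are hypothetical configuration values, and the real cleaner would read them from its configuration file.

```python
import shutil
import time
from pathlib import Path

def clean_cache(cache_dir: Path, max_age_seconds: float) -> None:
    """Delete cache entries unused for longer than max_age_seconds."""
    reference = time.time()                   # fixed reference for the whole run
    for entry in cache_dir.iterdir():
        atime = entry.stat().st_atime         # last access time of the entry
        if reference - atime > max_age_seconds:
            if entry.is_dir():
                shutil.rmtree(entry, ignore_errors=True)
            else:
                entry.unlink(missing_ok=True)  # tolerate races with workers

if __name__ == "__main__":
    # Hypothetical values: weekly eviction from an assumed cache location.
    clean_cache(Path("/var/cache/worker"), max_age_seconds=7 * 24 * 3600)
```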
The previous description implies that there is a gap between detecting the last access time and deleting the file within the cleaner. In this gap a worker may access the file, and the file is deleted anyway, but this is fine: the file is gone, yet the worker already has its copy. Another potential problem is two workers downloading the same file, but this is not a problem either, since the file is first downloaded to the working folder and only afterwards copied to the cache. And even if something else unexpectedly fails and the fetch task therefore fails during execution, that should still be fine, because fetch tasks should have the 'inner' task type, which implies that a failure in such a task stops the whole execution and the job is reassigned to another worker. This serves as the last salvation in case everything else goes wrong.
#### Sandboxing
@todo: sandboxing, what possibilites are out there (linux, Windows), what are general and really needed features, mention isolate, what are isolate features