diff --git a/Worker.md b/Worker.md
index 800556c..b5d4e15 100644
--- a/Worker.md
+++ b/Worker.md
@@ -1,51 +1,54 @@
 # Worker

 ## Description

-The **worker's** job is to securely execute submitted assignments and possibly
-_evaluate_ results against model solutions provided by submitter. **Worker** is
+The worker's job is to securely execute submitted assignments and possibly
+evaluate results against model solutions provided by the submitter. The worker is
 logically divided into two parts:
+
-- **Listener** - communicates with **Broker** through
+- **Listener** - communicates with the broker through
   [ZeroMQ](http://zeromq.org/). On startup, it introduces itself to the broker.
-  Then it receives new jobs, passes them to the **Evaluator** part and sends
+  Then it receives new jobs, passes them to the **evaluator** part and sends
   back results and progress reports.
-- **Evaluator** - gets jobs from the **Listener** part, evaluates them (possibly
-  in sandbox) and notifies the other part when the evaluation ends. This part
-  also communicates with **Fileserver**, downloads supplementary files and
+- **Evaluator** - gets jobs from the **listener** part, evaluates them (possibly
+  in a sandbox) and notifies the other part when the evaluation ends. The **evaluator**
+  also communicates with the fileserver, downloads supplementary files and
   uploads detailed results.

-These parts run in separate threads that communicate through a ZeroMQ in-process
+These parts run in separate threads of the same process and communicate through a ZeroMQ in-process
 socket. This design allows the worker to keep sending `ping` messages even when
 it's processing a job.
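As an illustration of this two-thread design, here is a minimal sketch in Python. This is not the worker's code -- the real worker is written in C++ and the parts talk over a ZeroMQ `inproc` socket; the stdlib queues below merely stand in for that socket so the example is self-contained:

```python
# Illustrative sketch only: two threads of one process exchanging jobs
# and results, so the "listener" stays free to answer pings while the
# "evaluator" works.
import queue
import threading

jobs = queue.Queue()      # listener -> evaluator
results = queue.Queue()   # evaluator -> listener

def evaluator():
    while True:
        job = jobs.get()
        if job is None:   # shutdown sentinel
            break
        # ... evaluation in a sandbox would happen here ...
        results.put("result of " + job)

thread = threading.Thread(target=evaluator)
thread.start()

jobs.put("job-42")
# While the evaluator works, this thread could keep answering pings.
result = results.get()
jobs.put(None)
thread.join()
print(result)  # result of job-42
```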
-After receiving an evaluation request, **Worker** has to:
+After receiving an evaluation request, the worker has to:

-- Download the archive containing submitted source files and configuration file
-- Download any supplementary files based on the configuration file, such as test
-  inputs or helper programs (This is done on demand, using a `fetch` command
+- download the archive containing the submitted source files and the configuration file
+- download any supplementary files based on the configuration file, such as test
+  inputs or helper programs (this is done on demand, using a `fetch` command
   in the assignment configuration)
-- Evaluate the submission accordingly to job configuration
-- During evaluation progress messages can be sent back to **Broker**
-- Upload the results of the evaluation to the **Fileserver**
-- Notify **Broker** that the evaluation finished
+- evaluate the submission according to the job configuration
+- during the evaluation, progress messages can be sent back to the broker
+- upload the results of the evaluation to the fileserver
+- notify the broker that the evaluation finished

 ### Header matching

-Every worker belongs to exactly one **hardware group** and has a set of headers.
+Every worker belongs to exactly one **hardware group** and has a set of **headers**.
 These properties help the broker decide which worker is suitable for processing
 a request.

-The **hardware group** is a string identifier used to group worker machines with
-similar hardware configuration -- for example "i7-4560-quad-ssd". It's
+The hardware group is a string identifier used to group worker machines with
+similar hardware configuration, for example "i7-4560-quad-ssd". It's
 important for assignments where running times are compared to those of reference
-solutions -- we have to make sure that both programs run on simmilar hardware.
+solutions (we have to make sure that both programs run on similar hardware).
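For illustration, the broker-side matching could look roughly like this sketch (the dictionaries and the `is_suitable` helper are made-up names for this example, not the actual ReCodEx broker code):

```python
# Hypothetical sketch of how a broker could pick a suitable worker;
# the field names are invented for this illustration.
worker = {
    "hwgroup": "i7-4560-quad-ssd",
    "headers": {"env": ["c", "cpp"], "threads": 2},
}

def is_suitable(worker, request):
    # Timing comparisons against reference solutions only make sense
    # on similar hardware, so the hardware group must match exactly.
    if request["hwgroup"] != worker["hwgroup"]:
        return False
    # The requested runtime environment must be among the worker's headers.
    return request["env"] in worker["headers"]["env"]

print(is_suitable(worker, {"hwgroup": "i7-4560-quad-ssd", "env": "cpp"}))  # True
print(is_suitable(worker, {"hwgroup": "other-group", "env": "cpp"}))       # False
```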
-The **headers** are a set of key-value pairs that describe the worker's
+The headers are a set of key-value pairs that describe the worker's
 capabilities -- which runtime environments are installed, how many threads
 the worker can run or whether it measures time precisely.

-This information is sent to the broker on startup using the `init` command.
+This information is sent to the broker on startup using the `init` command.
+
 ## Architecture
+
The picture below shows the internal architecture of the worker, including its classes with their private variables and public functions.

[comment]: TODO: FIX THIS: ![Worker class diagram](https://rawgit.com/ReCodEx/wiki/master/images/worker_class_diagram.svg)

@@ -54,29 +57,12 @@ Picture below is internal architecture of worker which shows its defined classes

 ### Dependencies

-Worker specific requirements are written in this section. Some parts of this guide may be different for each type of worker, for example Compiler principles worker is not fully covered here.
-
-**Install ZeroMQ** in version at least 4.0
-
-- Debian package is `libzmq3-dev`.
-- RedHat packages are `zeromq` and `zeromq-devel`.
-
-**Install YAML-CPP library**
+Worker-specific requirements are described in this section. It covers only the basic requirements; additional runtimes or tools may be needed depending on the type of use. The package names are for CentOS if not specified otherwise.

-- Debian packages: `libyaml-cpp0.5v5` and `libyaml-cpp-dev`.
-- RedHat packages are `yaml-cpp` and `yaml-cpp-devel`.
-
-**Install libcurl library**
-
-- Debian package is `libcurl4-gnutls-dev`.
-- RedHat package is `libcurl-devel`.
-
-**Install libarchive library** (optional)
-
-Installing this package will only speed up build process, otherwise libarchive is built from source.
-
-- Debian package is `libarchive-dev`.
-- RedHat packages are `libarchive` and `libarchive-devel`. These are probably not available for RHEL.
+- ZeroMQ version at least 4.0, packages `zeromq` and `zeromq-devel` (`libzmq3-dev` on Debian)
+- YAML-CPP library, packages `yaml-cpp` and `yaml-cpp-devel` (`libyaml-cpp0.5v5` and `libyaml-cpp-dev` on Debian)
+- libcurl library, package `libcurl-devel` (`libcurl4-gnutls-dev` on Debian)
+- libarchive library as an optional dependency. Installing it will speed up the build process; otherwise libarchive is built from source during installation. Packages are `libarchive` and `libarchive-devel` (`libarchive-dev` on Debian)

 **Install Isolate from source**

@@ -89,12 +75,12 @@
 $ git checkout v1.3
 $ make
 # make install && make install-doc
 ```
-For proper work Isolate depends on several advanced features of the Linux kernel. Make sure that your kernel is compiled with `CONFIG_PID_NS`, `CONFIG_IPC_NS`, `CONFIG_NET_NS`, `CONFIG_CPUSETS`, `CONFIG_CGROUP_CPUACCT`, `CONFIG_MEMCG`. If your machine has swap enabled, also check `CONFIG_MEMCG_SWAP`. Which flags was your kernel compiled with can be found in `/boot` directory, for example in `/boot/config-4.2.6-301.fc23.x86_64` file for kernel version 4.2.6-301. Red Hat distros should have these enabled by default, for Debian you you may want to add the parameters `cgroup_enable=memory swapaccount=1` to the kernel command-line, which can be set using `GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub`.
+Isolate depends on several advanced features of the Linux kernel to work properly. Make sure that your kernel is compiled with `CONFIG_PID_NS`, `CONFIG_IPC_NS`, `CONFIG_NET_NS`, `CONFIG_CPUSETS`, `CONFIG_CGROUP_CPUACCT` and `CONFIG_MEMCG`. If your machine has swap enabled, also check `CONFIG_MEMCG_SWAP`. The flags your kernel was compiled with can be found in the `/boot` directory, in a file named `config-` followed by your kernel version (for example `/boot/config-4.2.6-301.fc23.x86_64` for kernel 4.2.6-301).
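Assuming the usual `/boot` layout, the presence of these flags for the running kernel can be checked with a quick grep (a convenience sketch, not part of the official setup):

```shell
# List which of the required kernel features are enabled; the config
# file name is derived from the running kernel version.
CONF="/boot/config-$(uname -r)"
if [ -r "$CONF" ]; then
    grep -E 'CONFIG_(PID_NS|IPC_NS|NET_NS|CPUSETS|CGROUP_CPUACCT|MEMCG)=' "$CONF"
else
    echo "kernel config not found at $CONF"
fi
```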
Red Hat based distributions should have these enabled by default; for Debian you may want to add the parameters `cgroup_enable=memory swapaccount=1` to the kernel command line, which can be set via the `GRUB_CMDLINE_LINUX_DEFAULT` variable in `/etc/default/grub`.

 For better reproducibility of results, some kernel parameters can be tweaked:
+
 - Disable address space randomization. Create the file `/etc/sysctl.d/10-recodex.conf` with the content `kernel.randomize_va_space=0`. Changes will take effect after a restart, or immediately after running the `sysctl kernel.randomize_va_space=0` command.
 - Disable dynamic CPU frequency scaling. This requires setting the cpufreq scaling governor to _performance_.
-_**TODO** - do we really need it and is it worth higher power consumption? [Red Hat setup](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Power_Management_Guide/cpufreq_governors.html), [Debian setup](https://wiki.debian.org/HowTo/CpuFrequencyScaling)_

 ### Clone worker source code repository

 ```
@@ -108,23 +94,34 @@ It's supposed that your current working directory is that one with clonned worke
 - Prepare environment running `mkdir build && cd build`
 - Build sources by `cmake ..` followed by `make -j#` where the '#' symbol refers to the number of your CPU threads.
 - Build binary package by `make package` (may require root permissions).
-Note that `rpm` and `deb` packages are build in the same time. You may need to have `rpmbuild` command (usually as `rpmbuild` or `rpm` package) or edit CPACK_GENERATOR variable _CMakeLists.txt_ file in root of source code tree.
+Note that `rpm` and `deb` packages are built at the same time. You may need the `rpmbuild` command (usually in the `rpmbuild` or `rpm` package) or to edit the CPACK_GENERATOR variable in the _CMakeLists.txt_ file in the root of the source tree.
 - Install generated package through your package manager (`yum`, `dnf`, `dpkg`).
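Put together, the steps above amount to the following shell session (run from the root of the cloned worker sources; `-j4` assumes four CPU threads):

```
$ mkdir build && cd build
$ cmake ..
$ make -j4
$ make package
```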
+The worker installation process is composed of the following steps:
+
+- create the config file `/etc/recodex/worker/config-1.yml`
+- create the systemd unit file `/etc/systemd/system/recodex-worker@.service`
+- put the main binary to `/usr/bin/recodex-worker`
+- put the judges binaries to `/usr/bin/recodex-judge-normal`, `/usr/bin/recodex-judge-shuffle` and `/usr/bin/recodex-judge-filter`
+- create the system user and group `recodex` with `/sbin/nologin` shell (if not already existing)
+- create the log directory `/var/log/recodex`
+- set ownership of the config (`/etc/recodex`) and log (`/var/log/recodex`) directories to the `recodex` user and group
+
 _Note:_ If you don't want to generate binary packages, you can just install the project with `make install` (as root). But installation through your distribution's package manager is the preferred way to keep your system clean and manageable in the long term.

 ### Install worker on Windows

-From beginning we are determined to support Windows operating system on which some of the workers may run (especially for projects in C# programming language). Support for Windows is quite hard and time consuming and there were several problems during this. To ensure capability of compilation on Windows we set up CI for Windows named [Appveyor](http://www.appveyor.com/). However installation should be easy due to provided installation script.
+From the beginning, we were determined to support the Windows operating system, on which some of the workers may run (especially for projects in the C# programming language). Support for Windows is quite hard and time consuming and there were several problems during the development. To ensure the ability to compile on Windows, we set up a Windows CI service, [Appveyor](http://www.appveyor.com/). However, installation should be easy thanks to the provided installation script.

-There are only two additional dependencies needed, **Windows 7 and higher** and **Visual Studio 2015+**.
Provided simple installation batch script should do all the work on Windows machine. Officially only VS2015 and 32-bit compilation is supported, because of hardcoded compile options in installation script. If different VS is needed or different platform, script should be changed to appropriate values, it should be quite simple and straightforward.
+There are only two additional requirements: **Windows 7 or higher** and **Visual Studio 2015+**. The provided installation batch script should do all the work on a Windows machine. Officially, only VS2015 and 32-bit compilation are supported because of hardcoded compile options in the installation script. If a different VS version or a different platform is needed, the script should be changed to the appropriate values, which is simple and straightforward.

-Mentioned script is placed in *install* directory alongside supportive scripts for UNIX systems and is named *win-build.cmd*. Provided script will do almost all the work connected with building and dependency resolving (using **NuGet** package manager and **msbuild** building system). Script should be run under 32-bit version of **Developer Command Prompt for VS2015** and from *install* directory.
+The mentioned script is placed in the *install* directory alongside supportive scripts for UNIX systems and is named *win-build.cmd*. It will do almost all the work connected with building and dependency resolving (using the **NuGet** package manager and the `msbuild` build system). The script should be run from the *install* directory under the 32-bit version of the _Developer Command Prompt for VS2015_.

 Building and installing the worker is then quite simple; the script has command line parameters which specify what will be done:

-* *-build* - Don't have to be specified. Build worker and all its tests, all is saved in *build* folder and subfolders.
-* *-clean* - Cleanup of downloaded NuGet packages and built application/libraries.
-* *-test* - Build worker and run tests on compiled test cases.
-* *-package* - Clickable installation generation using cpack and NSIS. [NSIS](http://nsis.sourceforge.net/) have to be installed on machine to get this to work.
+
+- *-build* -- The default option if none is specified. Builds the worker and its tests; everything is saved in the *build* folder and its subfolders.
+- *-clean* -- Cleans up downloaded NuGet packages and built applications/libraries.
+- *-test* -- Builds the worker and runs tests on compiled test cases.
+- *-package* -- Generates a clickable installer using cpack and [NSIS](http://nsis.sourceforge.net/) (which has to be installed on the machine for this to work).

 ```
 install> win-build.cmd  # same as: win-build.cmd -build
@@ -137,68 +134,18 @@ All build binaries and cmake temporary files can be found in *build* folder,
 typically there will be a *Release* subfolder which will contain the compiled
 application with all needed DLLs.
 Once the clickable installer binary is created, it can be found in the *build*
 folder named something like
-*recodex-worker-VERSION-win32.exe*.
-
-**Clickable installation feature:**
+*recodex-worker-VERSION-win32.exe*. A sample screenshot is shown in the following picture.

 ![NSIS Installation](https://github.com/ReCodEx/wiki/blob/master/images/nsis_installation.png)

-### Compilers
-For evaluating jobs you have to install tools that fit your needs. Here are some useful tips of different compillers to install.
-
-**C/C++**
-
-For compiling C and C++ programs is used `GCC` compiler. Maybe you could install install most of the staff by executing following command, but it's not needed.
-```
-# yum group install "Development Tools"
-```
-To install the compiler separately, you could install it from the distribution's repositories.
-``` -# yum install gcc gcc-c++ make -``` -To get reasonably new version, you may consider installing [Red Hat Developer Toolset 4](https://access.redhat.com/documentation/en-US/Red_Hat_Developer_Toolset/4/html-single/4.0_Release_Notes/index.html), or install these from Fedora repo. In Debian, _testing_ repo can be used. - -**C\#** - -For new versions of **Mono**, we'll use [Xamarin repositories](http://www.mono-project.com/download/#download-lin). For Red Hat based OS: -``` -# rpm --import "http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF" -# yum-config-manager --add-repo http://download.mono-project.com/repo/centos/ -# yum upgrade -# yum install mono-complete -``` -For Debian based OS: -``` -$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF -$ echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list -$ sudo apt-get update -$ sudo apt-get install mono-complete -``` - -**Free Pascal** - -Free Pascal compiler 3.0.0 can be downloaded as `rpm` packages from [official website](http://www.freepascal.org/down/x86_64/linux-austria.var). In Debian, this version is [experimental](https://packages.debian.org/experimental/fpc-3.0.0), but seems to be stable enough. - -**Java** -_**TODO**_ - - ## Configuration and usage -Following text describes how to set up and run **worker** program. It's supposed to have required binaries installed. For instructions see [[Installation|Worker#installation]] section. Also, using systemd is recommended for best user experience, but it's not required. Almost all modern Linux distributions are using systemd now. 
-The **worker** installation program is composed of following steps:
-- create config file `/etc/recodex/worker/config-1.yml`
-- create _systemd_ unit file `/etc/systemd/system/recodex-worker@.service`
-- put main binary to `/usr/bin/recodex-worker`
-- put judges binaries to `/usr/bin/recodex-judge-normal`, `/usr/bin/recodex-judge-shuffle` and `/usr/bin/recodex-judge-filter`
-- create system user and group `recodex` with `/sbin/nologin` shell (if not existing)
-- create log directory `/var/log/recodex`
-- set ownership of config (`/etc/recodex`) and log (`/var/log/recodex`) directories to `recodex` user and group
+The following text describes how to set up and run the **worker** program. It assumes the required binaries are installed. Also, using systemd is recommended for the best user experience, but it's not required. Almost all modern Linux distributions use systemd now.

 ### Default worker configuration

-Worker should have some default configuration which is applied to worker itself or may be used in given jobs (implicitly if something is missing, or explicitly with special variables). This configuration should be hardcoded and can be rewritten by explicitly declared configuration file. Format of this configuration is yaml like in the job config.
+The worker has a default configuration which is applied to the worker itself or may be used in given jobs (implicitly if something is missing, or explicitly with special variables). This configuration should be hardcoded and can be overridden by an explicitly declared configuration file. The format of this configuration is YAML, with a structure similar to the job configuration.

 #### Configuration items

@@ -224,7 +171,7 @@ Mandatory items are bold, optional italic.
 - _level_ - level of logging, one of `off`, `emerg`, `alert`, `critical`, `err`, `warn`, `notice`, `info` and `debug`
 - _max-size_ - maximal size of the log file before rotating
 - _rotations_ - number of rotations kept
-- _limits_ - default sandbox limits for this worker.
All items are described in [[Assignments|Overall architecture#configuration-items]] section.
+- _limits_ - default sandbox limits for this worker. All items are described in the assignments section of the job configuration description. If some limits are not set in the job configuration, defaults from the worker config are used. Also, limits in the job configuration cannot exceed the worker's limits; in such a case the worker's defaults are used as the maximum for the job.

 #### Example config file

@@ -236,7 +183,7 @@ max-broker-liveness: 10
 headers:
     env:
     - c
-    - python
+    - cpp
 threads: 2
 hwgroup: "group1"
 working-directory: /tmp/recodex
@@ -247,7 +194,8 @@ file-managers:
 file-cache: # only in case that there is cache module
     cache-dir: "/tmp/recodex/cache"
 logger:
-    file: "/var/log/recodex/worker" # w/o suffix - actual names will be worker.log, worker.1.log, ...
+    file: "/var/log/recodex/worker" # w/o suffix - actual names will
+                                    # be worker.log, worker.1.log, ...
     level: "debug" # level of logging
     max-size: 1048576 # 1 MB; max size of file before log rotation
     rotations: 3 # number of rotations kept
@@ -257,7 +205,7 @@ limits:
     extra-time: 2 # seconds
     stack-size: 0 # normal in KB, but 0 means no special limit
     memory: 50000 # in KB
-    parallel: 1 # time and memory limits are merged
+    parallel: 1
     disk-size: 50
     disk-files: 5
 environ-variable:
@@ -299,6 +247,7 @@ For further information about using systemd please refer to systemd documentation.

 ### Adding new worker

 To add a new worker you need to do a few steps:
+
 - Make up a unique string ID.
 - Copy the default configuration file `/etc/recodex/worker/config-1.yml` to the same directory and name it `config-.yml`
 - Edit that config file to fit your needs. Note that you must at least change the _worker-id_ and _logger file_ values to be unique.
@@ -311,26 +260,33 @@ To add a new worker you need to do a few steps:

 ## Sandboxes

 ### Isolate

-Isolate is used as one and only sandbox for linux-based operating systems.
Headquarters of this project can be found at [https://github.com/ioi/isolate](https://github.com/ioi/isolate) and more of its installation and setup can be found in [Installation](#installation) section.
+
+Isolate is used as the one and only sandbox for Linux-based operating systems. The project's home can be found on [GitHub](https://github.com/ioi/isolate) and more about its installation and setup can be found in the [installation](#installation) section.

// TODO: further desc

#### Limit isolate boxes to particular cpu or memory node
+
A new feature in isolate is the possibility to limit an isolate box to one or more CPU or memory nodes. This functionality is provided by the cpusets kernel mechanism and is now integrated in isolate. Only `cpuset.cpus` and `cpuset.mems` can be set, which should be just fine for sandbox purposes. As this is kernel functionality, further description can be found in the cpuset manual page or in the Linux documentation in `linux/Documentation/cgroups/cpusets.txt`.

As previously stated, these settings can be applied to particular isolate boxes and have to be written in the isolate configuration. The standard configuration path should be `/usr/local/etc/isolate` but it may depend on your installation process. Configuration of cpusets there is really simple and is described in the example below.
```
-box0.cpus = 0 # assign processor with id 0 to isolate box with id 0
+box0.cpus = 0 # assign processor ID 0 to isolate box with ID 0
 box0.mems = 0 # assign memory node with ID 0
-# if not set linux by itself will decide where should sandboxed program run
-box2.cpus = 1-3 # assign range of processors to isolate box with id 2
+# if not set, Linux will decide by itself where the
+# sandboxed programs should run
+box2.cpus = 1-3 # assign a range of processors to isolate box 2
 box2.mems = 4-7 # assign a range of memory nodes
-box3.cpus = 1,2,3 # assign list of processors to isolate box with id 3
+box3.cpus = 1,2,3 # assign a list of processors to isolate box 3
 ```

**cpuset.cpus:** The cpus limitation restricts the sandboxed program to the processor threads set in the configuration. On hyperthreaded processors this means that all virtual threads are assignable, not only the physical ones. The value can be a single number, a list of values separated by commas, or a range with a hyphen delimiter.

**cpuset.mems:** This value is particularly handy on NUMA systems, which have several memory nodes. On standard desktop computers this value should always be zero because only one independent memory node is present. As stated for the `cpus` limitation, the value can be a single number, a comma-separated list, or a hyphenated range.

+### WrapSharp
+
+WrapSharp is a sandbox for C# programs, itself written in C#. We wrote it as a proof-of-concept sandbox for use in a Windows environment. However, it's not properly tested and integrated into the worker yet. With just a little bit of effort it could become a working sandbox for C# programs on Windows.
+
 ## Cleaner