# Worker

## Description

**Worker's** main role is to securely execute a given submission and possibly _evaluate_ the results against model solutions provided by the submitter. **Worker** is logically divided into two parts:

- **Listener** - listens and communicates with **Broker** through [ZeroMQ](http://zeromq.org/). It receives new jobs, communicates with the **Evaluator** part and sends back results or progress.
- **Evaluator** - gets jobs to evaluate from the **Listener** part, evaluates them (possibly in a sandbox) and lets the other part know that the evaluation has ended. This part also communicates with **Fileserver**, downloads needed files and uploads detailed results.

After receiving an evaluation request, **Worker** has to:

- Download the archive containing the submitted source files and the configuration file
- Download any supplementary files based on the configuration file, such as test inputs or helper programs (this is done on demand, using a `fetch` command in the assignment configuration)
- Evaluate the submission according to the job configuration
- Send progress states back to **Broker** during the evaluation
- Upload the results of the evaluation to the **Fileserver**
- Notify **Broker** that the evaluation finished

## Architecture

The picture below shows the overall internal architecture of the worker, including its classes with their private variables and public functions. A vector version of this picture is available [here](https://github.com/ReCodEx/GlobalWiki/raw/master/images/Worker_Internal_Architecture.pdf).

![Internal Worker architecture](https://github.com/ReCodEx/GlobalWiki/blob/master/images/Worker_Internal_Architecture.png)

## Installation

### Dependencies

Worker specific requirements are described in this section. Some parts of this guide may differ for each type of worker, for example the Compiler principles worker is not fully covered here.

**Install libarchive library** (optional)

Installing this package will only speed up the build process, otherwise libarchive is built from source.

- Debian package is `libarchive-dev`.
- RedHat packages are `libarchive` and `libarchive-devel`. These are probably not available for RHEL.

**Install Isolate from source**

First, we need to compile the Isolate sandbox from source and install it. Assume that we keep the source code in the `/opt/src` directory. For building the man page you need to have the `asciidoc` package installed.

```
$ cd /opt/src
$ git clone https://github.com/ioi/isolate.git
$ cd isolate
$ make
# make install && make install-doc
```

For proper operation Isolate depends on several advanced features of the Linux kernel. Make sure that your kernel is compiled with `CONFIG_PID_NS`, `CONFIG_IPC_NS`, `CONFIG_NET_NS`, `CONFIG_CPUSETS`, `CONFIG_CGROUP_CPUACCT` and `CONFIG_MEMCG`. If your machine has swap enabled, also check `CONFIG_MEMCG_SWAP`. The flags your kernel was compiled with can be found in the `/boot` directory, for example in the `/boot/config-4.2.6-301.fc23.x86_64` file for kernel version 4.2.6-301. Red Hat distros should have these enabled by default; for Debian you may want to add the parameters `cgroup_enable=memory swapaccount=1` to the kernel command line, which can be set using `GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub`.

For better reproducibility of results, some kernel parameters can be tweaked (see the sketch below the list):

- Disable address space randomization. Create file `/etc/sysctl.d/10-recodex.conf` with content `kernel.randomize_va_space=0`. Changes will take effect after restart, or run the `sysctl kernel.randomize_va_space=0` command.
- Disable dynamic CPU frequency scaling. This requires setting the cpufreq scaling governor to _performance_. _**TODO** - do we really need it and is it worth the higher power consumption? [Red Hat setup](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Power_Management_Guide/cpufreq_governors.html), [Debian setup](https://wiki.debian.org/HowTo/CpuFrequencyScaling)_
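A minimal sketch of applying both tweaks as root might look as follows. It assumes the kernel exposes the usual sysfs cpufreq interface (paths may differ with some drivers), and the governor change does not persist across reboots:

```
# echo "kernel.randomize_va_space=0" > /etc/sysctl.d/10-recodex.conf
# sysctl kernel.randomize_va_space=0
# echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```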
### Clone worker source code repository

```
$ git clone https://github.com/ReCodEx/worker.git
$ git submodule update --init
```

### Install worker on Linux

It is assumed that your current working directory is the one with the cloned worker source code. The whole sequence is recapped in the example below the list.

- Prepare the environment by running `mkdir build && cd build`
- Build the sources with `cmake ..` followed by `make -j#`, where the '#' symbol stands for the number of your CPU threads.
- Build a binary package with `make package` (may require root permissions). Note that `rpm` and `deb` packages are built at the same time. You may need to have the `rpmbuild` command available (usually in the `rpmbuild` or `rpm` package) or edit the CPACK_GENERATOR variable in the _CMakeLists.txt_ file in the root of the source tree.
- Install the generated package through your package manager (`yum`, `dnf`, `dpkg`).
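For example, the whole build on an RPM-based distribution may look roughly like this; the thread count, the use of `dnf` and the resulting package name are illustrative (the actual file name depends on the version and CPack settings):

```
$ mkdir build && cd build
$ cmake ..
$ make -j4
$ make package
# dnf install ./recodex-worker-*.rpm
```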
_Note:_ If you do not want to generate binary packages, you can just install the project with `make install` (as root). However, installation through your distribution's package manager is the preferred way to keep your system clean and manageable in the long term.

### Install worker on Windows

From the beginning we were determined to support the Windows operating system, on which some of the workers may run (especially for projects in the C# programming language). Supporting Windows is quite hard and time consuming and there were several problems along the way. To make sure that compilation on Windows keeps working, we set up a Windows CI service, [Appveyor](http://www.appveyor.com/). However, installation should be easy thanks to the provided installation script.

There are only two additional requirements, **Windows 7 or higher** and **Visual Studio 2015+**. The provided simple installation batch script should do all the work on a Windows machine. Officially, only VS2015 and 32-bit compilation are supported, because of hardcoded compile options in the installation script. If a different VS version or a different platform is needed, the script should be changed to the appropriate values; this should be quite simple and straightforward. The script is placed in the *install* directory alongside supportive scripts for UNIX systems and is named *win-build.cmd*.

The provided script will do almost all the work connected with building and dependency resolution (using the **NuGet** package manager and the **msbuild** build system). The script should be run from the 32-bit version of the **Developer Command Prompt for VS2015** and from the *install* directory. Building and installing the worker is then quite simple; the script has command line parameters which can be used to specify what will be done:

* *-build* - Does not have to be specified (it is the default). Builds the worker and all its tests; everything is saved in the *build* folder and its subfolders.
* *-clean* - Cleans up downloaded NuGet packages and built applications/libraries.
* *-test* - Builds the worker and runs tests on the compiled test cases.
* *-package* - Generates a clickable installer using cpack and NSIS. [NSIS](http://nsis.sourceforge.net/) has to be installed on the machine for this to work.

```
install> win-build.cmd    # same as: win-build.cmd -build
install> win-build.cmd -clean
install> win-build.cmd -test
install> win-build.cmd -package
```

All built binaries and CMake temporary files can be found in the *build* folder; typically there will be a *Release* subfolder which contains the compiled application with all needed DLLs. Once the clickable installation binary is created, it can be found in the *build* folder under a name like *recodex-worker-${VERSION}-win32.exe*.

**Clickable installation feature:**

![NSIS Installation](https://github.com/ReCodEx/GlobalWiki/blob/master/images/nsis_installation.png)

### Compilers

For evaluating jobs you have to install the tools that fit your needs. Here are some useful tips for installing the different compilers.

**C/C++**

The `GCC` compiler is used for compiling C and C++ programs. You could install most of the required tools by executing the following command, but it is not strictly needed.

```
# yum group install "Development Tools"
```

To install the compiler separately, you can install it from the distribution's repositories.

```
# yum install gcc gcc-c++ make
```

To get a reasonably new version, you may consider installing [Red Hat Developer Toolset 4](https://access.redhat.com/documentation/en-US/Red_Hat_Developer_Toolset/4/html-single/4.0_Release_Notes/index.html), or install these from the Fedora repositories. In Debian, the _testing_ repository can be used.

**C\#**

For new versions of **Mono**, we will use the [Xamarin repositories](http://www.mono-project.com/download/#download-lin).

For Red Hat based OS:

```
# rpm --import "http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF"
# yum-config-manager --add-repo http://download.mono-project.com/repo/centos/
# yum upgrade
# yum install mono-complete
```

For Debian based OS:

```
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
$ echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
$ sudo apt-get update
$ sudo apt-get install mono-complete
```

**Free Pascal**

Free Pascal compiler 3.0.0 can be downloaded as `rpm` packages from the [official website](http://www.freepascal.org/down/x86_64/linux-austria.var). In Debian, this version is [experimental](https://packages.debian.org/experimental/fpc-3.0.0), but it seems to be stable enough.

**Java**

_**TODO**_
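Whichever of these toolchains you install, it can be useful to verify that they are installed and visible from the command line. For example (only the compilers you actually installed apply, and the reported versions will differ per system):

```
$ gcc --version
$ mono --version
$ fpc -iV
```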
## Configuration and usage

The following text describes how to set up and run the **worker** program. It is assumed that the required binaries are already installed. For instructions see the [[Installation|Worker#installation]] section. Also, using systemd is recommended for the best user experience, but it is not required. Almost all modern Linux distributions are using systemd now.

Installation of the **worker** program performs the following steps on your computer:

- creates the config file `/etc/recodex/worker/config-1.yml`
- creates the _systemd_ unit file `/etc/systemd/system/recodex-worker@.service`
- puts the main binary to `/usr/bin/recodex-worker`
- puts the judge binaries to `/usr/bin/recodex-judge-normal`, `/usr/bin/recodex-judge-shuffle` and `/usr/bin/recodex-judge-filter`
- creates a system user and group `recodex` with `/sbin/nologin` shell (if not already existing)
- creates the log directory `/var/log/recodex`
- sets ownership of the config (`/etc/recodex`) and log (`/var/log/recodex`) directories to the `recodex` user and group

### Default worker configuration

The worker has a default configuration which is applied to the worker itself or may be used in given jobs (implicitly if something is missing, or explicitly through special variables). This configuration is hardcoded and can be overridden by an explicitly declared configuration file. The format of this configuration is YAML, just like the job configuration.

#### Configuration items

Mandatory items are bold, optional italic.

- **worker-id** - unique identification of the worker on one server. This id is used by the _isolate_ sandbox on Linux systems, so make sure to meet isolate's requirements (by default a number from 1 to 999).
- **broker-uri** - URI of the broker (hostname or IP address, including port, ...)
- _broker-ping-interval_ - how often ping messages are sent to the broker, in milliseconds.
- _max-broker-liveness_ - specifies how many pings in a row the broker may miss before the connection is considered dead.
- _headers_ - headers specifying the worker's capabilities
    - _env_ - map of environment variables
    - _threads_ - information about the threads available to this worker
- **hwgroup** - hardware group of this worker. The hardware group must describe the worker's hardware and software capabilities and it is the main item for the broker's routing decisions.
- _working-directory_ - directory where all needed files will be stored. Can be the same for multiple workers on one server.
- **file-managers** - addresses and credentials of all file managers used (i.e. all the different frontends using this worker)
    - **hostname** - URI of the file manager
    - _username_ - username for HTTP authentication (if needed)
    - _password_ - password for HTTP authentication (if needed)
- _file-cache_ - configuration of the caching feature
    - _cache-dir_ - path to the caching directory. Can be the same for multiple workers.
- _logger_ - settings of the logging capabilities
    - _file_ - path to the log file, without suffix. `/var/log/recodex/worker` will produce `worker.log`, `worker.1.log`, ...
    - _level_ - level of logging, one of `off`, `emerg`, `alert`, `critical`, `err`, `warn`, `notice`, `info` and `debug`
    - _max-size_ - maximal size of the log file before rotation
    - _rotations_ - number of rotations kept
- _limits_ - default sandbox limits for this worker. All items are described on the [assignments page](https://github.com/ReCodEx/GlobalWiki/wiki/Assignments-overview#configuration-items).

#### Example config file

```{.yml}
worker-id: 1
broker-uri: tcp://localhost:9657
broker-ping-interval: 10  # milliseconds
max-broker-liveness: 10
headers:
    env:
        - c
        - python
    threads: 2
hwgroup: "group1"
working-directory: /tmp/recodex
file-managers:
    - hostname: "http://localhost:9999"  # port is optional
      username: ""  # can be ignored in specific modules
      password: ""  # can be ignored in specific modules
file-cache:  # only in case that there is a cache module
    cache-dir: "/tmp/recodex/cache"
logger:
    file: "/var/log/recodex/worker"  # w/o suffix - actual names will be worker.log, worker.1.log, ...
    level: "debug"  # level of logging
    max-size: 1048576  # 1 MB; max size of file before log rotation
    rotations: 3  # number of rotations kept
limits:
    time: 5  # in secs
    wall-time: 6  # seconds
    extra-time: 2  # seconds
    stack-size: 0  # normally in KB, but 0 means no special limit
    memory: 50000  # in KB
    parallel: 1  # time and memory limits are merged
    disk-size: 50
    disk-files: 5
    environ-variable:
        ISOLATE_BOX: "/box"
        ISOLATE_TMP: "/tmp"
    bound-directories:
        - src: /tmp/recodex/eval_5
          dst: /evaluate
          mode: RW,NOEXEC
```

### Running worker

For easy and comfortable management of the worker, a systemd unit file is included. It integrates the worker nicely into your Linux system and allows you, for example, to run the worker automatically after system startup. It is expected that there will be more than one worker on a server, so the provided unit file is templated. Each instance of the worker has a unique string identifier, which is used for managing that instance through systemd. By default, only one worker instance is ready to use after installation; its ID is "1".

- Starting the worker with ID "1" can be done this way:

```
# systemctl start recodex-worker@1.service
```

Check with

```
# systemctl status recodex-worker@1.service
```

whether the worker is running. You should see an "active (running)" message.

- The worker can be stopped or restarted accordingly using the `systemctl stop` and `systemctl restart` commands.
- If you want to run the worker after system startup, run:

```
# systemctl enable recodex-worker@1.service
```

For further information about using systemd please refer to the systemd documentation.
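To see what a running instance is actually doing, you can follow either its log file (the path comes from the `logger` section of the example configuration above) or the systemd journal of the unit, which captures the service state and anything printed to standard output; for example:

```
# tail -f /var/log/recodex/worker.log
# journalctl -u recodex-worker@1.service
```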
### Adding new worker

To add a new worker you need to do a few steps:

- Get a unique string ID. Think of one.
- Copy the default configuration file `/etc/recodex/worker/config-1.yml` to the same directory and name it `config-<your_id>.yml`
- Edit that config file to fit your needs. Note that you must at least change the _worker-id_ and _logger file_ values to be unique.
- Run the new instance using

```
# systemctl start recodex-worker@<your_id>.service
```

## Sandboxes

TODO

## Cleaner

TODO