From 788894c33c1ded25c6a9c6bf9f6342f281fbcb7e Mon Sep 17 00:00:00 2001 From: Martin Polanka Date: Thu, 22 Feb 2018 15:38:37 +0100 Subject: [PATCH] Move installation and configuration to their respective services --- Installation.md | 559 +--------------------------------------- System-configuration.md | 248 ------------------ 2 files changed, 2 insertions(+), 805 deletions(-) diff --git a/Installation.md b/Installation.md index d9e68ac..d2e5bfc 100644 --- a/Installation.md +++ b/Installation.md @@ -35,6 +35,8 @@ require small tweaks to work properly. ## Ansible installer +DEPRECATED - Ansible installer is no longer working! + For automatic installation is used a set of Ansible scripts. Ansible is one of the best known and used tools for automatic server management. It is required only to have SSH access to the server and Ansible installed on the client @@ -150,355 +152,8 @@ just use component's YAML file instead of _recodex.yml_. Ansible expects to have password-less access to the remote machines. If you have not such setup, use options `--ask-pass` and `--ask-become-pass`. - ## Manual installation -### Worker - -#### Dependencies - -Worker specific requirements are written in this section. It covers only basic -requirements, additional runtimes or tools may be needed depending on type of -use. The package names are for CentOS if not specified otherwise. - -- ZeroMQ in version at least 4.0, packages `zeromq` and `zeromq-devel` - (`libzmq3-dev` on Debian) -- YAML-CPP library, `yaml-cpp` and `yaml-cpp-devel` (`libyaml-cpp0.5v5` and - `libyaml-cpp-dev` on Debian) -- libcurl library `libcurl-devel` (`libcurl4-gnutls-dev` on Debian) -- libarchive library as optional dependency. Installing will speed up build - process, otherwise libarchive is built from source during installation. 
  The package names are `libarchive` and `libarchive-devel` (`libarchive-dev`
  on Debian).

**Isolate** (only for Linux installations)

First, the Isolate sandbox needs to be compiled from source and installed. The
current worker is tested against version 1.3, so this version needs to be
checked out. We assume the source code is kept in the `/opt/src` directory.
Building the man page requires the `asciidoc` package.

```
$ cd /opt/src
$ git clone https://github.com/ioi/isolate.git
$ cd isolate
$ git checkout v1.3
$ make
# make install && make install-doc
```

For proper operation, Isolate depends on several advanced features of the
Linux kernel. Make sure that your kernel is compiled with `CONFIG_PID_NS`,
`CONFIG_IPC_NS`, `CONFIG_NET_NS`, `CONFIG_CPUSETS`, `CONFIG_CGROUP_CPUACCT`
and `CONFIG_MEMCG`. If your machine has swap enabled, also check
`CONFIG_MEMCG_SWAP`. The flags your kernel was compiled with can be found in
the `/boot` directory, in the file named `config-` followed by your kernel
version. Red Hat based distributions should have these enabled by default; on
Debian you may want to add the parameters `cgroup_enable=memory swapaccount=1`
to the kernel command line, which can be done by adding them to the
`GRUB_CMDLINE_LINUX_DEFAULT` value in the `/etc/default/grub` file.

For better reproducibility of results, some kernel parameters can be tweaked:

- Disable address space randomization. Create the file
  `/etc/sysctl.d/10-recodex.conf` with the content
  `kernel.randomize_va_space=0`. The change takes effect after a restart, or
  run the `sysctl kernel.randomize_va_space=0` command.
- Disable dynamic CPU frequency scaling. This requires setting the cpufreq
  scaling governor to _performance_.

#### Clone worker source code repository

```
$ git clone https://github.com/ReCodEx/worker.git
$ git submodule update --init
```

#### Install worker on Linux

The following steps assume that your current working directory is the root of
the cloned worker sources.
- Prepare the environment by running `mkdir build && cd build`
- Build the sources with `cmake ..` followed by `make`
- Build a binary package with `make package` (may require root permissions).
  Note that `rpm` and `deb` packages are built at the same time. You may need
  the `rpmbuild` command (usually in the `rpmbuild` or `rpm` package), or you
  can edit the CPACK_GENERATOR variable in the _CMakeLists.txt_ file in the
  root of the source tree.
- Install the generated package through your package manager (`yum`, `dnf`,
  `dpkg`).

The worker installation performs the following steps:

- create the config file `/etc/recodex/worker/config-1.yml`
- create the systemd unit file `/etc/systemd/system/recodex-worker@.service`
- put the main binary in `/usr/bin/recodex-worker`
- put the judges binaries in the `/usr/bin/` directory
- create the system user and group `recodex` with `/sbin/nologin` shell (if
  not already existing)
- create the log directory `/var/log/recodex`
- set the ownership of the config (`/etc/recodex`) and log
  (`/var/log/recodex`) directories to the `recodex` user and group

_Note:_ If you do not want to generate binary packages, you can simply install
the project with `make install` (as root). However, installation through your
distribution's package manager is the preferred way to keep your system clean
and manageable in the long term.

#### Install worker on Windows

There are two main prerequisites, **Windows 7** or higher and **Visual Studio
2015+**. The provided installation batch script should do all the work on a
Windows machine. Officially, only VS2015 and 32-bit compilation are supported
because of compile options hardcoded in the script; if a different VS version
or a different platform is needed, the script has to be adjusted accordingly.

The script is placed in the *install* directory alongside the supportive
scripts for UNIX systems and is named *win-build.cmd*.
Provided script will do almost -all the work connected with building and dependency resolving (using -**NuGet** package manager and `msbuild` building system). Script should be -run under 32-bit version of _Developer Command Prompt for VS2015_ and from -*install* directory. - -Building and installing of worker is then quite simple, script has command line -parameters which can be used to specify what will be done: - -- *-build* -- It is the default options if none specified. Builds worker and its - tests, all is saved in *build* folder and subfolders. -- *-clean* -- Cleanup of downloaded NuGet packages and built - application/libraries. -- *-test* -- Build worker and run tests on compiled test cases. -- *-package* -- Generation of clickable installation using cpack and - [NSIS](http://nsis.sourceforge.net/) (has to be installed on machine to get - this to work). - -``` -install> win-build.cmd # same as: win-build.cmd -build -install> win-build.cmd -clean -install> win-build.cmd -test -install> win-build.cmd -package -``` - -All build binaries and cmake temporary files can be found in *build* folder, -classically there will be subfolder *Release* which will contain compiled -application with all needed dlls. Once if clickable installation binary is -created, it can be found in *build* folder under name -*recodex-worker-VERSION-win32.exe*. Sample screenshot can be found on following -picture. - -![NSIS Installation](https://github.com/ReCodEx/wiki/blob/master/images/nsis_installation.png) - -#### Usage - -A systemd unit file is distributed with the worker to simplify its launch. It -integrates worker nicely into your Linux system and allows you to run it -automatically on system startup. It is possible to have more than one worker on -every server, so the provided unit file is a template. Each instance of the -worker unit has a unique string identifier, which is used for managing that -instance through systemd. 
By default, only one worker instance is ready to use after installation; its
ID is "1".

- The worker with ID "1" can be started this way:
```
# systemctl start recodex-worker@1.service
```
Check whether the worker is running with
```
# systemctl status recodex-worker@1.service
```
You should see an "active (running)" message.

- The worker can be stopped or restarted with the `systemctl stop` and
  `systemctl restart` commands.
- If you want the worker to start after system boot, run:
```
# systemctl enable recodex-worker@1.service
```
For further information about systemd, please refer to the systemd
documentation.

##### Adding new worker

To add a new worker you need to do a few steps:

- Choose a unique string ID.
- Copy the default configuration file `/etc/recodex/worker/config-1.yml` to
  the same directory and name it `config-.yml`
- Edit that config file to fit your needs. Note that at least the _worker-id_
  and _logger file_ values must be unique.
- Run the new instance using
```
# systemctl start recodex-worker@.service
```

### Broker

#### Dependencies

The broker has basic dependencies similar to the worker's; to recapitulate:

- ZeroMQ, version at least 4.0; packages `zeromq` and `zeromq-devel`
  (`libzmq3-dev` on Debian)
- YAML-CPP library; `yaml-cpp` and `yaml-cpp-devel` (`libyaml-cpp0.5v5` and
  `libyaml-cpp-dev` on Debian)
- libcurl library; `libcurl-devel` (`libcurl4-gnutls-dev` on Debian)

#### Clone broker source code repository

```
$ git clone https://github.com/ReCodEx/broker.git
$ git submodule update --init
```

#### Installation itself

Installing the broker performs the following steps on your computer:

- create the config file `/etc/recodex/broker/config.yml`
- create the _systemd_ unit file `/etc/systemd/system/recodex-broker.service`
- put the main binary in `/usr/bin/recodex-broker`
- create the system user and group `recodex` with nologin shell (if not
  already existing)
- create the log directory `/var/log/recodex`
- set the ownership of the config (`/etc/recodex`) and log
  (`/var/log/recodex`) directories to the `recodex` user and group

The following steps assume that your current working directory is the root of
the cloned broker sources.

- Prepare the environment by running `mkdir build && cd build`
- Build the sources with `cmake ..` followed by `make`
- Build a binary package with `make package` (may require root permissions).
  Note that `rpm` and `deb` packages are built at the same time. You may need
  the `rpmbuild` command (usually in the `rpmbuild` or `rpm` package), or you
  can edit the CPACK_GENERATOR variable in the _CMakeLists.txt_ file in the
  root of the source tree.
- Install the generated package through your package manager (`yum`, `dnf`,
  `dpkg`).

_Note:_ If you do not want to generate binary packages, you can simply install
the project with `make install` (as root). However, installation through your
distribution's package manager is the preferred way to keep your system clean
and manageable in the long term.
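The installation steps above register a `recodex-broker.service` unit whose
exact contents are generated by the package. Purely as an illustration, a
minimal unit of this shape might look as follows; the `ExecStart` flag and
paths are assumptions based on the installation steps above, not the
documented CLI:

```{.ini}
# Hypothetical sketch of /etc/systemd/system/recodex-broker.service.
# The real file is installed by the package and may differ.
[Unit]
Description=ReCodEx broker
After=network.target

[Service]
Type=simple
User=recodex
Group=recodex
# The -c flag pointing at the default config location is an assumption.
ExecStart=/usr/bin/recodex-broker -c /etc/recodex/broker/config.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```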

#### Usage

Running the broker is very similar to the worker setup; a systemd unit file is
also provided for convenience. There is only one broker per ReCodEx instance,
so there is no need for systemd templates.

- The broker can be started with the following command:
```
# systemctl start recodex-broker.service
```
Check whether the broker is running with
```
# systemctl status recodex-broker.service
```
You should see an "active (running)" message.

- The broker can be stopped or restarted with the `systemctl stop` and
  `systemctl restart` commands.
- If you want the broker to start after system boot, run:
```
# systemctl enable recodex-broker.service
```

For further information about systemd, please refer to the systemd
documentation.

### Fileserver

To install and use the fileserver, Python3 with the `pip` package manager is
required to install the dependencies. From the cloned repository, run the
following command:

```
$ pip install -r requirements.txt
```

That is it; the fileserver does not need any special installation. It is
possible to build and install an _rpm_ package, or to install it without
packaging the same way as the monitor, but this is optional. The installation
would provide the script `recodex-fileserver` in your `PATH`. No systemd unit
files are provided, because the configuration and usage of the fileserver
component differs substantially from our other Python parts.

#### Usage

There are several ways of running the ReCodEx fileserver. We will cover three
typical use cases.

##### Running in development mode

For simple development usage, the fileserver can be run from the command line.
The allowed options are described below.

```
usage: fileserver.py [--directory WORKING_DIRECTORY]
                     {runserver,shell} ...
```

- The **runserver** argument starts the Flask development server (i.e.
  `app.run()`). An optional additional argument specifies the port number.
-- **shell** argument instructs Flask to run a Python shell inside application - context. - -Simple development server on port 9999 can be run as - -``` -$ python3 fileserver.py runserver 9999 -``` - -When run like this command, the fileserver creates a temporary directory where -it stores all the files and which is deleted when it exits. - -##### Running as WSGI script in a web server - -If you need features such as HTTP authentication (recommended) or efficient -serving of static files, it is recommended to run the app in a full-fledged web -server (such as Apache or Nginx) using WSGI. Apache configuration can be -generated by `mkconfig.py` script from the repository. - -``` -usage: mkconfig.py apache [-h] [--port PORT] --working-directory - WORKING_DIRECTORY [--htpasswd HTPASSWD] - [--user USER] -``` - -- **port** -- port where the fileserver should listen -- **working_directory** -- directory where the files should be stored -- **htpasswd** -- path to user file for HTTP Basic Authentication -- **user** -- user under which the server should be run - -##### Running using uWSGI - -Another option is to run fileserver as a standalone app via uWSGI service. Setup -is also quite simple, configuration file can be also generated by `mkconfig.py` -script. - -1. (Optional) Create a user for running the fileserver -2. Make sure that your user can access your clone of the repository -3. Run `mkconfig.py` script. - ``` - usage: mkconfig.py uwsgi [-h] [--user USER] [--port PORT] - [--socket SOCKET] - --working-directory WORKING_DIRECTORY - ``` - - - **user** -- user under which the server should be run - - **port** -- port where the fileserver should listen - - **socket** -- path to UNIX socket where the fileserver should listen - - **working_directory** -- directory where the files should be stored - -4. Save the configuration file generated by the script and run it with uWSGI, - either directly or using systemd. This depends heavily on your distribution. -5. 
To integrate this with another web server, see the [uWSGI - documentation](http://uwsgi-docs.readthedocs.io/en/latest/WebServers.html) - -Note that the ways distributions package uWSGI can vary wildly. In Debian 8 it -is necessary to convert the configuration file to XML and make sure that the -python3 plugin is loaded instead of python. This plugin also uses Python 3.4, -even though the rest of the system uses Python 3.5 - make sure to install -dependencies for the correct version. - ### Monitor For monitor functionality there are some required packages. All of them are @@ -575,216 +230,6 @@ $ recodex-monitor -c /etc/recodex/monitor/config.yml ``` -### Cleaner - -To install and use the cleaner, it is necessary to have Python3 with package -manager `pip` installed. - -- Dependencies of cleaner has to be installed: -``` -$ pip install -r requirements.txt -``` -- RPM distributions can make and install binary package. This can be done like - this: -``` -$ python setup.py bdist_rpm --post-install ./cleaner/install/postinst -``` -- Installing generated package using YUM: -``` -# yum install ./dist/recodex-cleaner--1.noarch.rpm -``` -- Other Linux distributions can install cleaner straight -``` -$ python setup.py install --install-scripts /usr/bin -# ./cleaner/install/postinst -``` -- For Windows installation do following: - - start `cmd` with administrator permissions - - run installation with - ``` - > python setup.py install --install-scripts \ - "C:\Program Files\ReCodEx\cleaner" - ``` - where path specified with `--install-scripts` can be changed - - copy configuration file alongside with installed executable using - ``` - > copy install\config.yml \ - "C:\Program Files\ReCodEx\cleaner\config.yml" - ``` - -#### Usage - -As stated before cleaner should be croned, on linux systems this can be done by -built in `cron` service or if there is `systemd` present cleaner itself provides -`*.timer` file which can be used for croning from `systemd`. 
On Windows systems, the internal scheduler should be used.

- Running the cleaner from the command line is fairly simple:
```
$ recodex-cleaner -c /etc/recodex/cleaner
```
- Enable the cleaner service using systemd:
```
$ systemctl start recodex-cleaner.timer
```
- Add the cleaner to the Linux cron service using the following configuration
  line:
```
0 0 * * * /usr/bin/recodex-cleaner -c /etc/recodex/cleaner/config.yml
```
- Add the cleaner to the Windows scheduler service with the following command:
```
> schtasks /create /sc daily /tn "ReCodEx Cleaner" /tr \
    "\"C:\Program Files\ReCodEx\cleaner\recodex-cleaner.exe\" \
    -c \"C:\Program Files\ReCodEx\cleaner\config.yml\""
```

### REST API

The web API requires a PHP runtime, version at least 7. Which one to use
depends on the actual configuration; there is a choice between _mod_php_
inside Apache, _php-fpm_ with an Apache or Nginx proxy, or running it as a
standalone uWSGI script. Some PHP extensions also have to be installed on the
system, namely the ZeroMQ binding (`php-zmq` package or similar), the MySQL
module (`php-mysqlnd` package) and the LDAP extension module for CAS
authentication (`php-ldap` package). Make sure that the extensions are loaded
in your `php.ini` file (`/etc/php.ini` or files in `/etc/php.d/`).

The API depends on some other projects and libraries, which are managed with
[Composer](https://getcomposer.org/). Composer can be installed from system
repositories or downloaded from its website, where detailed instructions can
be found as well. It reads the `composer.json` file in the project root and
installs the dependencies into the `vendor/` subdirectory. To do that, run:

```
$ composer install
```

#### Database preparation

When the API is installed and configured (the _doctrine_ section is sufficient
here), the database schema can be generated.
There is a prepared command to do that from the command line:

```
$ php www/index.php orm:schema-tool:update --force
```

The API also comes with some initial values, for example default user roles
with proper permissions. To fill your database with these values, there is
another command line command:

```
$ php www/index.php db:fill
```

Check the outputs of both commands for errors. If there are any, try cleaning
the temporary API cache in the `temp/cache/` directory and repeat the action.

#### Webserver configuration

The simplest way to get started is to start the built-in PHP server in the
root directory of your project:

```
$ php -S localhost:4000 -t www
```

Then visit `http://localhost:4000` in your browser to see the welcome page of
the API project.

For Apache or Nginx, set up a virtual host pointing to the `www/` directory of
the project and you should be ready to go. It is **critical** that the whole
`app/`, `log/` and `temp/` directories are not accessible directly via a web
browser (see the [security warning](https://nette.org/security-warning)). It
is also **highly recommended** to set up an HTTPS certificate for public
access to the API.

#### Troubleshooting

In case of any issues, first remove the Nette cache directory `temp/cache/`
and try again; this solves most of the errors. If it does not help, examine
the API logs in the `log/` directory of the API source, or the logs of your
webserver.

### Web application

The web application requires a [NodeJS](https://nodejs.org/en/) server as its
runtime environment. This runtime executes JavaScript code on the server and
sends pre-rendered parts of pages to clients, so the final rendering in
browsers is much quicker and the pages are accessible to search engines for
indexing.

Some functionality is, however, better handled by full-fledged web servers
like *Apache* or *Nginx*, so the common practice is to use a tandem of both.
*NodeJS* takes care of the basic functionality of the app, while the other
server (Apache or Nginx) acts as a reverse proxy and provides additional
functionality such as SSL encryption, load balancing or caching of static
files. The recommended setup therefore contains both NodeJS and one of the
Apache and Nginx web servers.

Stable versions of the 4th and 6th series of the NodeJS server are sufficient,
but using at least the 6th series is highly recommended. Please check the most
recent version of the packages in your distribution's repositories; they are
often outdated. However, there are third-party repositories for all major
Linux distributions.

The app depends on several libraries and components, all of which are listed
in the `package.json` file in the source repository. Dependencies are managed
with the node package manager (`npm`), which may come with the NodeJS
installation or can be installed separately. To fetch and install all
dependencies, run:

```
$ npm install
```

For easy production usage, there is an additional package for managing NodeJS
processes, `pm2`. This tool can run your application as a daemon, monitor
occupied resources, gather logs and provide a simple console interface for
managing the state of the app. To install it globally into your system, run:

```
# npm install pm2 -g
```

#### Usage

The application can be run in two modes, development and production.
Development mode uses only client-side rendering and tracks code changes,
rebuilding the application in real time. In production mode, the compilation
(transpiling to the _ES5_ standard using *Babel* and bundling into a single
file using *webpack*) has to be done separately, prior to running. The scripts
for compilation are provided as additional `npm` commands.

- Development mode can be used for local testing of the app. This mode uses
  the webpack dev server, so all code runs on the client; no server-side
  rendering is available.
Starting is simple command, default address is - http://localhost:8080. -``` -$ npm run dev -``` -- Production mode is mostly used on the servers. It provides all features such - as server side rendering. This can be run via: -``` -$ npm run build -$ npm start -``` - -Both modes can be configured to use different ports or set base address of used -API server. This can be configured in `.env` file in root of the repository. -There is `.env-sample` file which can be just copied and altered. - -The production mode can be run also as a demon controlled by `pm2` tool. First -the web application has to be built and then the server javascript file can run -as a daemon. - -``` -$ npm run build -$ pm2 start bin/server.js -``` - -The `pm2` tool has several options, most notably _status_, _stop_, _restart_ and -_logs_. Further description is available on project -[website](http://pm2.keymetrics.io). - - ## Security One of the most important aspects of ReCodEx instance is security. It is crucial diff --git a/System-configuration.md b/System-configuration.md index eb877c0..bd7b59c 100644 --- a/System-configuration.md +++ b/System-configuration.md @@ -3,210 +3,6 @@ This section describes configuration of ReCodEx components. Bold items in lists describing the values are mandatory, italic ones are optional. -## Worker - -Worker have a default configuration which is applied to worker itself or is used -in given jobs (implicitly if something is missing, or explicitly with special -variables). This configuration is hardcoded in worker sources and can be -rewritten by explicitly declared configuration file. Format of this -configuration is YAML file with similar structure as job configuration. The -default location is `/etc/recodex/worker/config-.yml` where `N` is identifier -of the particular worker instance. - -### Configuration items - -- **worker-id** -- unique identification of worker at one server. 
This ID is
  used by the _isolate_ sandbox on Linux systems, so make sure to meet
  isolate's requirements (by default a number from 1 to 999).
- _worker-description_ -- human readable description of this worker
- **broker-uri** -- URI of the broker (hostname or IP address, including port,
  ...)
- _broker-ping-interval_ -- how often to send ping messages to the broker, in
  milliseconds
- _max-broker-liveness_ -- how many pings in a row the broker can miss before
  the worker considers it dead
- _headers_ -- map of headers specifying the worker's capabilities
  - _env_ -- list of supported environments (e.g. `c`, `cpp`) which are sent
    to the broker in the init command
  - _threads_ -- information about threads available on this worker
- **hwgroup** -- hardware group of this worker. The hardware group must
  specify the worker's hardware and software capabilities; it is the main item
  for the broker's routing decisions.
- _working-directory_ -- where all needed files will be stored. Can be the
  same for multiple workers on one server.
- **file-managers** -- addresses and credentials of all file managers used
  (i.e. all the different frontends using this worker)
  - **hostname** -- URI of the file manager
  - _username_ -- username for HTTP authentication (if needed)
  - _password_ -- password for HTTP authentication (if needed)
- _file-cache_ -- configuration of the caching feature
  - _cache-dir_ -- path to the caching directory. Can be the same for multiple
    workers.
- _logger_ -- settings of the logging capabilities
  - _file_ -- path to the log file, without suffix. The value
    `/var/log/recodex/worker` will produce `worker.log`, `worker.1.log`, ...
  - _level_ -- logging level, one of `off`, `emerg`, `alert`, `critical`,
    `err`, `warn`, `notice`, `info` and `debug`
  - _max-size_ -- maximal size of the log file before rotating, in bytes
  - _rotations_ -- number of rotations kept
- _limits_ -- default sandbox limits for this worker.
All items are described in - assignments section in job configuration description. If some limits are not - set in job configuration, defaults from worker config will be used. In such - case the worker's defaults will be set as the maximum for the job. Also, - limits in job configuration cannot exceed limits from worker. -- **max-output-length** -- used for `tasks.{task}.sandbox.output` option, defined - in bytes, applied to both stdout and stderr and is not divided, both will get - this value -- **cleanup-submission** -- if set to true, then files produced during evaluation - of submission will be deleted at the end, extra caution is advised because this - setup can cause extensive disk usage - -### Example config file - -```{.yml} -worker-id: 1 -broker-uri: tcp://localhost:9657 -broker-ping-interval: 10 # milliseconds -max-broker-liveness: 10 -headers: - env: - - c - - cpp - threads: 2 -hwgroup: "group1" -working-directory: /tmp/recodex -file-managers: - - hostname: "http://localhost:9999" # port is optional - username: "" # can be ignored in specific modules - password: "" # can be ignored in specific modules -file-cache: # only in case that there is cache module - cache-dir: "/tmp/recodex/cache" -logger: - file: "/var/log/recodex/worker" # w/o suffix - actual names will - # be worker.log, worker.1.log,... - level: "debug" # level of logging - max-size: 1048576 # 1 MB; max size of file before log rotation - rotations: 3 # number of rotations kept -limits: - time: 5 # in secs - wall-time: 6 # seconds - extra-time: 2 # seconds - stack-size: 0 # normal in KB, but 0 means no special limit - memory: 50000 # in KB - parallel: 1 - disk-size: 50 - disk-files: 5 - environ-variable: - ISOLATE_BOX: "/box" - ISOLATE_TMP: "/tmp" - bound-directories: - - src: /tmp/recodex/eval_5 - dst: /evaluate - mode: RW,NOEXEC -``` - -### Isolate sandbox - -New feature of the 1.3 version is a possibility of limit Isolate box to one or -more CPU or memory nodes. 
This functionality is provided by the _cpusets_ kernel
mechanism and is now integrated into Isolate. Only `cpuset.cpus` and
`cpuset.mems` can be set, which should be sufficient for sandbox purposes. As
this is kernel functionality, further description can be found in the manual
page of _cpuset_ or in the Linux documentation, section
`linux/Documentation/cgroups/cpusets.txt`. As previously stated, these
settings can be applied to particular Isolate boxes and have to be written in
the Isolate configuration. The standard configuration path should be
`/usr/local/etc/isolate`, but it may depend on your installation process. The
_cpuset_ configuration there is described in the example below.

```
box0.cpus = 0     # assign processor with ID 0 to isolate box with ID 0
box0.mems = 0     # assign memory node with ID 0
# if not set, linux will decide by itself where the
# sandboxed programs should run
box2.cpus = 1-3   # assign a range of processors to isolate box 2
box2.mems = 4-7   # assign a range of memory nodes
box3.cpus = 1,2,3 # assign a list of processors to isolate box 3
```

- _cpuset.cpus:_ restricts the sandboxed program to the processor threads set
  in the configuration. On hyperthreaded processors, this means that all
  virtual threads are assignable, not only the physical ones. The value can be
  a single number, a comma-separated list of numbers, or a range with a hyphen
  delimiter.
- _cpuset.mems:_ particularly handy on NUMA systems, which have several memory
  nodes. On standard desktop computers, this value should always be zero
  because only one independent memory node is present. As with the `cpus`
  limitation, the value can be a single number, a comma-separated list, or a
  hyphenated range.

## Broker

The default location of the broker configuration file is
`/etc/recodex/broker/config.yml`.
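Before going through the individual items, it may help to see how such a
config is consumed once parsed. The sketch below is illustrative only -- the
`missing_keys` helper and its key list are not ReCodEx code; the list simply
mirrors the `clients`/`workers` sections described on this page, and the
config is assumed to have been loaded into a Python dict (e.g. with PyYAML's
`safe_load`):

```python
# Illustrative check that a parsed broker config contains the
# sections described below; this helper is not part of ReCodEx.

def missing_keys(config):
    """Return dotted paths of expected broker settings absent from config."""
    expected = ["clients.address", "clients.port",
                "workers.address", "workers.port"]
    missing = []
    for path in expected:
        node = config
        for part in path.split("."):
            # Descend one level per dotted component; bail out on a miss.
            if not isinstance(node, dict) or part not in node:
                missing.append(path)
                break
            node = node[part]
    return missing

# Mirrors the example config file shown further below.
sample = {
    "clients": {"address": "*", "port": 9658},
    "workers": {"address": "*", "port": 9657},
}
print(missing_keys(sample))  # []
```

A config missing a subsection (say, `workers`) would yield the corresponding
dotted paths, which makes a startup error message easy to produce.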
- -### Configuration items - -- _clients_ -- specifies address and port to bind for clients (frontend - instance) - - _address_ -- hostname or IP address as string (`*` for any) - - _port_ -- desired port -- _workers_ -- specifies address and port to bind for workers - - _address_ -- hostname or IP address as string (`*` for any) - - _port_ -- desired port - - _max_liveness_ -- maximum amount of pings the worker can fail to send - before it is considered disconnected - - _max_request_failures_ -- maximum number of times a job can fail (due to - e.g. worker disconnect or a network error when downloading something from - the fileserver) and be assigned again -- _monitor_ -- settings of monitor service connection - - _address_ -- IP address of running monitor service - - _port_ -- desired port -- _notifier_ -- details of connection which is used in case of errors and good - to know states - - _address_ -- address where frontend API runs - - _port_ -- desired port - - _username_ -- username which can be used for HTTP authentication - - _password_ -- password which can be used for HTTP authentication -- _logger_ -- settings of logging capabilities - - _file_ -- path to the logging file with name without suffix. - `/var/log/recodex/broker` item will produce `broker.log`, `broker.1.log`, - ... 
- - _level_ -- level of logging, one of `off`, `emerg`, `alert`, `critical`, - `err`, `warn`, `notice`, `info` and `debug` - - _max-size_ -- maximal size of log file before rotating - - _rotations_ -- number of rotation kept - -### Example config file - -```{.yml} -# Address and port for clients (frontend) -clients: - address: "*" - port: 9658 -# Address and port for workers -workers: - address: "*" - port: 9657 - max_liveness: 10 - max_request_failures: 3 -monitor: - address: "127.0.0.1" - port: 7894 -notifier: - address: "127.0.0.1" - port: 8080 - username: "" - password: "" -logger: - file: "/var/log/recodex/broker" # w/o suffix - actual names will be - # broker.log, broker.1.log, ... - level: "debug" # level of logging - max-size: 1048576 # 1 MB; max size of file before log rotation - rotations: 3 # number of rotations kept -``` - ## Monitor Configuration file is located in directory `/etc/recodex/monitor/` by default. @@ -251,24 +47,6 @@ logger: ... ``` -## Cleaner - -The default location for cleaner configuration file is -`/etc/recodex/cleaner/config.yml`. - -### Configuration items - -- **cache-dir** -- directory which cleaner manages -- **file-age** -- file age in seconds which is considered as outdated and will - be marked for deletion - -### Example configuration - -```{.yml} -cache-dir: "/tmp" -file-age: "3600" # in seconds -``` - ## REST API The API can be configured in `config.neon` and `config.local.neon` files in @@ -422,32 +200,6 @@ doctrine: dbname: "recodex-api" ``` -## Web application - -The location for configuration of the web application is in root of the project -source tree. The name have to be `.env` and can be created by copying template -`.env-example` file. - -### Configurable items - -Description of configurable options. Bold are required values, optional ones are -in italics. 
- -- **NODE_ENV** -- mode of the server -- **API_BASE** -- base address of API server, including port and API version -- **PORT** -- port where the app is listening -- _WEBPACK_DEV_SERVER_PORT_ -- port for webpack dev server when running in - development mode. Default one is 8081, this option might be useful when this - port is necessary for some other service. - -### Example configuration file - -```{.ini} -NODE_ENV=production -API_BASE=https://recodex.projekty.ms.mff.cuni.cz:4000/v1 -PORT=8080 -``` -