More typos

master
Petr Stefan 8 years ago
parent d12fde67dc
commit 884b43e528

@@ -6,7 +6,7 @@ annotated with data provided by ORM framework. In the following text there is a
 brief description of each such entity. There may be some additional database
 tables for many-to-many relations between entities, but these tables are not
 discussed in detail because there are only two columns with keys of both
-related tables in each one. Graphical database schema is added as attachement
+related tables in each one. Graphical database schema is added as attachment
 to this documentation.

 ## Assignment
@@ -28,7 +28,7 @@ assigned. Other columns are:
 - _maxPointsBeforeFirstDeadline_, _maxPointsBeforeSecondDeadline_ -- maximal
   amount of points for correct solutions for first (respectively second)
   deadline
-- _scoreCalculator_ -- name of used score calculator or NULL for API's
+- _scoreCalculator_ -- name of used score calculator or NULL for the
   default one
 - _canViewLimitRatios_ -- flag if student can view percentage of used time and
   memory limits for each test
@@ -71,7 @@ inheritance mechanism, but it's currently not being used.
 - _version_ -- version of the exercise; each edit increases this counter
 - _createdAt_ -- date and time of creation
 - _updatedAt_ -- date and time of last modification
-- _difficulty_ -- description of difficulty, curretly used ones are easy,
+- _difficulty_ -- description of difficulty, currently used ones are easy,
   medium and hard
 - _isPublic_ -- flag if the exercise is publicly visible to all supervisors or
   only to author
@@ -93,7 +93,7 @@ ExternalLogin entity provides mapping between username in external login
 service and local user account.

 - _user_ -- key to local user entity
-- _authService_ -- identifier which extrnal service is the user using
+- _authService_ -- identifier which external service is the user using
 - _externalId_ -- username in that particular external instance

 ## ForgottenPassword
@@ -167,7 +167,7 @@ System administrator should keep the data up-to-date with backend state.
 ## Instance

-Instance entity represents separation of multiple organisations using the same
+Instance entity represents separation of multiple organizations using the same
 API server. Each group or user belong to exactly one instance. User permissions
 are only valid in his instance.
@@ -175,10 +175,10 @@ are only valid in his instance.
 - _name_ -- visible name of the instance
 - _description_ -- Markdown styled textual description
 - _isOpen_ -- flag if new user can register themselves in this instance
-- _isAllowed_ -- flag if the instance cen be used or is disabled for all users
+- _isAllowed_ -- flag if the instance can be used or is disabled for all users
 - _createdAt_ -- date and time of instance creation
 - _updatedAt_ -- date and time of last modification
-- _needsLicence_ -- flag if this instance needs a velid license
+- _needsLicence_ -- flag if this instance needs a valid license
 - _deletedAt_ -- instance cannot be directly deleted due to possibility of many
   dependencies, but instead it can be hidden from users by setting this field
   to date and time of deletion (active instances have NULL value)
@@ -202,7 +202,7 @@ description to users. The text can be Markdown styled.
 - _locale_ -- language code of locale this text is written in (for example
   _en_)
 - _text_ -- text content
-- _createdFrom_ -- key refering to parent localized text; when editing, new
+- _createdFrom_ -- key referring to parent localized text; when editing, new
   version is created as a child of the original one
 - _createdAt_ -- date and time of creation
@@ -211,7 +211,7 @@ description to users. The text can be Markdown styled.
 Login entity holds credentials for logging into ReCodEx system using internal
 authentication.

-- _user_ -- key refering to user entity
+- _user_ -- key referring to user entity
 - _username_ -- mail of the user
 - _passwordHash_ -- hashed password, generated by Nette
   [Passwords::hash()](https://api.nette.org/2.3/source-Security.Passwords.php.html#27)
@@ -236,7 +236,7 @@ invocation. These permission are gained to user roles in this entity.
 ReferenceExerciseSolution contains additional data for reference solution than
 normal solution created by student submit.

-- _exercise_, _solution_ -- keys refering to exercise this entity is bound to
+- _exercise_, _solution_ -- keys referring to exercise this entity is bound to
   and solution entity used for submitting
 - _uploadedAt_ -- date and time of creation
 - _description_ -- textual description of used algorithm or other note about
@@ -249,7 +249,7 @@ evalution than normal evaluation created on student solution evaluation.
 - _referenceSolution_ -- key to solution which this evaluation belongs to
 - _hwGroup_ -- hardware group which was used during execution on worker
-- _resultsUrl_ -- filserver address from where results of reference evaluation
+- _resultsUrl_ -- fileserver address from where results of reference evaluation
   can be internally accessed
 - _evaluation_ -- associated general evaluation key
@@ -293,7 +293,7 @@ the new version. On student submission one runtime config is chosen based on
 files extensions and corresponding job configuration is used for evaluation in
 the backend.

-- _name_ -- human readable identificator of runtime configuration
+- _name_ -- human readable identifier of runtime configuration
 - _runtimeEnvironment_ -- corresponding runtime environment
 - _jobConfigFilePath_ -- path to job configuration which is stored at API
   server
@@ -468,3 +468,9 @@ application.
   application
 - _defaultLanguage_ -- default language which will be used in web application
 - _openedSidebar_ -- left sidebar will be opened after login
+
+<!---
+// vim: set formatoptions=tqn flp+=\\\|^\\*\\s* textwidth=80 colorcolumn=+1:
+-->

@@ -12,7 +12,7 @@ manual installation of all components which should work on most of the Linux
 distributions.

 The distribution of our choice is CentOS, currently in version 7. It is a well
-known server distribution, derived from enterprise distrubution from Red Hat, so
+known server distribution, derived from enterprise distribution from Red Hat, so
 it is very stable and widely used system with long term support. There are
 [EPEL](https://fedoraproject.org/wiki/EPEL) additional repositories from Fedora
 project, which adds newer versions of some packages into CentOS, which allows us
@@ -37,7 +37,7 @@ require small tweaks to work properly.
 For automatic installation is used a set of Ansible scripts. Ansible is one of
 the best known and used tools for automatic server management. It is required
-only to have SSH access to the server and ansible installed on the client
+only to have SSH access to the server and Ansible installed on the client
 machine. For further reading is supposed basic Ansible knowledge. For more info
 check their [documentation](http://docs.ansible.com/ansible/intro.html).
@@ -49,7 +49,7 @@ edit two files -- set addresses of hosts and values of some variables.
 ### Hosts configuration

-First, it is needed to set IP addresses of your computers. Common practise is to
+First, it is needed to set IP addresses of your computers. Common practice is to
 have multiple files with definitions, one for development, another for
 production for example. Example configuration is in _development_ file. Each
 component of ReCodEx project can be installed on different server. Hosts can be
@@ -113,8 +113,8 @@ key-value pair per line, separated by colon. Values with brief description:
 - _broker_notifier_port_ -- Port to above, should be the same as for API itself
   (_webapi_public_port_)
 - _broker_notifier_username_ -- Username for HTTP Authentication for reports
-- _broker_notifier_password_ -- Password for HTTP Authentication for reporst
-- _monitor_websocket_addr_ -- Address, where websocket connection from monitor
+- _broker_notifier_password_ -- Password for HTTP Authentication for reports
+- _monitor_websocket_addr_ -- Address, where WebSocket connection from monitor
   will be available
 - _monitor_websocket_port_ -- Port to above.
 - _monitor_firewall_websocket_ -- Open above port in firewall, "yes" or "no".
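As an aside to the hunk above: these keys live in an Ansible variables file, one key-value pair per line. A hypothetical excerpt follows; every value is an invented placeholder, not taken from the ReCodEx sources:

```yaml
# Hypothetical Ansible vars excerpt -- all values are placeholders.
broker_notifier_username: "recodex"            # HTTP Auth username for reports
broker_notifier_password: "change-me"          # HTTP Auth password for reports
monitor_websocket_addr: "recodex.example.com"  # where monitor's WebSocket listens
monitor_websocket_port: 4567
monitor_firewall_websocket: "yes"              # open the port in the firewall
```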
@@ -216,7 +216,7 @@ $ git submodule update --init
 #### Install worker on Linux

-It is supposed that your current working directory is that one with clonned
+It is supposed that your current working directory is that one with cloned
 worker source codes.

 - Prepare environment running `mkdir build && cd build`
@@ -293,7 +293,7 @@ picture.
 A systemd unit file is distributed with the worker to simplify its launch. It
 integrates worker nicely into your Linux system and allows you to run it
 automatically on system startup. It is possible to have more than one worker on
-every server, so the provided unit file is templated. Each instance of the
+every server, so the provided unit file is a template. Each instance of the
 worker unit has a unique string identifier, which is used for managing that
 instance through systemd. By default, only one worker instance is ready to use
 after installation and its ID is "1".
@@ -308,7 +308,7 @@ Check with
 ```
 if the worker is running. You should see "active (running)" message.
-- Worker can be stopped or restarted accordigly using `systemctl stop` and
+- Worker can be stopped or restarted accordingly using `systemctl stop` and
   `systemctl restart` commands.
 - If you want to run worker after system startup, run:
 ```
@@ -394,7 +394,7 @@ Check with
 ```
 if the broker is running. You should see "active (running)" message.
-- Broker can be stopped or restarted accordigly using `systemctl stop` and
+- Broker can be stopped or restarted accordingly using `systemctl stop` and
   `systemctl restart` commands.
 - If you want to run broker after system startup, run:
 ```
@@ -614,9 +614,9 @@ $ python setup.py install --install-scripts /usr/bin
 #### Usage

-As stated before cleaner should be cronned, on linux systems this can be done by
+As stated before cleaner should be croned, on linux systems this can be done by
 built in `cron` service or if there is `systemd` present cleaner itself provides
-`*.timer` file which can be used for cronning from `systemd`. On Windows systems
+`*.timer` file which can be used for croning from `systemd`. On Windows systems
 internal scheduler should be used.

 - Running cleaner from command line is fairly simple:
@@ -631,7 +631,7 @@ $ systemctl start recodex-cleaner.timer
 ```
 0 0 * * * /usr/bin/recodex-cleaner -c /etc/recodex/cleaner/config.yml
 ```
-- Add cleaner to Windows cheduler service with following command:
+- Add cleaner to Windows scheduler service with following command:
 ```
 > schtasks /create /sc daily /tn "ReCodEx Cleaner" /tr \
 "\"C:\Program Files\ReCodEx\cleaner\recodex-cleaner.exe\" \
@@ -738,7 +738,7 @@ $ npm install
 For easy production usage there is an additional package for managing NodeJS
 processes, `pm2`. This tool can run your application as a daemon, monitor
 occupied resources, gather logs and provide simple console interface for
-managing app's state. To install it globally into your system run:
+managing state of the app. To install it globally into your system run:

 ```
 # npm install pm2 -g
@@ -771,7 +771,7 @@ Both modes can be configured to use different ports or set base address of used
 API server. This can be configured in `.env` file in root of the repository.
 There is `.env-sample` file which can be just copied and altered.

-The production mode can be run also as a demon controled by `pm2` tool. First
+The production mode can be run also as a demon controlled by `pm2` tool. First
 the web application has to be built and then the server javascript file can run
 as a daemon.

@@ -28,7 +28,7 @@ List of usable variables in a job configuration:
 - **JOB_ID** -- identification of this job
 - **SOURCE_DIR** -- directory where source codes of the job are stored
 - **EVAL_DIR** -- evaluation directory which should point inside the sandbox.
-  Note, that some existing directory must be bound inside sanbox under
+  Note, that some existing directory must be bound inside sandbox under
   **EVAL_DIR** name using _bound-directories_ directive inside limits section.
 - **RESULT_DIR** -- results from the job can be copied here, but only with
   internal copy task
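For context to the hunk above, these variables are referenced with `${...}` placeholders inside job configuration files. A hypothetical fragment follows; only the variable names come from the documentation, the task keys and file names are invented for illustration:

```yaml
# Sketch only -- structure around the variables is simplified.
- task-id: "copy_results"              # hypothetical task name
  cmd:
    bin: "cp"                          # internal copy task
    args:
      - "${EVAL_DIR}/output.txt"       # produced inside the sandbox
      - "${RESULT_DIR}/output.txt"     # RESULT_DIR accepts only internal copy
```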
@@ -82,10 +82,10 @@ test will fail anyway.
   - path and name of the archive to extract
   - directory, where the archive will be extracted
 - **Fetch task** will get a file. It can be downloaded from remote fileserver or
-  just copied from local cache if available. Calling comand is `fetch` with two
-  arguments:
+  just copied from local cache if available. Calling command is `fetch` with
+  two arguments:
   - name of the requested file without path (file sources are set up in worker
-    configuratin file)
+    configuration file)
   - path and name on the destination. Providing a different destination name
     can be used for easy rename.
 - **Copy task** can copy files and directories. Detailed info can be found on
@@ -93,7 +93,7 @@ test will fail anyway.
   [boost::filesystem::copy](http://www.boost.org/doc/libs/1_60_0/libs/filesystem/doc/reference.html#copy).
   Calling command is `cp` and require two arguments:
   - path and name of source target
-  - path and name of destination targer
+  - path and name of destination target
 - **Make directory task** can create arbitrary number of directories. Calling
   command is `mkdir` and requires at least one argument. For each provided
   argument will be called
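The `fetch` and `mkdir` commands described in the hunks above might be invoked from a job configuration like this. A hedged sketch: the argument counts match the documentation, while `task-id` values and file names are invented:

```yaml
# Hypothetical task list fragment; concrete names are placeholders.
- task-id: "fetch_input"
  cmd:
    bin: "fetch"                      # two args: cached file name, destination
    args:
      - "input.txt"
      - "${SOURCE_DIR}/input.txt"
- task-id: "make_dirs"
  cmd:
    bin: "mkdir"                      # at least one arg, one per directory
    args:
      - "${RESULT_DIR}/logs"
```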
@@ -160,8 +160,8 @@ in the job config file.
   ```
   Usage: recodex-judge-filter [inputFile [outputFile]]
   ```
-  - if `outputFile` is ommited, std. output is used instead.
-  - if both files are ommited, application uses std. input and output.
+  - if `outputFile` is omitted, std. output is used instead.
+  - if both files are omitted, application uses std. input and output.

 - **recodex-judge-shuffle** is for judging results with semantics of a set,
   where ordering is not important. Two files are compared with no regards for
@@ -170,8 +170,8 @@ in the job config file.
   Usage: recodex-judge-shuffle [-[n][i][r]] <file1> <file2>
   ```
   - `-n` ignore newlines (newline is considered only a whitespace)
-  - `-i` ignore items order on the row (tokens on each row may be permutated)
-  - `-r` ignore order of rows (rows may be permutated); this option has no
+  - `-i` ignore items order on the row (tokens on each row may be permuted)
+  - `-r` ignore order of rows (rows may be permuted); this option has no
     effect when `-n` is used

 ## Configuration items
@@ -200,7 +200,7 @@ in the job config file.
   - _type_ -- type of the task, can be omitted, default value is _inner_ --
     possible values are: _inner_, _initiation_, _execution_, _evaluation_.
     Each logical test must contain 0 or more _initiation_ tasks, at least one
-    task of type _execution_ (time and memory limits exceeded are presentet to
+    task of type _execution_ (time and memory limits exceeded are presented to
     user) and exactly one of type _evaluation_ (typicaly judge). _Inner_ task
     type is mainly for internal tasks, but can be used for external tasks,
     which are not part of any test.
@@ -208,11 +208,11 @@ in the job config file.
     defined task is automatically external
     - **name** -- name of used sandbox
     - _stdin_ -- file to which standard input will be redirected, can be
-      omitted; job variables can be used, usualy `${EVAL_DIR}`
+      omitted; job variables can be used, usually `${EVAL_DIR}`
     - _stdout_ -- file to which standard output will be redirected, can be
-      omitted; job variables can be used, usualy `${EVAL_DIR}`
+      omitted; job variables can be used, usually `${EVAL_DIR}`
     - _stderr_ -- file to which error output will be redirected, can be
-      omitted; job variables can be used, usualy `${EVAL_DIR}`
+      omitted; job variables can be used, usually `${EVAL_DIR}`
     - _limits_ -- list of limits which can be passed to sandbox, can be
       omitted, in that case defaults will be used
       - **hw-group-id** -- determines specific limits for specific
@@ -236,7 +236,7 @@ in the job config file.
       configuration. Contains 3 suboptions: **src** -- source pointing
       to actual system directory (absolute path), **dst** -- destination
       inside sandbox which can have its own filesystem binding (absolute
-      path inside sanboxed directory structure) and **mode** --
+      path inside sandboxed directory structure) and **mode** --
       determines connection mode of specified directory, one of values:
       RW (allow read-write access), NOEXEC (disallow execution of
       binaries), FS (mount device-less filesystem like `/proc`), MAYBE
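Putting the sandbox-related keys from the hunks above together, one task's sandbox section might look roughly like this. All concrete values (paths, group name, file names) are invented for illustration; only the key names come from the documentation:

```yaml
# Hedged sketch of a sandbox section; values are placeholders.
sandbox:
  name: "isolate"                     # sandbox used on Linux workers
  stdin: ${EVAL_DIR}/input.txt        # job variables allowed here
  stdout: ${EVAL_DIR}/output.txt
  limits:
    - hw-group-id: "group1"           # limits apply to this hardware group
      bound-directories:
        - src: /tmp/recodex/eval      # real directory on the host
          dst: /evaluate              # path as seen inside the sandbox
          mode: RW                    # e.g. RW, NOEXEC, FS, MAYBE
```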

@@ -16,8 +16,8 @@ of the particular worker instance.
 ### Configuration items

 - **worker-id** -- unique identification of worker at one server. This id is
-  used by _isolate_ sanbox on linux systems, so make sure to meet isolate's
-  requirements (default is number from 1 to 999).
+  used by _isolate_ sandbox on linux systems, so make sure to meet requirements
+  of the isolate (default is number from 1 to 999).
 - _worker-description_ -- human readable description of this worker
 - **broker-uri** -- URI of the broker (hostname, IP address, including port,
   ...)
@@ -26,7 +26,7 @@ of the particular worker instance.
 - _max-broker-liveness_ -- specifies how many pings in a row can broker miss
   without making the worker dead.
 - _headers_ -- map of headers specifies worker's capabilities
-  - _env_ -- list of enviromental variables which are sent to broker in init
+  - _env_ -- list of environmental variables which are sent to broker in init
     command
   - _threads_ -- information about available threads for this worker
 - **hwgroup** -- hardware group of this worker. Hardware group must specify
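The worker configuration items listed in the hunks above could be assembled into a file along these lines. This is a hedged sketch: the key names come from the documentation, while the URI, counts and group name are assumed placeholders:

```yaml
# Hypothetical worker configuration excerpt; values are placeholders.
worker-id: 1                          # also used as isolate sandbox id
worker-description: "Linux worker on the main server"
broker-uri: tcp://127.0.0.1:9657      # address and port are assumptions
max-broker-liveness: 10               # missed pings before broker counts as dead
headers:                              # worker capabilities sent to broker
  env:
    - c
    - cpp
  threads: 2
hwgroup: "group1"
```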
@@ -211,11 +211,11 @@ It is in YAML format as all of the other configurations.
 Description of configurable items, bold ones are required, italics ones are
 optional.

-- _websocket_uri_ -- URI where is the endpoint of websocket connection. Must be
+- _websocket_uri_ -- URI where is the endpoint of WebSocket connection. Must be
   visible to the clients (directly or through public proxy)
   - string representation of IP address or a hostname
   - port number
-- _zeromq_uri_ -- URI where is the endpoint of zeromq connection from broker.
+- _zeromq_uri_ -- URI where is the endpoint of ZeroMQ connection from broker.
   Could be hidden from public internet.
   - string representation of IP address or a hostname
   - port number
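Reading the two items in the hunk above as address-plus-port pairs, a monitor configuration sketch could look like this. The addresses and ports are invented, and the exact YAML shape (a two-element list per URI) is an assumption based on the sub-bullets:

```yaml
# Hypothetical monitor configuration; all values are placeholders.
websocket_uri:            # must be reachable by clients
  - "recodex.example.com" # hostname or IP address
  - 4567                  # port number
zeromq_uri:               # may stay on an internal network
  - "127.0.0.1"
  - 7894
```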
