Installation
Installation of the whole ReCodEx solution is a fairly complex process. Good Unix skills and basic knowledge of the project architecture are recommended.
There are a lot of different GNU/Linux distributions with different package management, naming conventions and version release policies, so it is impossible to cover all the possible variants. We picked one distribution which is fully supported by an automatic installation script; for the others, brief installation notes are given in each component's own chapter.
The distribution of our choice is CentOS, currently in version 7. It is a well known server distribution derived from the enterprise distribution from Red Hat, so it is a very stable and widely used system with long term support. The additional EPEL repositories from the Fedora project add newer versions of some packages to CentOS, which allows us to use a current environment. Also, RPM packages are much easier to build (for example from Python sources) and maintain.
The big rival of CentOS among server distributions is Debian. We are running one instance of ReCodEx on Debian too. You need to use the testing repositories to get reasonably recent package versions. It is easy to mess up your system this way, so create the file `/etc/apt/apt.conf` with the content `APT::Default-Release "stable";`. After you add the testing repositories to `/etc/apt/sources.list`, you can install packages from them explicitly, for example `$ sudo apt-get -t testing install gcc`.
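The Debian setup described above boils down to the following commands (a sketch; the file content and the `gcc` example come from the text, the rest is standard apt usage):

```sh
# keep "stable" as the default release so testing packages are never pulled in implicitly
$ echo 'APT::Default-Release "stable";' | sudo tee /etc/apt/apt.conf

# after adding the testing repositories to /etc/apt/sources.list:
$ sudo apt-get update
$ sudo apt-get -t testing install gcc
```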
Some components are also capable of running in a Windows environment. However, setting up a Windows OS is a bit of a pain and ReCodEx is not supposed to be run this way as a whole. Only the worker component may need to run on Windows, so we provide a clickable installer including dependencies. For the record, all components should be able to run on Windows; only the broker was not tested and may require small tweaks to work properly.
Ansible installer
A set of Ansible scripts is used for automatic installation. Ansible is one of the best known and most used tools for automatic server management. It only requires SSH access to the server and Ansible installed on the client machine. Basic Ansible knowledge is assumed for further reading; for more info check their documentation.
All Ansible scripts are located in the utils repository, in the installation directory. The Ansible files are pretty self-describing and can also be used as a template for installation on different systems. Before the installation itself, two files need to be edited -- to set the addresses of the hosts and the values of some variables.
Hosts configuration
First, the IP addresses of your computers need to be set. A common practice is to have multiple files with definitions, for example one for development and another for production. An example configuration is in the development file. Each component of the ReCodEx project can be installed on a different server. Hosts can be specified as hostnames or IP addresses, optionally with an SSH port after a colon.
A shortened example of the hosts config:
[workers]
127.0.0.1:22
[broker]
127.0.0.1:22
[all:children]
workers
broker
Variables
Configurable variables are saved in the group_vars/all.yml file. The syntax is one key-value pair per line, separated by a colon. The values with a brief description:
- `source_dir` -- Directory where all sources from GitHub are stored. Defaults to `/opt/recodex`.
- `mysql_root_password` -- Password of the MySQL root user. It will be set by the installation and saved to the `/root/.my.cnf` file.
- `mysql_recodex_username` -- MySQL username for ReCodEx API access.
- `mysql_recodex_password` -- Password for the user above.
- `admin_email` -- Email of the administrator. Used when configuring the Apache webserver.
- `recodex_hostname` -- Hostname where the API and web app will be accessible, for example "recodex.projekty.ms.mff.cuni.cz".
- `webapp_node_addr` -- IP address of the NodeJS server running the web app. Defaults to "127.0.0.1" and should not be changed.
- `webapp_node_port` -- Port for the above.
- `webapp_public_addr` -- Public address where the web server for the web app will listen. Defaults to "*".
- `webapp_public_port` -- Port for the above.
- `webapp_firewall` -- Open the web app port in the firewall, values "yes" or "no".
- `webapi_public_endpoint` -- Public URL where the API will be running, for example "https://recodex.projekty.ms.mff.cuni.cz:4000/v1".
- `webapi_public_addr` -- Public address where the web server for the API will listen. Defaults to "*".
- `webapi_public_port` -- Port for the above.
- `webapi_firewall` -- Open the API port in the firewall, values "yes" or "no".
- `database_firewall` -- Open the database port in the firewall, values "yes" or "no".
- `broker_to_webapi_addr` -- Address where the API can reach the broker. A private one is recommended.
- `broker_to_webapi_port` -- Port for the above.
- `broker_firewall_api` -- Open the above port in the firewall, "yes" or "no".
- `broker_to_workers_addr` -- Address where the workers can reach the broker. A private one is recommended.
- `broker_to_workers_port` -- Port for the above.
- `broker_firewall_workers` -- Open the above port in the firewall, "yes" or "no".
- `broker_notifier_address` -- URL (on the API) where the broker will send notifications, for example "https://recodex.projekty.ms.mff.cuni.cz/v1/broker-reports".
- `broker_notifier_port` -- Port for the above; should be the same as for the API itself (`webapi_public_port`).
- `broker_notifier_username` -- Username for HTTP authentication of the reports.
- `broker_notifier_password` -- Password for HTTP authentication of the reports.
- `monitor_websocket_addr` -- Address where the websocket connection from the monitor will be available.
- `monitor_websocket_port` -- Port for the above.
- `monitor_firewall_websocket` -- Open the above port in the firewall, "yes" or "no".
- `monitor_zeromq_addr` -- Address where the monitor will listen on a ZeroMQ socket for reports from the broker.
- `monitor_zeromq_port` -- Port for the above.
- `monitor_firewall_zeromq` -- Open the above port in the firewall, "yes" or "no".
- `fileserver_addr` -- Address where the fileserver will serve files.
- `fileserver_port` -- Port for the above.
- `fileserver_firewall` -- Open the above port in the firewall, "yes" or "no".
- `fileserver_username` -- Username for HTTP authentication for accessing the fileserver.
- `fileserver_password` -- Password for HTTP authentication for accessing the fileserver.
- `worker_cache_dir` -- File cache storage for workers. Defaults to "/tmp/recodex/cache".
- `worker_cache_age` -- How long to keep fetched files in the worker cache, in seconds.
- `isolate_version` -- Git tag of the Isolate version the worker depends on.
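For illustration, the beginning of a `group_vars/all.yml` file might look as follows. The variable names are the ones described above; all values here are made up and must be adjusted for your deployment.

```yaml
# group_vars/all.yml -- illustrative values only
source_dir: /opt/recodex
mysql_root_password: "change-me"
mysql_recodex_username: recodex
mysql_recodex_password: "change-me-too"
admin_email: admin@example.com
recodex_hostname: recodex.example.com
webapp_node_addr: "127.0.0.1"
webapp_node_port: 8080
webapp_public_addr: "*"
webapp_public_port: 80
webapp_firewall: "yes"
```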
Installation itself
With CentOS installed on your machines and the configuration modified, it is time to run the installation.
$ ansible-playbook -i development recodex.yml
This command installs all components of ReCodEx onto the machines listed in the development file. It is possible to install only selected parts of the project; just use the component's YAML file instead of recodex.yml.
Ansible expects password-less access to the remote machines. If you do not have such a setup, use the options `--ask-pass` and `--ask-become-pass`.
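For example, when neither password-less SSH nor password-less sudo is set up, the same playbook can be run with both prompts enabled:

```sh
$ ansible-playbook -i development recodex.yml --ask-pass --ask-become-pass
```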
Manual installation
Worker
Dependencies
Worker-specific requirements are listed in this section. It covers only the basic requirements; additional runtimes or tools may be needed depending on the type of use. The package names are for CentOS unless stated otherwise.
- ZeroMQ in version at least 4.0, packages `zeromq` and `zeromq-devel` (`libzmq3-dev` on Debian)
- YAML-CPP library, packages `yaml-cpp` and `yaml-cpp-devel` (`libyaml-cpp0.5v5` and `libyaml-cpp-dev` on Debian)
- libcurl library, package `libcurl-devel` (`libcurl4-gnutls-dev` on Debian)
- libarchive library as an optional dependency. Installing it will speed up the build process, otherwise libarchive is built from source during installation. The packages are `libarchive` and `libarchive-devel` (`libarchive-dev` on Debian)
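On CentOS 7 the packages above can be installed in one go. This is only a sketch; some of the packages may come from the EPEL repository and exact names can differ between versions.

```sh
# yum install epel-release
# yum install zeromq zeromq-devel yaml-cpp yaml-cpp-devel libcurl-devel libarchive libarchive-devel
```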
Install Isolate from source (only for Linux installations)
First, we need to compile the Isolate sandbox from source and install it. The current worker is tested against version 1.3, so this version needs to be checked out. Assume that we keep source code in the `/opt/src` directory. For building the man page you need to have the `asciidoc` package installed.
$ cd /opt/src
$ git clone https://github.com/ioi/isolate.git
$ cd isolate
$ git checkout v1.3
$ make
# make install && make install-doc
For proper operation Isolate depends on several advanced features of the Linux kernel. Make sure that your kernel is compiled with `CONFIG_PID_NS`, `CONFIG_IPC_NS`, `CONFIG_NET_NS`, `CONFIG_CPUSETS`, `CONFIG_CGROUP_CPUACCT` and `CONFIG_MEMCG`. If your machine has swap enabled, also check `CONFIG_MEMCG_SWAP`. The flags your kernel was compiled with can be found in the `/boot` directory, in the file `config-` followed by your kernel version. Red Hat based distributions should have these enabled by default; on Debian you may want to add the parameters `cgroup_enable=memory swapaccount=1` to the kernel command line, which can be done by adding them to the `GRUB_CMDLINE_LINUX_DEFAULT` value in the `/etc/default/grub` file.
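The relevant flags can be checked quickly with a one-liner like this (the config file name depends on the running kernel version):

```sh
$ grep -E 'CONFIG_(PID_NS|IPC_NS|NET_NS|CPUSETS|CGROUP_CPUACCT|MEMCG)' /boot/config-$(uname -r)
```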
For better reproducibility of results, some kernel parameters can be tweaked:
- Disable address space randomization. Create the file `/etc/sysctl.d/10-recodex.conf` with the content `kernel.randomize_va_space=0`. The change takes effect after a restart, or immediately after running the `sysctl kernel.randomize_va_space=0` command.
- Disable dynamic CPU frequency scaling. This requires setting the cpufreq scaling governor to performance.
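Both tweaks might look like this on CentOS (a sketch; the `cpupower` tool is assumed to be available, e.g. from the `kernel-tools` package, and other tools can set the governor as well):

```sh
# echo 'kernel.randomize_va_space=0' > /etc/sysctl.d/10-recodex.conf
# sysctl kernel.randomize_va_space=0
# cpupower frequency-set --governor performance
```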
Clone worker source code repository
$ git clone https://github.com/ReCodEx/worker.git
$ git submodule update --init
Install worker on Linux
It is assumed that your current working directory is the one with the cloned worker source code.
- Prepare the environment by running `mkdir build && cd build`
- Build the sources with `cmake ..` followed by `make`
- Build a binary package with `make package` (may require root permissions). Note that `rpm` and `deb` packages are built at the same time. You may need to have the `rpmbuild` command available (usually in the `rpmbuild` or `rpm` package) or edit the CPACK_GENERATOR variable in the CMakeLists.txt file in the root of the source tree.
- Install the generated package through your package manager (`yum`, `dnf`, `dpkg`); the whole sequence is sketched below.
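The whole sequence from the list above, run from the cloned worker directory (the generated package file name depends on the built version):

```sh
$ mkdir build && cd build
$ cmake ..
$ make
$ make package                          # builds rpm and deb packages at the same time
# yum install ./recodex-worker-*.rpm    # or dpkg -i ./recodex-worker-*.deb on Debian
```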
The worker installation process consists of the following steps:
- create the config file `/etc/recodex/worker/config-1.yml`
- create the systemd unit file `/etc/systemd/system/recodex-worker@.service`
- put the main binary to `/usr/bin/recodex-worker`
- put the judges binaries to the `/usr/bin/` directory
- create the system user and group `recodex` with the `/sbin/nologin` shell (if not already existing)
- create the log directory `/var/log/recodex`
- set the ownership of the config (`/etc/recodex`) and log (`/var/log/recodex`) directories to the `recodex` user and group
Note: If you do not want to generate binary packages, you can just install the project with `make install` (as root). However, installation through your distribution's package manager is the preferred way to keep your system clean and manageable in the long term.
Install worker on Windows
From the beginning we were determined to support the Windows operating system, on which some of the workers may run (especially for projects in the C# programming language). Supporting Windows is quite hard and time consuming and there were several problems during development. To ensure the code keeps compiling on Windows we set up CI for Windows, named AppVeyor. However, installation should be easy thanks to the provided installation script.
There are only two additional requirements, Windows 7 or higher and Visual Studio 2015+. The provided simple installation batch script should do all the work on a Windows machine. Officially only VS2015 and 32-bit compilation are supported, because of hardcoded compile options in the installation script. If a different VS version or a different platform is needed, the script has to be changed to the appropriate values, which is simple and straightforward.
The mentioned script is placed in the install directory alongside supporting scripts for UNIX systems and is named win-build.cmd. The script will do almost all the work connected with building and dependency resolution (using the NuGet package manager and the msbuild build system). The script should be run from a 32-bit Developer Command Prompt for VS2015, from the install directory.
Building and installing the worker is then quite simple; the script has command line parameters which specify what will be done:
- -build -- The default option if none is specified. Builds the worker and its tests; everything is saved in the build folder and its subfolders.
- -clean -- Cleans up downloaded NuGet packages and the built application/libraries.
- -test -- Builds the worker and runs tests on the compiled test cases.
- -package -- Generates a clickable installer using cpack and NSIS (which has to be installed on the machine for this to work).
install> win-build.cmd # same as: win-build.cmd -build
install> win-build.cmd -clean
install> win-build.cmd -test
install> win-build.cmd -package
All built binaries and cmake temporary files can be found in the build folder; typically there will be a Release subfolder containing the compiled application with all needed DLLs. Once the clickable installer is created, it can be found in the build folder under the name recodex-worker-VERSION-win32.exe.
Usage
A systemd unit file is distributed with the worker to simplify its launch. It integrates worker nicely into your Linux system and allows you to run it automatically on system startup. It is possible to have more than one worker on every server, so the provided unit file is templated. Each instance of the worker unit has a unique string identifier, which is used for managing that instance through systemd. By default, only one worker instance is ready to use after installation and its ID is "1".
- Starting the worker with id "1" can be done this way:
# systemctl start recodex-worker@1.service
Check with
# systemctl status recodex-worker@1.service
whether the worker is running. You should see an "active (running)" message.
- The worker can be stopped or restarted using the `systemctl stop` and `systemctl restart` commands.
- If you want to run the worker after system startup, run:
# systemctl enable recodex-worker@1.service
For further information about using systemd please refer to the systemd documentation.
Adding new worker
To add a new worker you need to do a few steps (sketched below):
- Make up a unique string ID.
- Copy the default configuration file `/etc/recodex/worker/config-1.yml` into the same directory and name it `config-<your_unique_ID>.yml`.
- Edit that config file to fit your needs. Note that you must at least change the worker-id and logger file values to be unique.
- Run the new instance using
# systemctl start recodex-worker@<your_unique_ID>.service
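A sketch of the steps above for a hypothetical worker with ID "2":

```sh
# cp /etc/recodex/worker/config-1.yml /etc/recodex/worker/config-2.yml
# vim /etc/recodex/worker/config-2.yml   # change at least worker-id and the logger file path
# systemctl start recodex-worker@2.service
```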
Broker
Dependencies
The broker has similar basic dependencies as the worker; to recapitulate:
- ZeroMQ in version at least 4.0, packages `zeromq` and `zeromq-devel` (`libzmq3-dev` on Debian)
- YAML-CPP library, packages `yaml-cpp` and `yaml-cpp-devel` (`libyaml-cpp0.5v5` and `libyaml-cpp-dev` on Debian)
- libcurl library, package `libcurl-devel` (`libcurl4-gnutls-dev` on Debian)
Clone broker source code repository
$ git clone https://github.com/ReCodEx/broker.git
$ git submodule update --init
Install broker
Installation of the broker program performs the following steps on your computer:
- create the config file `/etc/recodex/broker/config.yml`
- create the systemd unit file `/etc/systemd/system/recodex-broker.service`
- put the main binary to `/usr/bin/recodex-broker`
- create the system user and group `recodex` with the nologin shell (if not already existing)
- create the log directory `/var/log/recodex`
- set the ownership of the config (`/etc/recodex`) and log (`/var/log/recodex`) directories to the `recodex` user and group
It is assumed that your current working directory is the one with the cloned broker source code.
- Prepare the environment by running `mkdir build && cd build`
- Build the sources with `cmake ..` followed by `make`
- Build a binary package with `make package` (may require root permissions). Note that `rpm` and `deb` packages are built at the same time. You may need to have the `rpmbuild` command available (usually in the `rpmbuild` or `rpm` package) or edit the CPACK_GENERATOR variable in the CMakeLists.txt file in the root of the source tree.
- Install the generated package through your package manager (`yum`, `dnf`, `dpkg`); the whole sequence is sketched below.
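The build sequence is the same as for the worker, run from the cloned broker directory (the package file name depends on the built version):

```sh
$ mkdir build && cd build
$ cmake ..
$ make
$ make package                          # builds rpm and deb packages at the same time
# yum install ./recodex-broker-*.rpm    # or dpkg -i ./recodex-broker-*.deb on Debian
```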
Note: If you do not want to generate binary packages, you can just install the project with `make install` (as root). However, installation through your distribution's package manager is the preferred way to keep your system clean and manageable in the long term.
Usage
Running the broker is very similar to the worker setup. A systemd unit file is also provided for convenient usage. There is only one broker per ReCodEx instance, so there is no need for systemd templates.
- Running the broker can be done with the following command:
# systemctl start recodex-broker.service
Check with
# systemctl status recodex-broker.service
whether the broker is running. You should see an "active (running)" message.
- The broker can be stopped or restarted using the `systemctl stop` and `systemctl restart` commands.
- If you want to run the broker after system startup, run:
# systemctl enable recodex-broker.service
For further information about using systemd please refer to the systemd documentation.
Fileserver
To install and use the fileserver, it is necessary to have Python 3 with the pip package manager installed; it is needed to install the dependencies. From the cloned repository run the following command:
$ pip install -r requirements.txt
That is it. The fileserver does not need any special installation. It is possible to build and install an RPM package, or to install it without packaging the same way as the monitor, but that is only optional. Such an installation would provide you with a `recodex-fileserver` script in your `PATH`. No systemd unit files are provided, because the configuration and usage of the fileserver component differ considerably from our other Python parts.
Configuration and usage
There are several ways of running the ReCodEx fileserver. We will cover three typical use cases.
Running in development mode
For simple development usage, it is possible to run the fileserver from the command line. The allowed options are described below.
usage: fileserver.py [--directory WORKING_DIRECTORY]
{runserver,shell} ...
- The runserver argument starts the Flask development server (i.e. `app.run()`). A port number can be given as an additional argument.
- The shell argument instructs Flask to run a Python shell inside the application context.
A simple development server on port 9999 can be run as
$ python3 fileserver.py runserver 9999
When run like this, the fileserver creates a temporary directory where it stores all the files; the directory is deleted when the fileserver exits.
Running as WSGI script in a web server
If you need features such as HTTP authentication (recommended) or efficient serving of static files, it is recommended to run the app in a full-fledged web server (such as Apache or Nginx) using WSGI. An Apache configuration can be generated by the `mkconfig.py` script from the repository.
usage: mkconfig.py apache [-h] [--port PORT] --working-directory
WORKING_DIRECTORY [--htpasswd HTPASSWD]
[--user USER]
- port -- port where the fileserver should listen
- working_directory -- directory where the files should be stored
- htpasswd -- path to user file for HTTP Basic Authentication
- user -- user under which the server should be run
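An illustrative invocation generating a configuration for port 9999 might look like this. The working directory and htpasswd paths are made up, and it is assumed the script prints the generated configuration to standard output.

```sh
$ python3 mkconfig.py apache --port 9999 --working-directory /var/recodex-fileserver \
      --htpasswd /etc/httpd/recodex.htpasswd > /etc/httpd/conf.d/recodex-fileserver.conf
```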
Running using uWSGI
Another option is to run the fileserver as a standalone app via the uWSGI service. The setup is also quite simple; the configuration file can again be generated by the `mkconfig.py` script.
- (Optional) Create a user for running the fileserver.
- Make sure that your user can access your clone of the repository.
- Run the `mkconfig.py` script:
usage: mkconfig.py uwsgi [-h] [--user USER] [--port PORT] [--socket SOCKET]
                         --working-directory WORKING_DIRECTORY
  - user -- user under which the server should be run
  - port -- port where the fileserver should listen
  - socket -- path to the UNIX socket where the fileserver should listen
  - working_directory -- directory where the files should be stored
- Save the configuration file generated by the script and run it with uWSGI, either directly or using systemd. This depends heavily on your distribution.
- To integrate this with another web server, see the uWSGI documentation.
Note that the ways distributions package uWSGI vary wildly. On Debian 8 it is necessary to convert the configuration file to XML and make sure that the python3 plugin is loaded instead of python. This plugin also uses Python 3.4, even though the rest of the system uses Python 3.5, so make sure to install dependencies for the correct version.
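An illustrative invocation of the generator for uWSGI (the user, socket path and working directory are made up; again it is assumed the configuration is printed to standard output):

```sh
$ python3 mkconfig.py uwsgi --user recodex-fileserver \
      --socket /var/run/recodex-fileserver.sock \
      --working-directory /var/recodex-fileserver > recodex-fileserver.ini
```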
Monitor
The monitor requires some packages to work. All of them are listed in the requirements.txt file in the repository and can be installed by the `pip` package manager as
$ pip install -r requirements.txt
Description of dependencies:
- zmq -- binding to ZeroMQ framework
- websockets -- framework for communication over WebSockets
- asyncio -- library for fast asynchronous operations
- pyyaml -- parsing YAML configuration files
- argparse -- parsing command line arguments
The installation will provide you with the following files:
- `/usr/bin/recodex-monitor` -- simple startup script located in PATH
- `/etc/recodex/monitor/config.yml` -- configuration file
- `/etc/systemd/system/recodex-monitor.service` -- systemd startup script
- code files installed in a location depending on your system settings, mostly into `/usr/lib/python3.5/site-packages/monitor/` or similar

The systemd script runs the monitor binary as a dedicated recodex user, so the user and group of this name are created in the postinst script. Also, ownership of the configuration file is granted to that user.
- RPM distributions can make and install a binary package. This can be done like this:
  - run the command
$ python3 setup.py bdist_rpm --post-install ./install/postinst
to generate the binary .rpm package, or download a precompiled one from the releases tab of the monitor GitHub repository (it is an architecture independent package)
  - install the package using
# yum install ./dist/recodex-monitor-<version>-1.noarch.rpm
- Other Linux distributions can install the monitor straight from the source:
$ python3 setup.py install --install-scripts /usr/bin
# ./install/postinst
Usage
The preferred way to start the monitor as a service is via systemd, as with the other parts of the ReCodEx solution.
- Running the monitor is fairly simple:
# systemctl start recodex-monitor.service
- Current state can be obtained by
# systemctl status recodex-monitor.service
You should see a green "active (running)" message.
- To set up the monitor to be started on system startup:
# systemctl enable recodex-monitor.service
Alternatively the monitor can be started directly from the command line by specifying the path to the configuration file. Note that this command will not start the monitor as a daemon.
$ recodex-monitor -c /etc/recodex/monitor/config.yml
Cleaner
To install and use the cleaner, it is necessary to have Python 3 with the `pip` package manager installed.
- The dependencies of the cleaner have to be installed:
$ pip install -r requirements.txt
- RPM distributions can make and install a binary package. This can be done like this:
$ python setup.py bdist_rpm --post-install ./cleaner/install/postinst
# yum install ./dist/recodex-cleaner-<version>-1.noarch.rpm
- Other Linux distributions can install the cleaner straight from the source:
$ python setup.py install --install-scripts /usr/bin
# ./cleaner/install/postinst
- For a Windows installation do the following:
  - start `cmd` with administrator permissions
  - run the installation with
> python setup.py install --install-scripts "C:\Program Files\ReCodEx\cleaner"
where the path specified with `--install-scripts` can be changed
  - copy the configuration file alongside the installed executable using
> copy install\config.yml "C:\Program Files\ReCodEx\cleaner\config.yml"
#### Usage
As stated before, the cleaner should be run periodically. On Linux systems this can be done by the built-in `cron` service, or, if `systemd` is present, the cleaner itself provides a `*.timer` file which can be used for scheduling from `systemd`. On Windows systems the internal task scheduler should be used.
- Running the cleaner from the command line is fairly simple:
$ recodex-cleaner -c /etc/recodex/cleaner
- Enable the cleaner service using systemd:
$ systemctl start recodex-cleaner.timer
- Add the cleaner to the Linux cron service using the following configuration line:
0 0 * * * /usr/bin/recodex-cleaner -c /etc/recodex/cleaner/config.yml
- Add the cleaner to the Windows scheduler service with the following command:
schtasks /create /sc daily /tn "ReCodEx Cleaner" /tr
""C:\Program Files\ReCodEx\cleaner\recodex-cleaner.exe"
-c "C:\Program Files\ReCodEx\cleaner\config.yml""
REST API
The web API requires a PHP runtime of version at least 7. Which one depends on the actual configuration; there is a choice between mod_php inside Apache, php-fpm with an Apache or Nginx proxy, or running it as a standalone uWSGI script. It is common that some PHP extensions have to be installed on the system, namely the ZeroMQ binding (`php-zmq` package or similar), the MySQL module (`php-mysqlnd` package) and the LDAP extension module for CAS authentication (`php-ldap` package). Make sure that the extensions are loaded in your `php.ini` file (`/etc/php.ini` or files in `/etc/php.d/`).
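On CentOS 7 the runtime and the mentioned extensions might be installed roughly like this (a sketch; `php-zmq` in particular may require the EPEL/Remi repositories or a PECL build, and package names differ between distributions):

```sh
# yum install php php-mysqlnd php-ldap php-zmq
```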
The API depends on some other projects and libraries, which are managed with Composer. It can be installed from the system repositories or downloaded from the website, where detailed instructions can be found as well. Composer reads the `composer.json` file in the project root and installs the dependencies into the `vendor/` subdirectory. To do that, run:
$ composer install
Database preparation
When the API is installed and configured (the doctrine section is sufficient here), the database schema can be generated. There is a prepared command to do that from the command line:
$ php www/index.php orm:schema-tool:update --force
The API comes with some initial values, for example default user roles with proper permissions. To fill your database with these values there is another command:
$ php www/index.php db:fill
Check the outputs of both commands for errors. If there are any, try to clean the temporary API cache in the `temp/cache/` directory and repeat the action.
Webserver configuration
The simplest way to get started is to start the built-in PHP server in the root directory of your project:
$ php -S localhost:4000 -t www
Then visit `http://localhost:4000` in your browser to see the welcome page of the API project.
For Apache or Nginx, set up a virtual host to point to the `www/` directory of the project and you should be ready to go. It is critical that the whole `app/`, `log/` and `temp/` directories are not accessible directly via a web browser (see the security warning). It is also highly recommended to set up an HTTPS certificate for public access to the API.
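A minimal Apache virtual host sketch for the API follows. The hostname and paths are illustrative, and a production setup should add TLS; the important point is that only the `www/` directory is exposed, so `app/`, `log/` and `temp/` stay unreachable from the browser.

```apache
<VirtualHost *:80>
    ServerName recodex.example.com
    # only the public www/ directory of the API is served
    DocumentRoot /var/www/recodex-api/www

    <Directory /var/www/recodex-api/www>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
```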
Troubleshooting
In case of any issues, first remove the Nette cache directory `temp/cache/` and try again. This solves most of the errors. If it does not help, examine the API logs in the `log/` directory of the API source, or the logs of your webserver.
Web application
The web application requires a NodeJS server as its runtime environment. This runtime is needed for executing JavaScript code on the server and sending pre-rendered parts of pages to clients, so the final rendering in browsers is a lot quicker and the pages are accessible to search engines for indexing.
However, some functionality is handled better by full-fledged web servers like Apache or Nginx, so the common practice is to use a tandem of both. NodeJS takes care of the basic functionality of the app while the other server (Apache) is set up as a reverse proxy providing additional functionality like SSL encryption, load balancing or caching of static files. The recommended setup therefore contains both NodeJS and one of the Apache and Nginx web servers.
Stable versions of the 4th and 6th series of the NodeJS server are sufficient; using at least the 6th series is highly recommended. Please check the most recent version of the packages in your distribution's repositories, they are often outdated. There are, however, third party repositories for all the main Linux distributions.
The app depends on several libraries and components, all of which are listed in the `package.json` file in the source repository. The node package manager (`npm`) is used to manage dependencies; it may come with the NodeJS installation, otherwise it can be installed separately. To fetch and install all dependencies run:
$ npm install
For easy production usage there is an additional package for managing NodeJS processes, `pm2`. This tool can run your application as a daemon, monitor occupied resources, gather logs and provide a simple console interface for managing the app's state. To install it globally on your system run:
# npm install pm2 -g
Usage
The application can be run in two modes, development and production. Development mode uses only client-side rendering and tracks code changes, rebuilding the application in real time. In production mode the compilation (transpiling to the ES5 standard using Babel and bundling into a single file using webpack) has to be done separately, before running the app. The scripts for compilation are provided as additional `npm` commands.
- Development mode can be used for local testing of the app. This mode uses the webpack dev server, so all code runs on the client and there is no server-side rendering available. Starting it is a simple command; the default address is http://localhost:8080.
$ npm run dev
- Production mode is mostly used on servers. It provides all features, such as server-side rendering. It can be run via:
$ npm run build
$ npm start
Both modes can be configured to use different ports or to set the base address of the API server used. This can be configured in the `.env` file in the root of the repository. There is an `.env-sample` file which can simply be copied and altered.
The production mode can also be run as a daemon controlled by the `pm2` tool. First the web application has to be built and then the server JavaScript file can be run as a daemon:
$ npm run build
$ pm2 start bin/server.js
The `pm2` tool has several options, most notably status, stop, restart and logs; see the example below. A further description is available on the project website.
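For illustration, the mentioned subcommands look like this; the process name `server` is the default pm2 derives from the `bin/server.js` script started above.

```sh
$ pm2 status             # list managed processes and their state
$ pm2 logs server        # stream logs of the web application
$ pm2 restart server     # restart the daemon, e.g. after a new build
$ pm2 stop server        # stop the daemon
```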
Security
One of the most important aspects of a ReCodEx instance is security. It is crucial to keep the gathered data safe and not to allow unauthorized users to modify restricted pieces of information. Here is a small list of recommendations to keep a running ReCodEx instance safe.
- Secure the MySQL installation. The installation script does not take any security measures, so please run at least the `mysql_secure_installation` script on the database machine.
- Get an HTTPS certificate and set it up in Apache for the web application and the API. The monitor should be proxied through the web server too, with a valid certificate. You can get a free DV certificate from Let's Encrypt. Do not forget to set up automatic renewal!
- Hide the broker, workers and fileserver behind a firewall, a private subnet or an IPsec tunnel. They do not need to be reachable from the public internet, so it is better to keep them isolated; see the sketch after this list.
- Keep your servers updated and well configured. For automatic installation of security updates on CentOS refer to the `yum-cron` package. Configure SSH and Apache to use only strong ciphers; some recommendations can be found here.
- Do not put actually used credentials on the web; for example, do not commit your passwords (in the Ansible variables file) to GitHub.
- Regularly check logs for anomalies.
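As a sketch of the firewall recommendation above, on CentOS 7 with firewalld the internal services can be restricted to a private subnet instead of being exposed publicly. The zone, subnet and port below are illustrative; use the ports you configured in the Ansible variables.

```sh
# firewall-cmd --permanent --zone=internal --add-source=10.0.0.0/24
# firewall-cmd --permanent --zone=internal --add-port=9658/tcp
# firewall-cmd --reload
```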