Advanced Installation of OTOBO with Docker Compose
This section provides some more technical insights into what is happening under the hood.
::: version List of Docker containers; 10+
Container otobo_web_1
: OTOBO web server on internal port 5000.
Container otobo_daemon_1
: OTOBO daemon. The OTOBO daemon is started and checked periodically.
Container otobo_db_1
: Runs the MariaDB database on internal port 3306.
Container otobo_elastic_1
: Elasticsearch on internal ports 9200 and 9300.
Container otobo_redis_1
: Runs Redis as a caching service.
Optional Container otobo_nginx_1
: Runs nginx as a reverse proxy for providing HTTPS support.
:::
::: version List of Docker containers; 11
Container otobo-web-1
: OTOBO web server on internal port 5000.
Container otobo-daemon-1
: OTOBO daemon. The OTOBO daemon is started and checked periodically.
Container otobo-db-1
: Runs the MariaDB database on internal port 3306.
Container otobo-elastic-1
: Elasticsearch on internal ports 9200 and 9300.
Container otobo-redis-1
: Runs Redis as a caching service.
Optional Container otobo-nginx-1
: Runs nginx as a reverse proxy for providing HTTPS support.
:::
Overview of the Docker Volumes
The Docker volumes are created on the host for persistent data. They allow starting and stopping the services without data loss. Note that containers are ephemeral and only data in the volumes is permanent.
otobo_opt_otobo
: contains /opt/otobo in the web and daemon containers.
otobo_mariadb_data
: contains /var/lib/mysql in the db container.
otobo_elasticsearch_data
: contains /usr/share/elasticsearch/data in the elastic container.
otobo_redis_data
: contains data for the redis container.
otobo_nginx_ssl
: contains the TLS files (certificate and private key); must be initialized manually.
Docker Environment Variables
In the instructions, we only performed a minimal configuration. But the .env file allows setting more variables. Here is a short list of the most important environment variables. Note that the base images support more environment variables.
MariaDB Settings
OTOBO_DB_ROOT_PASSWORD
: The root password for MariaDB. This setting is required to run the db service.
Elasticsearch Settings
Elasticsearch requires some settings for production environments. Please read https://www.elastic.co/guide/en/elasticsearch/reference/7.8/docker.html#docker-prod-prerequisites for detailed information.
OTOBO_Elasticsearch_ES_JAVA_OPTS
: Example setting: OTOBO_Elasticsearch_ES_JAVA_OPTS=-Xms512m -Xmx512m. For production environments, increase this value, up to 4g.
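Taken together, the database and Elasticsearch settings might look like this in the .env file. These are illustrative values only; the password is a placeholder, not a recommendation:

```ini
# .env fragment -- illustrative values only
OTOBO_DB_ROOT_PASSWORD=change_me_please
OTOBO_Elasticsearch_ES_JAVA_OPTS=-Xms512m -Xmx512m
```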
Web Server Settings
OTOBO_WEB_HTTP_PORT
: Set this value if the HTTP port should differ from the default port 80. If HTTPS is enabled, the HTTP port will be redirected to HTTPS.
Nginx Web Proxy Settings
: These settings are used when HTTPS is enabled.
OTOBO_WEB_HTTP_PORT
: Set this value if the HTTP port should differ from the default port 80. Will be redirected to HTTPS.
OTOBO_WEB_HTTPS_PORT
: Set this value if the HTTPS port should differ from the default port 443.
OTOBO_NGINX_SSL_CERTIFICATE
: SSL certificate for the Nginx web proxy. Example: OTOBO_NGINX_SSL_CERTIFICATE=/etc/nginx/ssl/acme.crt
OTOBO_NGINX_SSL_CERTIFICATE_KEY
: SSL key for the Nginx web proxy. Example: OTOBO_NGINX_SSL_CERTIFICATE_KEY=/etc/nginx/ssl/acme.key
Nginx Web Proxy Settings for Kerberos
: These settings are used by Nginx when Kerberos is used for single sign-on.
OTOBO_NGINX_KERBEROS_KEYTAB
: Kerberos keytab file. The default is /etc/krb5.keytab.
OTOBO_NGINX_KERBEROS_CONFIG
: Kerberos configuration file. The default is /etc/krb5.conf, usually generated from krb5.conf.template.
OTOBO_NGINX_KERBEROS_SERVICE_NAME
: Kerberos Service Name. It is not clear where this setting is actually used.
OTOBO_NGINX_KERBEROS_REALM
: Kerberos REALM. Used in /etc/krb5.conf.
OTOBO_NGINX_KERBEROS_KDC
: Kerberos kdc / AD Controller. Used in /etc/krb5.conf.
OTOBO_NGINX_KERBEROS_ADMIN_SERVER
: Kerberos admin server. Used in /etc/krb5.conf.
OTOBO_NGINX_KERBEROS_DEFAULT_DOMAIN
: Kerberos default domain. Used in /etc/krb5.conf.
NGINX_ENVSUBST_TEMPLATE_DIR
: Provides a custom Nginx configuration template directory. Offers additional flexibility.
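An illustrative .env fragment for a Kerberos setup could look as follows. The realm and host names are placeholders and must be replaced with the values of your domain:

```ini
# .env fragment -- placeholder realm and hosts
OTOBO_NGINX_KERBEROS_REALM=EXAMPLE.COM
OTOBO_NGINX_KERBEROS_KDC=kdc.example.com
OTOBO_NGINX_KERBEROS_ADMIN_SERVER=kdc.example.com
OTOBO_NGINX_KERBEROS_DEFAULT_DOMAIN=example.com
OTOBO_NGINX_KERBEROS_KEYTAB=/etc/krb5.keytab
```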
Docker Compose Settings
: These settings are used directly by Docker Compose.
COMPOSE_PROJECT_NAME
: The project name is used as a prefix for the volumes and containers. By default, this prefix is set to otobo, resulting in container names like otobo_web_1 and otobo_db_1. Change this name if you want to run more than one instance of OTOBO on the same server.
COMPOSE_PATH_SEPARATOR
: Separator for the value of COMPOSE_FILE.
COMPOSE_FILE
: Use docker-compose/otobo-base.yml as a base and add the desired extension files. E.g. docker-compose/otobo-override-http.yml or docker-compose/otobo-override-https.yml.
OTOBO_IMAGE_OTOBO, OTOBO_IMAGE_OTOBO_ELASTICSEARCH, OTOBO_IMAGE_OTOBO_NGINX, ...
: Used for specifying alternative Docker images. Useful for testing local builds or for using updated versions of the images.
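How Docker Compose combines COMPOSE_PATH_SEPARATOR and COMPOSE_FILE can be sketched with plain bash. The file list here is the default HTTP setup described above; the splitting logic mimics what Docker Compose does internally:

```shell
# Mimics how Docker Compose splits COMPOSE_FILE on COMPOSE_PATH_SEPARATOR.
COMPOSE_PATH_SEPARATOR=':'
COMPOSE_FILE='docker-compose/otobo-base.yml:docker-compose/otobo-override-http.yml'

IFS="$COMPOSE_PATH_SEPARATOR" read -r -a compose_files <<< "$COMPOSE_FILE"
for f in "${compose_files[@]}"; do
    echo "compose file: $f"
done
```

Appending a further file, e.g. custom_db.yml, to COMPOSE_FILE simply adds one more element to this list.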
Custom Configuration of the Nginx Web Proxy
The otobo_nginx_1 container provides HTTPS support by running Nginx as a reverse proxy. The Docker image running in the container consists of the official Nginx Docker image, https://hub.docker.com/_/nginx, along with an OTOBO-specific configuration of Nginx. The default OTOBO-specific configuration can be found inside the Docker image at /etc/nginx/template/otobo_nginx.conf.template. In fact, this is just a template for the final configuration.
A process provided by the base Nginx image replaces the macros in the template with the corresponding environment variables. This process is executed when the container starts. In the default template file, the following macros are used:
OTOBO_NGINX_SSL_CERTIFICATE
: For configuring SSL.
OTOBO_NGINX_SSL_CERTIFICATE_KEY
: For configuring SSL.
OTOBO_NGINX_WEB_HOST
: The internally used HTTP host.
OTOBO_NGINX_WEB_PORT
: The internally used HTTP port.
See step [4.] for using this configuration option to set up the SSL certificate.
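For orientation, a strongly simplified template fragment could use these macros as shown below. This is an illustration of the mechanism, not the actual content of otobo_nginx.conf.template:

```nginx
server {
    listen 443 ssl;

    # Replaced with the paths configured in the .env file.
    ssl_certificate     ${OTOBO_NGINX_SSL_CERTIFICATE};
    ssl_certificate_key ${OTOBO_NGINX_SSL_CERTIFICATE_KEY};

    location / {
        # Forward requests to the internal OTOBO web server.
        proxy_pass http://${OTOBO_NGINX_WEB_HOST}:${OTOBO_NGINX_WEB_PORT};
    }
}
```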
If the default macros are not sufficient, customization can go further. This can be achieved by replacing the default configuration with a customized version. It is best practice not to simply change the configuration inside the running container. In the following example commands, we use the existing configuration as a starting point.
cd /opt/otobo-docker
mkdir nginx
docker cp otobo_nginx_1:/etc/nginx/conf.d/otobo_nginx.conf /opt/otobo-docker/nginx/otobo_nginx.conf
docker exec otobo_nginx_1 rm /etc/nginx/conf.d/otobo_nginx.conf
Add the following lines to the nginx service in the file docker-compose/otobo-override-https.yml, so that the customized file is mounted over the configuration in the container:
volumes:
- /opt/otobo-docker/nginx/otobo_nginx.conf:/etc/nginx/conf.d/otobo_nginx.conf
For this, you can use your favorite text editor, for example via PuTTY or WinSCP.
nano docker-compose/otobo-override-https.yml
Now you can edit the file /opt/otobo-docker/nginx/otobo_nginx.conf. To apply the changes, the containers must be restarted:
docker-compose up -d --build
Single Sign-On with Kerberos Support in Nginx
To enable authentication with Kerberos, please base your .env file on the example file .docker_compose_env_https_kerberos. This activates the special configuration in docker-compose/otobo-override-https-kerberos.yml. This Docker Compose configuration file selects an Nginx image that supports Kerberos. It also passes some Kerberos-specific settings as environment values to the running Nginx container. These settings are listed above. As usual, the values for these settings can be specified in the .env file.
Most of these settings are used as replacement values for the template https://github.com/RotherOSS/otobo/blob/rel-10_1/scripts/nginx/kerberos/templates/krb5.conf.template. The replacement takes place during container start. In the running container, the customized configuration is then available in /etc/krb5.conf.
Providing a user-specific /etc/krb5.conf file is still possible. This can be done by mounting a volume that overwrites /etc/krb5.conf in the container, achieved by setting OTOBO_NGINX_KERBEROS_CONFIG in the .env file and enabling the mount directive in docker-compose/otobo-override-https-kerberos.yml. /etc/krb5.keytab is always installation-specific and must therefore always be mounted from the host system.
See also the Kerberos SSO installation guide.
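The keytab mount could look like this in docker-compose/otobo-override-https-kerberos.yml. The fragment is illustrative; the actual service definition in that file contains more settings:

```yaml
services:
  nginx:
    volumes:
      # /etc/krb5.keytab is installation-specific and mounted from the host.
      - /etc/krb5.keytab:/etc/krb5.keytab:ro
```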
Choosing Non-Standard Ports
By default, ports 443 and 80 serve HTTPS and HTTP respectively. There may be cases where one or both of these ports are already in use by other services. In these cases, the default ports can be overridden by specifying OTOBO_WEB_HTTP_PORT
and OTOBO_WEB_HTTPS_PORT
in the .env file.
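For example, to serve HTTP on port 8080 and HTTPS on port 8443, the .env file could contain the following lines. The port values are arbitrary examples:

```ini
OTOBO_WEB_HTTP_PORT=8080
OTOBO_WEB_HTTPS_PORT=8443
```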
Skipping the Start of Certain Services
The current Docker Compose configuration starts five services, or six if HTTPS is enabled. However, there are valid use cases where one or more of these services are not needed. The main example is when the database is to be run not as a Docker service but as an external database. In that case, only the required services are listed on the command line:
docker-compose up -d web nginx daemon redis elastic
Of course, the same goal can also be achieved by editing the docker-compose/otobo-base.yml file and removing the relevant service definitions.
Customizing OTOBO Docker Compose
Instead of editing the files under docker-compose/ and risking overwriting your own options with the next update of the otobo-docker folder, it is advisable to create an extra YAML file in which the specific services are overridden with additional options. A common example would be to make the database container accessible from the outside via port 3306. To do this, you could create an extra Docker Compose file that looks like this:
cat custom_db.yml
services:
db:
ports:
- "0.0.0.0:3306:3306"
Now we need to tell docker-compose to include our new file. To do this, you need to add your YAML file to the COMPOSE_FILE variable in the .env file, for example:
COMPOSE_FILE=docker-compose/otobo-base.yml:docker-compose/otobo-override-http.yml:custom_db.yml
Now we can use docker-compose to recreate our containers:
docker-compose stop
docker-compose up -d
With this procedure, you can customize any service or volume.
Customizing the OTOBO Docker Image
Many customizations can be made in the external volume otobo_opt_otobo, which corresponds to the /opt/otobo directory in the Docker image. This works, for example, for local Perl modules that can be installed in /opt/otobo/local. Here is an example that installs the not very useful CPAN module Acme::123.
docker exec -it ${COMPOSE_PROJECT_NAME:=otobo}_web_1 bash
pwd
/opt/otobo
cpanm -l local Acme::123
--> Working on Acme::123
--> Working on Acme::123
Fetching http://www.cpan.org/authors/id/N/NA/NATHANM/Acme-123-0.04.zip ... OK
Configuring Acme-123-0.04 ... OK
Building and testing Acme-123-0.04 ... OK
Successfully installed Acme-123-0.04
1 distribution installed
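The ${COMPOSE_PROJECT_NAME:=otobo} expansion in the docker exec command above falls back to the default project name when the variable is unset. The mechanism can be demonstrated in plain bash:

```shell
# ${VAR:=default} assigns and returns the default when VAR is unset or empty.
unset COMPOSE_PROJECT_NAME
echo "${COMPOSE_PROJECT_NAME:=otobo}_web_1"   # prints: otobo_web_1

# An existing value is kept unchanged.
COMPOSE_PROJECT_NAME=myotobo
echo "${COMPOSE_PROJECT_NAME:=otobo}_web_1"   # prints: myotobo_web_1
```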
The beauty of this approach is that the Docker image itself does not need to be modified.
Installing additional Debian packages is a bit trickier. One approach is to create a custom Dockerfile and use the OTOBO image as the base image. Another approach is to create a modified image directly from a running container. This can be done with the docker commit command, https://docs.docker.com/engine/reference/commandline/commit/. A nice description of this process can be found at https://phoenixnap.com/kb/how-to-commit-changes-to-docker-image.
For the latter approach, there are two hurdles to overcome. First, the otobo image runs by default as the user otobo with UID 1000. The problem is that the user otobo is not authorized to install system packages. So the first part of the solution is to use the --user root option when starting the image. The second hurdle is that the default entrypoint script /opt/otobo_install/entrypoint.sh exits immediately when called as root. The reasoning behind this design decision is that unintentional execution as root should be avoided. The second part of the solution is therefore to specify a different entrypoint script that does not care who the caller is.
This brings us to the following example commands, where we add otobo fortune cookies. Pull a tagged OTOBO image if we don't have it yet, and check whether the image already provides fortune cookies:
docker run rotheross/otobo:rel-10_0_10 /usr/games/fortune
/opt/otobo_install/entrypoint.sh: line 57: /usr/games/fortune: No such file or directory
Add fortune cookies to a named container running the original OTOBO image. This is done in an interactive session as user root:
docker run -it --user root --entrypoint /bin/bash --name otobo_orig rotheross/otobo:rel-10_0_10
apt update
apt install fortunes
exit
docker ps -a | head
Create an image from the stopped container and give it a name. Note that the default user and entrypoint script must be restored:
docker commit --change 'USER otobo' --change 'ENTRYPOINT ["/opt/otobo_install/entrypoint.sh"]' otobo_orig otobo_with_fortune_cookies
Finally, we can check it:
docker run otobo_with_fortune_cookies /usr/games/fortune
A platitude is simply a truth repeated till people get tired of hearing it. -- Stanley Baldwin
The modified image can be set in your .env file and then used for fun and profit.
Building Local Images
INFO
Building Docker images locally is usually only necessary during development. Other use cases are when more recent base images should be used for an installation or when additional functionalities need to be added to the images.
The Dockerfiles needed to build Docker images locally are part of the git repository https://github.com/RotherOSS/otobo:
- otobo.web.dockerfile
- otobo.nginx.dockerfile
- otobo.elasticsearch.dockerfile

The script for the actual building of the images is bin/docker/build_docker_images.sh.
cd /opt
git clone https://github.com/RotherOSS/otobo.git
Switch to the desired branch, e.g.:
git checkout rel-10_0_11
cd otobo
Change the Dockerfiles if necessary
bin/docker/build_docker_images.sh
docker image ls
The locally built Docker images are tagged as local-<OTOBO_VERSION>, where the version is taken from the RELEASE file. After the local images have been built, one can return to the docker-compose directory. The local images are then selected by setting OTOBO_IMAGE_OTOBO, OTOBO_IMAGE_OTOBO_ELASTICSEARCH, and OTOBO_IMAGE_OTOBO_NGINX in .env.
Automatic Installation
Instead of going through the process via http://yourIPorFQDN/otobo/installer.pl, one can take a shortcut. This is useful for running the test suite on a fresh installation.
WARNING
docker-compose down -v will remove all previous setups and data.
docker-compose down -v
docker-compose up --detach
docker-compose stop daemon
docker-compose exec web bash \
-c "rm -f Kernel/Config/Files/ZZZAAuto.pm ; bin/docker/quick_setup.pl --db-password otobo_root"
docker-compose exec web bash \
-c "bin/docker/run_test_suite.sh"
docker-compose start daemon
List of Useful Commands
Docker
docker system prune -a
: System cleanup (removes all unused images, containers, volumes, and networks).
docker version
: Shows the version.
There are many more useful Docker commands.
Docker Compose
docker-compose config
: Checks and displays the configuration.
docker-compose ps
: Shows the running containers.
There are more useful Docker Compose commands.