OTOBO Performance Optimization & Scaling

A high-performance ticket system is the foundation for excellent support. OTOBO (especially from version 11) offers numerous levers to remain lightning-fast even with high ticket volumes (> 1 million articles).

[!TIP] Quick Win: Integrate OpenTicketAI. AI-supported classification eliminates the need for manually moving tickets from the “Raw” queue. This not only reduces the workload for agents but also prevents performance degradation caused by overloaded “catch-all” queues.


1. The Ticket Index (IndexAccelerator)

The index determines how quickly list views (Dashboard, Queue View) are loaded.

1.1 RuntimeDB (Standard for small instances)

  • Module: Kernel::System::Ticket::IndexAccelerator::RuntimeDB
  • Behavior: Queries are performed live on the ticket table.
  • Limit: Performant up to approx. 50,000 open tickets. Beyond that, the load increases noticeably with every dashboard refresh.

1.2 StaticDB (recommended for larger instances)

  • Module: Kernel::System::Ticket::IndexAccelerator::StaticDB
  • Advantage: Uses a dedicated ticket_index table. Write operations are slightly slower, but read access (list views) is many times faster and stays constant as volume grows.
  • Setup:
    1. Set Ticket::IndexModule to Kernel::System::Ticket::IndexAccelerator::StaticDB in the System Configuration.
    2. Build the initial index:
      ```shell
      /opt/otobo/bin/otobo.Console.pl Maint::Ticket::QueueIndexRebuild
      ```
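Step 1 can also be done from the console instead of the web UI; a sketch assuming the stock Admin::Config::Update command (verify the flags on your OTOBO version):

```shell
# Set the index module from the command line
/opt/otobo/bin/otobo.Console.pl Admin::Config::Update \
    --setting-name Ticket::IndexModule \
    --value Kernel::System::Ticket::IndexAccelerator::StaticDB
```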

2. Full-Text Search

While the internal SQL index is sufficient for small volumes, there is no way around Elasticsearch for large datasets.

2.1 Elasticsearch Tuning (OTOBO Docker context)


If you run OTOBO in Docker (recommended for installation), pay attention to the JVM settings in your docker-compose.override.yml:

```yaml
services:
  elasticsearch:
    environment:
      - ES_JAVA_OPTS=-Xms4g -Xmx4g
```
  • Heap Size: Set the heap to a maximum of 50% of the available RAM (max. 31GB).
  • Disk Watermarks: Elasticsearch switches indices to read-only when disk space runs low (< 5% free). Monitoring is essential here.
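
The watermark thresholds can also be set explicitly via the container environment; a hedged docker-compose.override.yml sketch (the threshold values are examples, not recommendations):

```yaml
services:
  elasticsearch:
    environment:
      - ES_JAVA_OPTS=-Xms4g -Xmx4g
      # Example watermark overrides -- tune to your disk layout
      - cluster.routing.allocation.disk.watermark.low=85%
      - cluster.routing.allocation.disk.watermark.high=90%
      - cluster.routing.allocation.disk.watermark.flood_stage=95%
```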

2.2 Search Index Without Elasticsearch

If you do not use Elasticsearch, limit the scope of the internal search index:

  • WordCountMax: Limit to 500-1000 words (Ticket::SearchIndex::Attribute###WordCountMax).
  • Archived Tickets: Exclude these from the search index to keep the DB size compact.
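
In Kernel/Config.pm this looks roughly as follows; the hash key names mirror the stock Ticket::SearchIndex::Attribute setting, and the values are illustrative:

```perl
# Kernel/Config.pm -- limit the internal (StaticDB) search index
$Self->{'Ticket::SearchIndex::Attribute'} = {
    WordCountMax  => 1000,  # index at most 1000 words per article
    WordLengthMin => 3,
    WordLengthMax => 30,
};
```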

3. Attachment Storage

By default, OTOBO stores attachments in the database (ArticleStorageDB).

  • From around 10,000 tickets, or with large attachments: switch to ArticleStorageFS.
  • Advantage: The database stays small, and backups/dumps run significantly faster.
```shell
/opt/otobo/bin/otobo.Console.pl Admin::Article::StorageSwitch --target ArticleStorageFS
```

4. Archiving

Archive closed tickets that are older than 6-12 months. Archived tickets are no longer indexed by default, which massively speeds up daily work.


5. Caching with Redis

Use Redis as the caching backend instead of the file system. Redis keeps configuration and session data in RAM.

  • SysConfig: Set Cache::Module to the Redis backend (Kernel::System::Cache::Redis).
  • Expert Tip: Use Redis::Fast as the client library for even lower latencies in high-traffic environments.
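
A minimal Kernel/Config.pm sketch, assuming the stock Redis cache backend and its default key names (verify against your OTOBO version):

```perl
# Kernel/Config.pm -- Redis as cache backend (key names assumed)
$Self->{'Cache::Module'}               = 'Kernel::System::Cache::Redis';
$Self->{'Cache::Redis'}->{'Server'}    = '127.0.0.1:6379';
$Self->{'Cache::Redis'}->{'RedisFast'} = 1;  # use the faster Redis::Fast client if installed
```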

If you are not using Docker, move the temp directory to RAM:

```shell
mount -t tmpfs -o size=4G tmpfs /opt/otobo/var/tmp
```

This accelerates the generation of PDF reports and temporary article files.
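
To make the tmpfs mount survive reboots, an /etc/fstab entry along these lines is common (size and mode are examples; adjust ownership/permissions for the otobo user):

```
tmpfs  /opt/otobo/var/tmp  tmpfs  size=4G,mode=1777  0  0
```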


6. Database Tuning (MySQL/MariaDB)

For OTOBO 11, we recommend the following my.cnf parameters (with 16 GB+ RAM):

```ini
[mysqld]
innodb_buffer_pool_size        = 8G   # approx. 50-60% of RAM
innodb_log_file_size           = 1G   # important for performance during mass updates
innodb_flush_log_at_trx_commit = 2    # speed vs. durability: up to 1 s of commits may be lost on a crash
max_connections                = 500
```
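
Whether innodb_buffer_pool_size is large enough can be checked at runtime: Innodb_buffer_pool_reads (reads that had to go to disk) should be a tiny fraction of Innodb_buffer_pool_read_requests (all logical reads):

```shell
# Buffer pool efficiency check -- both are standard MySQL/MariaDB status variables
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
```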

[!IMPORTANT] OTOBO 11 uses more efficient queries but benefits greatly from fast I/O. Use NVMe SSDs exclusively.


7. Docker Deployment

OTOBO 11 is natively optimized for Docker.

  • Container Images: The official images are already tuned for performance (Perl-level optimizations).
  • Log Rotation: From version 11.0.11, OTOBO has integrated log rotation for otobo.log. External scripts are no longer strictly necessary for this log file.
  • Auto-Tuning: The containers often detect available resources automatically; however, limits (mem_limit) should still be set in Docker Compose.
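
Explicit memory limits in docker-compose.override.yml might look like this; the values are examples for a 16 GB host, and the service names should be checked against your compose file:

```yaml
services:
  web:
    mem_limit: 4g
  elasticsearch:
    mem_limit: 8g   # the ES heap (ES_JAVA_OPTS) should stay at ~50% of this
```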

8. Monitoring

No optimization without data. We recommend the Prometheus stack:

  1. Metrics: Capture OTOBO metrics via API.
  2. Grafana: Dashboard for:
    • Request latency (SLA tracking)
    • Queue fill level (detect bottlenecks)
    • DB lock wait times
  3. Alerting: Notification if the cache hit rate falls below 90% or Elasticsearch shards turn “yellow”.
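
The cache hit-rate alert from step 3 boils down to simple arithmetic; a self-contained sketch with sample numbers, as they might come from keyspace_hits/keyspace_misses in `redis-cli INFO stats`:

```shell
# Alert when the cache hit rate falls below 90%
hits=94000     # sample value, e.g. keyspace_hits
misses=11000   # sample value, e.g. keyspace_misses
rate=$(( 100 * hits / (hits + misses) ))
if [ "$rate" -lt 90 ]; then
  echo "ALERT: cache hit rate ${rate}%"
else
  echo "OK: cache hit rate ${rate}%"
fi
```

With the sample values above, the integer hit rate is 89%, so the alert branch fires.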

Conclusion

Performance optimization in OTOBO is an interplay of hardware, process conversion (AI), and technical fine-tuning.

Start by switching to Elasticsearch, implement archiving, and relieve your agents with OpenTicketAI. This way, your OTOBO system remains responsive and stable even as your company grows.