Performance Wiki

This page collects information about performance tuning for bigger installations. Please publish new performance suggestions in the "Performance DB" and best practices in the "Best Practices" DB.

Uni Freiburg

Date: 05.12.2012
Documented by: Marko Glaubitz
Revised: Marko Glaubitz, 08.09.2015

Background Data

ILIAS Software Version: 5.0.4
Date of Birth: 15.10.2012
Active accounts as of 05/2014: ~30,000
# of installations: 2
 
Started pilot operation in 10/2012 from SVN trunk with 4.3 beta.
Went live and official for summer term 2013.

Servers

Virtualisation Layer: VMware ESX 5.5
  • 1 reverse proxy
    • OS: Ubuntu
    • 1 vCPU, 2 GB RAM
    • nginx 1.4.4 with "least_conn" load balancing
  • 1 database server
    • OS: OpenSuSE
    • 8 vCPUs, 64 GB RAM
    • MySQL 5.5.28
    • second VM as stand-by
  • 8 web servers
    • OS: OpenSuSE
    • 2 vCPUs, 8 GB RAM each
    • Apache 2.4.47
  • 2 utility servers
    • OS: OpenSuSE
    • 8 vCPUs, 4 GB RAM each
    • 1x Lucene/Chat/Etherpad Lite; 1x NFS export host for data directories (tenant data, www data)
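A minimal sketch of how "least_conn" balancing across the web servers can look in nginx (hostnames and listen port are placeholders, not our actual configuration):

```nginx
upstream ilias_backend {
    least_conn;                  # send each request to the backend with the fewest active connections
    server web01.example.org;
    server web02.example.org;
    # ... web03 through web08 accordingly
}

server {
    listen 80;
    location / {
        proxy_pass http://ilias_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Without session stickiness, every backend must be able to serve every session, which is why the session handling described below matters.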

Software

Our ILIAS main installation resides on one of the utility servers and is rsync'd to the web slaves after every update from Git. The data directory within the ILIAS installation is symlinked to an NFS share provided by the other utility server (the backend storage is a high-performance FC SAN). User authentication is handled by Shibboleth (IdP of the university library service of the University of Freiburg, first installation only) and LDAP (second installation only).

Reverse Proxy / Shibboleth

As we have no session stickiness with nginx, there needs to be some kind of common session store for Shibboleth authentication to work. We are running the Shibboleth daemon on one of the utility servers for that purpose ("Shared Process"). We also tried MySQL session handling ("Shared Database"), but got reports about strange delays when logging in.

MySQL Config

This is an excerpt from our current my.cnf. Two remarks on the config below:

  • We are running exclusively InnoDB tables, and NO MyISAM tables.
  • Our tmpdir (for temporary tables created on disk) has been placed on a ramdisk.

key_buffer_size = 128M

max_allowed_packet = 128M

table_definition_cache = 9000
table_open_cache = 9000

read_rnd_buffer_size = 2M
join_buffer_size = 2M

tmp_table_size = 128M
max_heap_table_size = 128M

query_cache_size = 64M
query_cache_limit = 4M
query_cache_type = 1
query_cache_min_res_unit = 4096
low_priority_updates = 1

max_connections = 1000
thread_cache_size = 128

innodb_file_per_table = 1
innodb_open_files = 40000
innodb_buffer_pool_instances = 6
innodb_buffer_pool_size = 24G
innodb_log_file_size = 2047M
innodb_log_buffer_size = 512M
innodb_flush_log_at_trx_commit = 0
innodb_concurrency_tickets = 5000
innodb_write_io_threads = 8
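The ramdisk tmpdir mentioned above can be set up roughly like this (mount point and size are example values, not our exact configuration):

```
# /etc/fstab: tmpfs mount for MySQL's on-disk temporary tables
tmpfs  /var/lib/mysql-tmp  tmpfs  rw,size=2G,uid=mysql,gid=mysql,mode=0750  0 0

# my.cnf: point MySQL at the tmpfs
tmpdir = /var/lib/mysql-tmp
```

Temporary tables that outgrow tmp_table_size/max_heap_table_size are converted to on-disk temp tables in tmpdir; with a tmpfs mount they still effectively live in RAM.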

Last edited: 10. Dec 2015, 11:13, Glaubitz, Marko [mglaubitz]


Comments

  • Glaubitz, Marko [mglaubitz]

    Thanks for pointing this out - we have since topped up the memory of the database server to 64 GB :)

    Created on 8. Sep 2015
  • Deleted Account

    Dear Marko

    I have a question regarding your InnoDB buffer pool size.
    In the configuration you've posted, the buffer pool has a size of 32 GB.
    Why did you choose a value twice as high as the physical memory of your system?

    Kind regards
    Christian

    Created on 3. Sep 2015