I see these errors on the reverse proxy:
AH01102: error reading status line from remote server 10.0.7.2:80
AH00898: Error reading from remote server returned by /projects/dc-ducklings-volunteer-activities/issues.atom
Inside the container I see these:
[ 2021-11-16 05:13:05.9810 2586983/7f84aaffd700 age/Cor/Con/InternalUtils.cpp:112 ]: [Client 12-6864] Sending 502 response: application did not send a complete response
App 2831951 stdout:
[ 2021-11-16 05:13:06.9367 2586983/7f8500a1c700 age/Cor/Con/InternalUtils.cpp:112 ]: [Client 2-6865] Sending 502 response: application did not send a complete response
[ 2021-11-16 05:13:07.2635 2586983/7f84eb7fe700 age/Cor/CoreMain.cpp:819 ]: Checking whether to disconnect long-running connections for process 2831931, application redmine_md
And these too:
[ 2021-11-16 01:06:24.8866 2586983/7f84e82b4700 age/Cor/Spa/SmartSpawner.h:740 ]: The application preloader seems to have crashed, restarting it and trying again...
I just found out the source of the problem:
[Tue Nov 16 06:17:13 2021] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=lxc.payload.redmine,mems_allowed=0-1,oom_memcg=/lxc.payload.redmine,task_memcg=/lxc.payload.redmine/system.slice/apache2.service,task=ruby,pid=2171217,uid=1065569
[Tue Nov 16 06:17:13 2021] Memory cgroup out of memory: Killed process 2171217 (ruby) total-vm:414356kB, anon-rss:129448kB, file-rss:8932kB, shmem-rss:0kB, UID:1065569 pgtables:444kB oom_score_adj:0
[Tue Nov 16 06:17:13 2021] oom_reaper: reaped process 2171217 (ruby), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
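To put the oom-kill report in perspective, here is a quick sketch of the arithmetic. The 129448 kB figure is the `anon-rss` value from the kernel log above; the 512 MB figure is the container's old limit:

```shell
# Convert the killed ruby process's resident memory (anon-rss, reported in kB)
# to MiB, to compare against the container's 512 MB cap.
anon_rss_kb=129448                      # anon-rss from the oom-kill line above
echo "$(( anon_rss_kb / 1024 )) MiB"    # prints "126 MiB"
```

One Ruby worker alone holds about 126 MiB resident, and Passenger normally keeps several such workers alive at once (plus Apache itself), so it is easy to see how a 512 MB cgroup limit gets exhausted.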
I increased the memory limit for this container (the default remains 512 MB).
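For reference, raising an LXD container's memory limit looks roughly like this. The `1GiB` value is only an illustration; the post does not say what the new limit actually is:

```shell
# Raise the memory cap for the "redmine" container (LXD).
# 1GiB is an example value, not the figure actually used.
lxc config set redmine limits.memory 1GiB

# Verify the setting took effect.
lxc config get redmine limits.memory
```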
Current usage (after increase):
# lxc info redmine
…
Memory usage:
Memory (current): 620.11MiB
…
Before the increase it was limited to 512 MB, so the roughly 620 MiB of actual usage explains the OOM kills. Now we have a better picture of how much (too much!) memory this service really needs ;-P.