
V1.13 to V1.14

Steps to migrate REGARDS from version 1.13 to 1.14

  • Download the latest playbook version and move your inventory inside the new playbook (a sketch of this step is shown below)
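A minimal sketch of this step, assuming the new playbook is delivered as an archive and that your inventory lives in a directory named inventories/my-regards (both paths below are hypothetical, adapt them to your installation):

# Unpack the new playbook next to the current one (hypothetical paths)
tar xzf regards-playbook-1.14.3.tar.gz -C /opt/regards/
# Copy your existing inventory into the new playbook directory
cp -r /opt/regards/playbook-1.13/inventories/my-regards /opt/regards/playbook-1.14/inventories/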
  • Edit your inventory file group_vars/all/main.yml:
# Replace 
version: 1.13.1
# With
version: 1.14.3
info

When this guide was written, the latest version was 1.14.3. Check whether a more recent version is available here

  • Add the property group_docker_use_log_concentrator inside your inventory file group_vars/regards_nodes/main.yml:
[...]
group_docker_network_name: 'regards_{{ group_stack_name }}'
group_docker_network_ip_network: '10.11.7.'

group_docker_registry: ghcr.io/regardsoss

# Add the new property below
group_docker_use_log_concentrator: True

[...]
info

This property means that you have to reinstall the swarm (you will run ansible-playbook -i [...] setup-vm.yml [...] at the end of this guide); from now on, the log configuration is managed by Docker instead of being declared on each service at deployment.

  • Ensure the property enable_resource_limits, inside your inventory file group_vars/regards_nodes/main.yml, exists and is set to true
group_docker_cots_configuration:
  enable_healthcheck: true
  enable_resource_limits: true

group_config_mservices:
  enable_healthcheck: true
  enable_resource_limits: true
Resource limits are now mandatory!

If you don't provide any resource limits for the REGARDS containers, they won't have any memory limit. Before 1.14, each service was limited to a fixed amount of memory; starting with 1.14, the limits come from the containers themselves.
See the Breaking change section below for more details about this change

  • Remove the deprecated logging services elasticsearch_logs, kibana_logs, and fluent from your stack:
# Before
group_docker_cots:
  [..]
  elasticsearch_logs:
    [..]
  kibana_logs:
    [..]
  fluent:
    [..]

# After
# Everything related to these components is removed
group_docker_cots:
  [..]
  • Remove group_docker_cots_configuration.elasticsearch_logs, as you no longer use the elasticsearch_logs service (see the sketch below)
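A minimal sketch of what this removal could look like, assuming elasticsearch_logs appears as a sub-key of group_docker_cots_configuration in your inventory (its exact content depends on your setup):

# Before
group_docker_cots_configuration:
  [...]
  elasticsearch_logs:
    [...]

# After
group_docker_cots_configuration:
  [...]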

  • Optional: Follow the logging stack guide to add Grafana, Loki, Prometheus and many other services to get started with the new powerful logging and monitoring stack

  • Optional: Add the following properties inside your inventory file group_vars/docker_nodes/main.yml, at the end of the file:

yum_proxy: "{{ global_proxy.url | default(None) }}"
python_version: 3
[...] # add to the end of the file:

# Logging Driver
log_driver_type: fluentd
log_driver_opts:
  fluentd-address: 127.0.0.1:24224
  fluentd-async: "true"
  fluentd-retry-wait: 15s
  fluentd-sub-second-precision: "true"
  cache-max-size: 10m
  cache-max-file: 10
info

Do this step only if you want to use the logging stack. The default value of log_driver_type is json-file
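If you are not using the logging stack and simply want the defaults spelled out, a sketch could look like the following (this block is an assumption based on the json-file options shown in the daemon.json example later in this guide; leaving these properties out entirely has the same effect):

# Hypothetical: explicitly keep the default json-file driver
log_driver_type: json-file
log_driver_opts:
  max-size: 10m
  max-file: 10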

  • Shut down REGARDS using the playbook
ansible-playbook -i [...] regards-shutdown.yml [...]
  • Disconnect the swarm nodes
ansible-playbook -i [...] delete-swarm.yml [...]
  • Reinstall the swarm nodes
ansible-playbook -i [...] setup-vm.yml [...]
info

You must do the setup-vm.yml step whether you use the logging stack or not

  • Update REGARDS using the playbook
ansible-playbook -i [...] regards.yml [...]

Breaking change

REGARDS containers used to boot with large RAM and CPU reservations, using the xmx and xms properties to fix the amount of RAM.

Starting with 1.14, the amount of RAM is a ratio of the container limit (using -XX:MaxRAMPercentage=75).

This allows you to adjust the amount of memory allowed for each service from your inventory.
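As a worked example: a service whose container memory limit is 1280mb (the new default for dam, see the table below) gets a JVM heap capped at 75% of that limit, i.e. roughly 960mb, with the remaining ~320mb left for off-heap memory and the rest of the container.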

Service            RAM version < 1.14    RAM version >= 1.14
dam                1920mb                1280mb
notifier           1920mb                1280mb
fem                1920mb                1024mb
catalog            1536mb                1024mb
ingest             3840mb                1536mb
storage            2560mb                1024mb
order              1536mb                1280mb
worker_manager     2560mb                1024mb

If you want to boot your services without reducing their memory, you need to provide memory limits equivalent to the previous values, like this:


# Stack
group_docker_mservices:
  dam:
    memoryLimit: 1920mb
  notifier:
    memoryLimit: 1920mb
  fem:
    memoryLimit: 1920mb
  catalog:
    memoryLimit: 1536mb
  ingest:
    memoryLimit: 3840mb
  storage:
    memoryLimit: 2560mb
  order:
    memoryLimit: 1536mb
  worker_manager:
    memoryLimit: 2560mb
How to tune memory limits?

The new logging stack shows you the amount of memory a microservice uses live, helping you set the right memory limit for your environment.
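As a hypothetical illustration, if the monitoring dashboards show that catalog peaks around 800mb under your load, you could give it a limit slightly above that in your inventory (the 800mb figure and the resulting value below are purely illustrative):

group_docker_mservices:
  catalog:
    memoryLimit: 1024mb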

Deprecation

  • Disabling healthchecks and resource limits will be removed in 1.15
group_docker_cots_configuration:
  enable_healthcheck: false
  enable_resource_limits: false

group_config_mservices:
  enable_healthcheck: false
  enable_resource_limits: false
  • The previous logging stack will be removed in 1.15
group_docker_cots:
  elasticsearch:
    tag: "{{ group_docker_tags.cots }}"
  elasticsearch_logs:
    tag: "{{ group_docker_tags.cots }}"
  kibana_logs:
    tag: "{{ group_docker_tags.cots }}"
    http: 5602
  fluent:
    tag: "{{ group_docker_tags.cots }}"
  • Starting with 1.15, a proper Docker configuration will be required, with the logging configuration specified directly on the Docker daemon side.
    For now, you can still omit group_docker_use_log_concentrator or set it to false to keep the configuration defined on each service.
group_docker_use_log_concentrator: false
info

Setting group_docker_use_log_concentrator: True means you have defined a log driver inside /etc/docker/daemon.json:

{
  [...]
  # logging stack activated
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-async": "true",
    "cache-max-size": "10m",
    "cache-max-file": "10",
    "fluentd-retry-wait": "15s",
    "fluentd-address": "127.0.0.1:24224"
  },

  [OR]

  # no logging stack
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "10"
  },
  [...]
}

On the other hand, inside Docker Swarm stack files, there is no longer any need to override the logging property to avoid massive disk usage, as this is handled directly by the Docker daemon.

# A stack file, i.e. /opt/regards/regards/stack/regards-stack.yml
version: '3.9'
services:
  rs-front:
    [...]
    logging:
      options:
        max-size: "10m"
        max-file: "10"
  • The property active_mail_on_admin_instance will be removed in 1.15
group_config_mservices:
  active_mail_on_admin_instance: true
  • The property securised of the frontend service will be removed in 1.15
group_docker_mservices:
  [...]
  front:
    [...]
    securised: true