Acrolinx streams the logs for all of your pods to your local machine with journald. journald is a system service that collects and stores log data. It creates and maintains structured, indexed journals based on the data that it receives. This makes it easier for you to gather necessary log information and identify important log messages.
This feature is optional and only available for the Standard Stack. You can configure it via the Helm chart.
The journald logging system works without any custom configuration. If you want to adjust log retention, you can use the following parameters.
Tip
The prefix System applies when you store the files on a persistent file system, more specifically /var/log/journal. The prefix Runtime applies when you store the files on a volatile in-memory file system, more specifically /run/log/journal.
Parameter | Purpose
---|---
MaxUse | Set the maximum disk space that the journal can use.
KeepFree | Set how much disk space systemd-journald leaves free for other uses.
MaxFileSize | Set the maximum size for individual journal files.
MaxFiles | Set the maximum number of individual journal files to keep. If there are more than this, journald deletes archived files until it reaches this limit. journald won't delete active files.
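These retention options correspond to the standard systemd-journald settings described in journald.conf(5). Depending on your setup, you might set them through the Helm chart or directly in a drop-in file on the host. The following is a minimal sketch of the drop-in form with the System prefix; the file name and the limits are example values, not Acrolinx defaults:
# /etc/systemd/journald.conf.d/acrolinx-retention.conf (example file name)
[Journal]
# Maximum disk space the journal can use
SystemMaxUse=4G
# Disk space to leave free for other uses
SystemKeepFree=1G
# Maximum size of an individual journal file
SystemMaxFileSize=128M
# Maximum number of journal files to keep
SystemMaxFiles=100
If you edit the file on the host, restart the logging service afterward with sudo systemctl restart systemd-journald.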
In the Helm parameters, the default setting for operational.logToJournald is true. If you already have a preferred logging solution, set operational.logToJournald to false.
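If you want to switch it off, here's a minimal sketch assuming a standard Helm workflow; the release and chart names are placeholders for your own:
# Keep your existing values and only override the logging flag
helm upgrade <release> <chart> --reuse-values --set operational.logToJournald=false
You can achieve the same thing by setting operational.logToJournald to false in your values file.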
You can use the journalctl command to view the logs that journald collects. You'll need to run this command with your privileged user.
Before you start, check that your privileged user is a member of the systemd-journal group.
If not, add the user to the group with the following:
sudo usermod -a -G systemd-journal <privileged user>
After you run the command, sign out and back in for the new group membership to take effect.
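After signing back in, you can confirm the membership with a standard command; <privileged user> is a placeholder for your user name:
id -nG <privileged user> | grep -w systemd-journal
If the output includes systemd-journal, the user can read the journal.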
The logs for all pods are copied to journald with the identifier acrolinx.
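For example, to check that the logs are arriving, you can follow them live with standard journalctl options:
journalctl --identifier=acrolinx -n 100 -f
This shows the last 100 entries and then keeps printing new ones as they arrive.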
The support package script runs the following command to retrieve and pack the logs. If the script doesn't do this on its own, you can run the command manually as a privileged user.
journalctl --output-fields=ACROLINX_CONTAINER,ACROLINX_POD,ACROLINX_FILENAME,MESSAGE --identifier=acrolinx -o json | gzip -9 > journald.gz
After you execute the command, you can send the zipped file to the Acrolinx Support team for assistance.
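If you'd like to spot-check the contents before sending the file, standard tools are enough. For example, to look at the first few JSON log entries:
zcat journald.gz | head -n 3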
In some cases, you'll want to access the logs yourself. You might do this if you get an "out of memory" message, for example. Before you get started, it's helpful to install the jq tool. It reads the JSON output of the commands you'll use and extracts the fields that are useful.
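If jq isn't installed yet, it's available from most package managers. For example:
# Debian or Ubuntu
sudo apt-get install jq
# RHEL or CentOS (might require the EPEL repository)
sudo yum install jq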
Tip
New to Unix pipelines? Learn how to use them with journalctl.
Keep reading for more information on when you might want to look into the logs.
If your core server won't start, have a look at the core server logs. For example:
journalctl --identifier=acrolinx -o json | grep '"ACROLINX_CONTAINER":"core-server"' | jq -r '[.ACROLINX_POD,.MESSAGE] | @tsv'
You can also adjust this command to look at the logs for your language or analytics servers. Just replace "core-server" with your preferred server.
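If you aren't sure which container names exist in your deployment, you can list the distinct values of the ACROLINX_CONTAINER field first:
journalctl --identifier=acrolinx -o json | jq -r '.ACROLINX_CONTAINER' | sort -u
Any name in that list can be substituted into the grep pattern above.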
If one of your servers runs out of memory, you'll see an out of memory message, but you might not be sure which server it came from. To identify the affected server, start by searching for the exceptions that occurred across all of your servers in the last hour. For example:
journalctl --identifier=acrolinx -o json --since "1 hour ago" | grep -i exception | jq -r '[.ACROLINX_POD,.ACROLINX_CONTAINER,.MESSAGE] | @tsv'
Then, you might drill down further to find out where the exceptions occurred.
journalctl --identifier=acrolinx -o json --since "1 hour ago" | grep -i exception | jq -r '[.ACROLINX_POD,.ACROLINX_CONTAINER] | @tsv' | sort | uniq
Once you've identified where the exception occurred, you can look into a single container. For example:
journalctl --identifier=acrolinx -o json --since "1 hour ago" | grep -i acrolinx_acrolinx-core-server-0_d59d7736-5790-4dbe-a12e-1a9e14a3db2a | grep '"ACROLINX_CONTAINER":"[server name]"' | jq -r '.MESSAGE'