At about Aug 26 00:50:01, a bunch of pages fired for db2088:
PROBLEM - Host db2088 is DOWN: PING CRITICAL - Packet loss = 100%
7:55 PM RECOVERY - Host db2088 is UP: PING OK - Packet loss = 0%, RTA = 36.18 ms
7:57 PM PROBLEM - MariaDB Slave IO: s2 on db2088 is CRITICAL: CRITICAL slave_io_state could not connect
7:57 PM PROBLEM - MariaDB Slave SQL: s1 on db2088 is CRITICAL: CRITICAL slave_sql_state could not connect
7:57 PM PROBLEM - MariaDB Slave SQL: s2 on db2088 is CRITICAL: CRITICAL slave_sql_state could not connect
7:58 PM PROBLEM - MariaDB read only s2 on db2088 is CRITICAL: Could not connect to localhost:3312
7:58 PM PROBLEM - MariaDB read only s1 on db2088 is CRITICAL: Could not connect to localhost:3311
7:58 PM PROBLEM - mysqld processes on db2088 is CRITICAL: PROCS CRITICAL: 0 processes with command name mysqld
7:58 PM PROBLEM - MariaDB Slave IO: s1 on db2088 is CRITICAL: CRITICAL slave_io_state could not connect
8:05 PM PROBLEM - MariaDB Slave Lag: s1 on db2088 is CRITICAL: CRITICAL slave_sql_lag could not connect
8:05 PM PROBLEM - MariaDB Slave Lag: s2 on db2088 is CRITICAL: CRITICAL slave_sql_lag could not connect
It seems to have come back up, but it's in a very strange state. In particular, the syslog contains no messages at all post-crash. If this is standard systemd behavior, it's new to me.
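A quick way to tell whether logging as a whole is wedged or just syslog (a minimal sketch; assumes a stock systemd-journald plus rsyslog setup, which may not match this host):

    journalctl --list-boots             # the current boot should appear at index 0
    journalctl -b 0 -n 20               # recent journal entries from this boot, if any
    systemctl status rsyslog            # the syslog daemon itself may have died in the crash
    logger "db2088 post-crash test"     # should show up in /var/log/syslog within seconds
    df -h /var/log                      # a full filesystem would also silence syslog
    mount | grep ' / '                  # look for a crash-induced read-only remount

If journald has entries but /var/log/syslog stays empty, the broken piece is the forwarding path (or rsyslog itself) rather than systemd.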
In theory this box needs its mariadb services started, but we should figure out how broken the box is first.
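Once the host itself is vetted, a cautious restart order for the two instances might look like this (a sketch; the mariadb@s1/mariadb@s2 template-unit names are an assumption inferred from the s1/s2 instances in the alerts, while the ports 3311/3312 come straight from the alerts above):

    systemctl start mariadb@s1                               # assumed unit name for the s1 instance
    mysql -h 127.0.0.1 -P 3311 -e 'SELECT @@read_only'       # should stay 1 (read-only) until repooled
    mysql -h 127.0.0.1 -P 3311 -e 'SHOW SLAVE STATUS\G'      # confirm IO/SQL threads and lag after crash recovery
    # repeat for s2 on port 3312 once s1 looks healthy

Letting InnoDB crash recovery finish and verifying replication on each instance before repooling keeps a half-broken box from serving stale or inconsistent reads.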