OOM Protection Using Automatic Query Killing
Pinot's built-in heap usage monitoring and OOM protection
Pinot implements a mechanism that monitors total JVM heap usage and approximates per-query memory allocation on servers.
Support for Single-Stage Queries: https://github.com/apache/pinot/pull/9727
Support for Multi-Stage Queries (available in 1.3.0) : https://github.com/apache/pinot/pull/13598
The feature is OFF by default. When enabled, this mechanism can help protect servers and brokers from OOMs caused by expensive queries (e.g. DISTINCTCOUNT plus GROUP BY on high-cardinality columns). Upon an immediate risk of heap depletion, the mechanism kicks in and kills queries, starting with the most expensive one(s).
The feature has two components on each broker and server:
A statistics framework that tracks resource usage for each query thread.
A query killing mechanism.
Usage
Enable Thread Statistics Collection
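Both the per-thread measurement flags and the accounting factory must be set. A minimal sketch for a server config, using values from the Configuration table below:

```properties
# Let the JVM report per-thread CPU time and allocated bytes
# (allocated-bytes reporting works on HotSpot JVMs only).
pinot.server.instance.enableThreadCpuTimeMeasurement=true
pinot.server.instance.enableThreadAllocatedBytesMeasurement=true

# Plug in the accountant that aggregates per-thread usage per query.
pinot.query.scheduler.accounting.factory.name=org.apache.pinot.core.accounting.PerQueryCPUMemAccountantFactory

# Sample memory and CPU usage of query threads.
pinot.query.scheduler.accounting.enable.thread.memory.sampling=true
pinot.query.scheduler.accounting.enable.thread.cpu.sampling=true
```

Brokers take the same `pinot.query.scheduler.accounting.*` configs; the measurement flags use the `pinot.broker.instance.` prefix.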
Debug APIs
Once memory sampling has been enabled, the following debug APIs can be used to check memory usage on a broker or server. Note that no API aggregates usage across all servers and brokers for a query.
/debug/query/resourceUsage
Returns resource usage aggregated by queryId
/debug/threads/resourceUsage
Returns resource usage of a thread and the queryId of the task.
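A minimal usage sketch, assuming a server whose admin port is 8097 (adjust host and port for your deployment):

```bash
# Per-query usage, aggregated on this instance only.
curl http://localhost:8097/debug/query/resourceUsage

# Per-thread usage, including the queryId each thread is working on.
curl http://localhost:8097/debug/threads/resourceUsage
```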
Enable Query Killing Mechanism
The statistics framework also starts a watcher task, which makes the decisions about killing queries.
By default, the watcher task does not take any action.
The `queries_killed` meter tracks the number of queries killed.
The killing mechanism is enabled with the following config:
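```properties
pinot.query.scheduler.accounting.oom.enable.killing.query=true
```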
The watcher task can be in one of three modes, depending on the level of heap usage:
Normal
Critical
Panic
The thresholds for these levels are defined by the following configs:
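```properties
# Defaults from the Configuration table below.
pinot.query.scheduler.accounting.oom.alarming.heap.usage.ratio=0.75
pinot.query.scheduler.accounting.oom.critical.heap.usage.ratio=0.96
pinot.query.scheduler.accounting.oom.panic.heap.usage.ratio=0.99
```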
The watcher task runs periodically. The frequency of the watcher task can be configured with:
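```properties
# Sleep time between two runs of the watcher task (default 30ms).
pinot.query.scheduler.accounting.sleep.ms=30
```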
However, under stress the task can run faster so that it reacts to rising heap usage sooner. For this, the watcher task is configured with:
a threshold at which to shift to the higher frequency
the higher frequency, expressed as a ratio of the default frequency, as shown below
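```properties
# Above this heap usage ratio the watcher shifts to the higher frequency.
pinot.query.scheduler.accounting.oom.alarming.heap.usage.ratio=0.75
# Under stress, the sleep time becomes sleep.ms / denominator (30ms / 3 = 10ms).
pinot.query.scheduler.accounting.sleep.time.denominator=3
```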
Configuration to control which queries are chosen as victims
In panic mode, all queries are killed.
In critical mode, queries below a certain threshold (expressed as a ratio of total heap memory) are not killed.
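The threshold is set with:

```properties
# Queries with a footprint below 2.5% of Xmx are never chosen as victims.
pinot.query.scheduler.accounting.min.memory.footprint.to.kill.ratio=0.025
```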
Once the watcher task has killed a few queries in a row, it triggers a GC to reclaim memory. The relevant configs are:
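```properties
# Trigger an explicit GC after this many consecutive kills.
pinot.query.scheduler.accounting.gc.backoff.count=5
# If heap usage stays above (critical ratio - delta) after the GC, keep killing.
pinot.query.scheduler.accounting.oom.critical.heap.usage.ratio.delta.after.gc=0.15
```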
Configuration
Here are the configurations that commonly apply to servers and brokers:
Config | Default | Description |
---|---|---|
`pinot.broker.instance.enableThreadAllocatedBytesMeasurement` <br/> `pinot.server.instance.enableThreadAllocatedBytesMeasurement` | false | Set to `true` to enable killing queries by bytes allocated. |
`pinot.broker.instance.enableThreadCpuTimeMeasurement` <br/> `pinot.server.instance.enableThreadCpuTimeMeasurement` | false | Set to `true` to enable killing queries by CPU time. |
`pinot.query.scheduler.accounting.factory.name` | | Use `org.apache.pinot.core.accounting.PerQueryCPUMemAccountantFactory` for this feature. |
`pinot.query.scheduler.accounting.enable.thread.memory.sampling` | false | Track each query's per-thread memory usage (HotSpot JVM only). If enabled, killing decisions are based on memory allocated. |
`pinot.query.scheduler.accounting.enable.thread.cpu.sampling` | false | Track each query's per-thread CPU time. If memory sampling is disabled or unavailable, killing decisions are based on CPU time. If both are disabled, the framework cannot pick the most expensive query. |
`pinot.query.scheduler.accounting.oom.enable.killing.query` | false | Whether the framework actually commits to killing queries. If disabled, only an error message is logged. |
`pinot.query.scheduler.accounting.publishing.jvm.heap.usage` | false | Whether the framework periodically publishes heap usage to Pinot metrics. |
`pinot.query.scheduler.accounting.oom.panic.heap.usage.ratio` | 0.99 | When heap usage exceeds this ratio, the framework kills all queries. Set this above 1 to prevent panic-mode killing from ever happening. |
`pinot.query.scheduler.accounting.oom.critical.heap.usage.ratio` | 0.96 | When heap usage exceeds this ratio, the framework kills the most expensive query. |
`pinot.query.scheduler.accounting.oom.alarming.heap.usage.ratio` | 0.75 | When heap usage exceeds this ratio, the watcher task runs more frequently to gather stats and prepare to kill queries in time. |
`pinot.query.scheduler.accounting.sleep.ms` | 30 | How long the watcher task sleeps between runs, in milliseconds. |
`pinot.query.scheduler.accounting.sleep.time.denominator` | 3 (corresponding to a 10ms sleep time at the alarming heap usage level) | When heap usage exceeds the alarming level, the sleep time becomes `sleep.ms / denominator` (30ms / 3 = 10ms by default). |
`pinot.query.scheduler.accounting.min.memory.footprint.to.kill.ratio` | 0.025 | A query that allocates less than this ratio of the total heap size (Xmx) is not killed. This prevents aggressive killing when heap memory is not mainly allocated by queries. |
`pinot.query.scheduler.accounting.gc.backoff.count` | 5 | After the framework kills this many expensive queries in a row, it explicitly triggers a GC to reclaim memory. Consider using `-XX:+ExplicitGCInvokesConcurrent` to avoid stop-the-world pauses with some GC algorithms. |
`pinot.query.scheduler.accounting.oom.critical.heap.usage.ratio.delta.after.gc` | 0.15 | If, after a GC, heap usage is still above the critical ratio minus this delta, kill the most expensive query. Use this to prevent heap size oscillation and repeatedly triggering GC. |