We present a series of adaptations for detecting and tracking multiple moving objects of interest under low-probability-distribution scenarios. We investigate the benefits of linearizing the loss trajectory1 when training neural networks, principally to address the lack of auto-differentiation in MOTA2 evaluations, and examine which characteristics can support parallelism3 and differentiable computation, and to what extent these observations contribute to our objectives. Using benchmarks from DeepMOT4 and CenterNet,5 we highlight the use of sparsemax activations by mounting a finite number of independent, asynchronous detectors to improve performance and gain from compounded accuracy.∗ Empirical results show promising gains when applying parallelization on low-power, low-latency embedded systems in cases where automatic differentiation is available.
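For concreteness, a minimal sketch of the sparsemax transformation mentioned above (Martins & Astudillo, 2016) is given below; this is a generic reference implementation, not the paper's own code. Unlike softmax, sparsemax projects a score vector onto the probability simplex and can assign exactly zero probability to low-scoring entries, which is what allows weak detector hypotheses to be dropped outright.

```python
def sparsemax(z):
    """Sparsemax: Euclidean projection of the score vector z onto the
    probability simplex. Returns a probability vector that may contain
    exact zeros, unlike softmax.
    """
    z_sorted = sorted(z, reverse=True)
    cumsum, k_max, tau = 0.0, 0, 0.0
    for k, zk in enumerate(z_sorted, start=1):
        cumsum += zk
        # z_(k) remains in the support while 1 + k * z_(k) > cumulative sum
        if 1 + k * zk > cumsum:
            k_max, tau = k, (cumsum - 1.0) / k
    # threshold all scores by tau; entries below tau get zero probability
    return [max(zi - tau, 0.0) for zi in z]

# Example: a dominant score suppresses the rest entirely.
# sparsemax([2.0, 1.0, 0.1]) -> [1.0, 0.0, 0.0]
```

Because the projection has a closed-form threshold, sparsemax remains differentiable almost everywhere, which is compatible with the auto-differentiation setting discussed in the abstract.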