Deconvolutional Density Network: Modeling Free-Form Conditional Distributions
DOI:
https://doi.org/10.1609/aaai.v36i6.20567
Keywords:
Machine Learning (ML)
Abstract
Conditional density estimation (CDE) is the task of estimating the probability distribution of an event conditioned on some inputs. A neural network (NN) can also be used to compute the output distribution over a continuous domain, which can be viewed as an extension of the regression task. Nevertheless, it is difficult to explicitly approximate a distribution without knowing its general form a priori. In order to fit an arbitrary conditional distribution, discretizing the continuous domain into bins is an effective strategy, as long as the bins are sufficiently narrow and the data are sufficiently large. However, collecting enough data is often difficult and falls far short of that ideal in many circumstances, especially in multivariate CDE, due to the curse of dimensionality. In this paper, we demonstrate the benefits of modeling free-form conditional distributions using a deconvolution-based neural network framework that copes with the data-deficiency problem introduced by discretization. The model is flexible, yet it also benefits from the hierarchical smoothness imposed by the deconvolution layers. We compare our method to a number of other density-estimation approaches and show that our Deconvolutional Density Network (DDN) outperforms the competing methods on many univariate and multivariate tasks. The code of DDN is available at https://github.com/NBICLAB/DDN
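The sketch below illustrates the general idea described in the abstract: a conditioning input is encoded into a coarse latent signal, transposed-convolution (deconvolution) layers upsample it to a fine grid of bins, and a softmax yields a normalized histogram over the target domain. This is a minimal illustrative example assuming PyTorch; the class name, layer sizes, bin count, and training details are hypothetical choices and not the authors' exact DDN configuration.

```python
# Illustrative deconvolution-based conditional density head (not the official DDN code).
import torch
import torch.nn as nn

class DeconvDensityHead(nn.Module):
    def __init__(self, x_dim: int, n_bins: int = 64, channels: int = 16):
        super().__init__()
        assert n_bins % 8 == 0, "n_bins must be divisible by the total upsampling factor (8 here)"
        self.base_len = n_bins // 8          # coarse grid length before upsampling
        self.channels = channels
        # Encode the conditioning input x into a coarse latent "signal".
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, 128), nn.ReLU(),
            nn.Linear(128, channels * self.base_len), nn.ReLU(),
        )
        # Transposed-conv layers upsample the coarse signal to n_bins values;
        # the shared kernels impose hierarchical smoothness across neighboring bins.
        self.deconv = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(channels, channels, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(channels, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Return per-bin probabilities p(y in bin_k | x), shape (batch, n_bins)."""
        h = self.encoder(x).view(-1, self.channels, self.base_len)
        logits = self.deconv(h).squeeze(1)    # (batch, n_bins)
        return torch.softmax(logits, dim=-1)  # normalized histogram over the bins

# Usage: train with the negative log-likelihood of the bin containing each target y.
model = DeconvDensityHead(x_dim=5, n_bins=64)
probs = model(torch.randn(8, 5))              # (8, 64); each row sums to 1
```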
Published
2022-06-28
How to Cite
Chen, B., Islam, M., Gao, J., & Wang, L. (2022). Deconvolutional Density Network: Modeling Free-Form Conditional Distributions. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6183-6192. https://doi.org/10.1609/aaai.v36i6.20567
Issue
Section
AAAI Technical Track on Machine Learning I