Sep 8, 2017 · In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence and to enable a highly parallelized implementation.
SRU, or Simple Recurrent Unit, is a recurrent neural unit with a light form of recurrence. SRU exhibits the same level of parallelism as convolutional and feed-forward networks.
SRU is a recurrent unit that can run over 10 times faster than cuDNN LSTM, with no loss of accuracy across a range of tasks.
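The parallelism claim above can be illustrated with a minimal NumPy sketch of one common SRU formulation (a simplified, single-layer illustration, not the official implementation; the weight names `W`, `Wf`, `Wr` and the use of a tanh on the cell state are assumptions of this sketch). The key property is that every matrix multiplication depends only on the input sequence, so it can be computed for all time steps at once; only a cheap elementwise recurrence remains sequential.

```python
import numpy as np

def sru_forward(x, W, Wf, bf, Wr, br):
    """Sketch of an SRU-style forward pass over a (T, d) input sequence.

    W, Wf, Wr are (d, d) weight matrices; bf, br are (d,) biases.
    """
    T, d = x.shape
    # Heavy work: all matmuls depend only on x, so they can be batched
    # over the whole sequence (this is where the parallelism comes from).
    x_tilde = x @ W                             # candidate values, (T, d)
    f = 1.0 / (1.0 + np.exp(-(x @ Wf + bf)))    # forget gates,     (T, d)
    r = 1.0 / (1.0 + np.exp(-(x @ Wr + br)))    # reset gates,      (T, d)

    # Light work: only this elementwise scan is sequential in t.
    c = np.zeros(d)
    h = np.empty((T, d))
    for t in range(T):
        c = f[t] * c + (1.0 - f[t]) * x_tilde[t]        # internal state
        h[t] = r[t] * np.tanh(c) + (1.0 - r[t]) * x[t]  # highway output
    return h

# Usage: random weights, a length-6 sequence of 4-dim vectors.
rng = np.random.default_rng(0)
d, T = 4, 6
x = rng.standard_normal((T, d))
W, Wf, Wr = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
bf = br = np.zeros(d)
h = sru_forward(x, W, Wf, bf, Wr, br)
print(h.shape)  # (6, 4)
```

Because the sequential loop involves only elementwise products and sums, it is far cheaper per step than an LSTM's recurrent matrix multiplications, which is the source of the speedup reported above.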
The Simple Recurrent Unit (SRU), first proposed by Lei et al. (2018) [34], is a light recurrent unit that balances model capacity and scalability.
Common recurrent neural architectures scale poorly due to the intrinsic difficulty of parallelizing their state computations.
Recurrent neural networks and their many variants have been widely used in language modeling, text generation, machine translation, speech recognition, and related tasks.