GrantExtractor: Accurate grant support information extraction from biomedical fulltext based on BiLSTM-CRF
S Dai, Y Ding, Z Zhang, W Zuo… - IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2019 - ieeexplore.ieee.org
Grant support (GS) in the MEDLINE database refers to funding agencies and contract numbers. It is important for funding organizations to track their funding outcomes through this GS information, yet accurately and automatically extracting funding information from biomedical literature remains challenging. In this paper, we present a pipeline system called GrantExtractor that accurately extracts GS information from fulltext biomedical literature. GrantExtractor effectively integrates several advanced machine learning techniques. In particular, we first use a sentence classifier to identify funding sentences in each article. A bidirectional LSTM with a CRF layer (BiLSTM-CRF), together with pattern matching, is then used to extract grant number and agency entities from these funding sentences. After removing noisy numbers with a multi-class model, we finally match each grant number with its corresponding agency. Experimental results on benchmark datasets demonstrate that GrantExtractor clearly outperforms all baseline methods. GrantExtractor also won first place in Task 5C of the 2017 BioASQ challenge, achieving a micro-recall of 0.9526 on 22,610 articles. Moreover, GrantExtractor achieved a micro F-measure as high as 0.90 in extracting grant number-agency pairs.
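The abstract names only the model family, not its implementation. As an illustration, the sketch below shows a minimal BiLSTM-CRF tagger of the kind described, labelling grant-number and agency spans in funding sentences with BIO tags. It uses PyTorch and the pytorch-crf package; the tag set, layer sizes, and the BiLSTMCRFTagger class are illustrative assumptions, not the authors' code.

```python
# Minimal BiLSTM-CRF tagger sketch (assumed tag set and dimensions).
# Requires: pip install torch pytorch-crf
import torch
import torch.nn as nn
from torchcrf import CRF

TAGS = ["O", "B-GRANT_NUM", "I-GRANT_NUM", "B-AGENCY", "I-AGENCY"]

class BiLSTMCRFTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_tags=len(TAGS)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            bidirectional=True, batch_first=True)
        self.emit = nn.Linear(hidden_dim, num_tags)   # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)    # learns tag-transition scores

    def _emissions(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))
        return self.emit(out)

    def loss(self, token_ids, tags, mask):
        # Negative log-likelihood of the gold tag sequence under the CRF.
        return -self.crf(self._emissions(token_ids), tags, mask=mask, reduction="mean")

    def decode(self, token_ids, mask):
        # Viterbi decoding: best tag sequence for each sentence in the batch.
        return self.crf.decode(self._emissions(token_ids), mask=mask)

# Toy usage with random token ids standing in for tokenized funding sentences.
model = BiLSTMCRFTagger(vocab_size=20000)
ids = torch.randint(1, 20000, (2, 12))          # batch of 2 sentences, 12 tokens each
mask = torch.ones(2, 12, dtype=torch.bool)      # no padding in this toy batch
gold = torch.zeros(2, 12, dtype=torch.long)     # all-"O" gold tags, for illustration
nll = model.loss(ids, gold, mask)
pred = model.decode(ids, mask)                  # list of predicted tag-index sequences
```

In the pipeline described above, spans decoded this way would be combined with pattern matching, filtered by the multi-class noise model, and then paired with their agencies in the final matching step.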