European Journal of Emerging Data Science and Machine Learning

Comparative Efficacy of Transformer and Recurrent Neural Networks in Automated Blood Clot Detection from Clinical Text

Authors
  • Dr. Tomas H. Eriksson
    Department of Computer Science and Engineering, Lund University, Sweden
  • Reem F. Al-Sharif
    Department of Health Informatics, American University of Beirut, Lebanon
Keywords:
Blood Clot Detection, Clinical Text Analysis, Natural Language Processing (NLP), Transformer Models
Abstract

The accurate and timely identification of medical conditions from electronic health records (EHRs) is crucial for patient care, research, and public health surveillance. Blood clot detection in particular poses a significant challenge because mentions of thrombi in unstructured clinical text are often nuanced and implicit. This study presents a comparative analysis of four neural architectures: Bidirectional Encoder Representations from Transformers (BERT), the Robustly Optimized BERT Pretraining Approach (RoBERTa), the Text-to-Text Transfer Transformer (T5), and recurrent neural networks (RNNs), assessing their efficacy in identifying thrombus-related information from clinical narratives. We evaluate these models, each of which brings distinct strengths in natural language understanding, on a proprietary dataset of de-identified clinical notes, reporting precision, recall, and F1-score. Our findings indicate that Transformer-based models, particularly those pre-trained on biomedical corpora, significantly outperform traditional RNNs, demonstrating a superior ability to capture the complex contextual dependencies vital for nuanced clinical concept extraction.
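
The evaluation pipeline summarised in the abstract can be sketched with standard open-source tooling. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: it assumes a Hugging Face BERT-family checkpoint, a handful of invented placeholder notes in place of the proprietary de-identified dataset, and scikit-learn for the precision, recall, and F1 metrics.

    # Illustrative sketch only (not the authors' code): fine-tune a BERT-family
    # encoder for binary blood-clot mention classification and score it with
    # precision, recall, and F1. The checkpoint name, hyperparameters, and the
    # clinical snippets below are assumptions; the study's dataset is proprietary.
    import torch
    from torch.utils.data import DataLoader, Dataset
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    from sklearn.metrics import precision_recall_fscore_support

    MODEL_NAME = "emilyalsentzer/Bio_ClinicalBERT"  # assumed biomedical checkpoint; any BERT/RoBERTa model fits

    # Invented placeholder notes (label 1 = thrombus-related, 0 = not).
    texts = [
        "CT angiogram demonstrates a filling defect consistent with pulmonary embolism.",
        "Doppler ultrasound negative for deep vein thrombosis in bilateral lower extremities.",
        "Patient presents with chronic knee pain; no acute findings.",
        "Started on a heparin drip for suspected DVT of the left femoral vein.",
    ]
    labels = [1, 0, 0, 1]

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    class NoteDataset(Dataset):
        """Wraps tokenized notes so the default DataLoader collation can batch them."""
        def __init__(self, texts, labels):
            self.enc = tokenizer(texts, truncation=True, padding=True, max_length=256, return_tensors="pt")
            self.labels = torch.tensor(labels)
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, i):
            item = {k: v[i] for k, v in self.enc.items()}
            item["labels"] = self.labels[i]
            return item

    loader = DataLoader(NoteDataset(texts, labels), batch_size=2, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    model.train()
    for epoch in range(3):            # a real run would use far more data, epochs, and a held-out split
        for batch in loader:
            optimizer.zero_grad()
            out = model(**batch)      # loss is computed internally when labels are supplied
            out.loss.backward()
            optimizer.step()

    # Evaluation (on the training snippets, purely for illustration).
    model.eval()
    with torch.no_grad():
        enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
        preds = model(**enc).logits.argmax(dim=-1).tolist()

    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="binary")
    print(f"precision={precision:.3f}  recall={recall:.3f}  f1={f1:.3f}")

An RNN baseline of the kind compared in the study would replace the Transformer encoder with, for example, a bidirectional LSTM over word embeddings, while keeping the same labels and evaluation metrics.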


Published
2024-12-20
Section
Articles
License

All articles published by The Parthenon Frontiers and its associated journals are distributed under the terms of the Creative Commons Attribution (CC BY 4.0) International License unless otherwise stated. 

Authors retain full copyright of their published work. By submitting their manuscript, authors agree to grant The Parthenon Frontiers a non-exclusive license to publish, archive, and distribute the article worldwide. Authors are free to:

  • Share their article on personal websites, institutional repositories, or social media platforms.

  • Reuse their content in future works, presentations, or educational materials, provided proper citation of the original publication.

How to Cite

Eriksson, T. H., & Al-Sharif, R. F. (2024). Comparative Efficacy of Transformer and Recurrent Neural Networks in Automated Blood Clot Detection from Clinical Text. European Journal of Emerging Data Science and Machine Learning, 1(01), 42-54. https://parthenonfrontiers.com/index.php/ejedsml/article/view/71
