A Handbook of Computational Linguistics: Artificial Intelligence in Natural Language Processing

Learning Techniques for Natural Language Processing: An Overview

Author(s): Shahina Anjum* and Sunil Kumar Yadav

Pp: 38-60 (23)

DOI: 10.2174/9789815238488124020005


Abstract

Natural Language Processing (NLP) is a fast-growing field concerned with developing algorithms and models that enable machines to comprehend, translate, and generate human language. NLP has many applications, including machine translation, sentiment analysis, text summarization, speech recognition, and chatbot development. This chapter presents an overview of learning techniques used in NLP, covering the supervised, unsupervised, and reinforcement learning paradigms of machine learning. It discusses several popular techniques, such as Support Vector Machines (SVMs) and Bayesian networks, which are widely used for text classification, as well as neural networks and deep learning models, including Transformers, Recurrent Neural Networks, and Convolutional Neural Networks. It also covers traditional approaches such as Hidden Markov Models, N-gram models, and probabilistic graphical models. Recent advancements in NLP, such as transfer learning, domain adaptation, and multi-task learning, are also considered. Moreover, the chapter addresses challenges and practical considerations in applying these techniques, including data pre-processing, feature extraction, model evaluation, and dealing with limited data and domain-specific constraints.
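As a concrete illustration of the Bayesian approach to text classification mentioned above, the following is a minimal sketch of a multinomial Naive Bayes sentiment classifier written from scratch. All data, labels, and the whitespace tokenization here are illustrative assumptions, not material from the chapter; practical work would typically use an established library instead.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTextClassifier:
    """Minimal multinomial Naive Bayes for text classification (sketch)."""

    def fit(self, docs, labels):
        self.class_counts = Counter(labels)          # documents per class
        self.word_counts = defaultdict(Counter)      # word counts per class
        self.vocab = set()
        for doc, label in zip(docs, labels):
            for word in doc.lower().split():         # naive tokenization
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        self.total_docs = len(labels)

    def predict(self, doc):
        words = doc.lower().split()
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # Log prior: P(class)
            score = math.log(self.class_counts[label] / self.total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in words:
                # Log likelihood with Laplace (add-one) smoothing,
                # so unseen words do not zero out the probability.
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Toy training data (hypothetical sentiment examples)
train_docs = ["great movie loved it", "wonderful acting great plot",
              "terrible film boring", "awful waste of time"]
train_labels = ["pos", "pos", "neg", "neg"]

clf = NaiveBayesTextClassifier()
clf.fit(train_docs, train_labels)
print(clf.predict("loved the great acting"))  # → pos
```

The same decision rule (argmax over log prior plus smoothed log likelihoods) underlies many classical NLP classifiers; swapping the independence assumption for a margin-based objective leads to the SVM approach the chapter also surveys.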

© 2024 Bentham Science Publishers