As anyone may need or generate large amounts of information at any given time, it becomes increasingly important to handle such data efficiently. However, quickly distinguishing relevant from irrelevant information, and responding adequately to newly obtained data of interest, remain cumbersome tasks. Therefore, a great deal of research aimed at supporting this growing need for information by means of Natural Language Processing (NLP) has been conducted over the last decades. This paper reviews the state of the art in information extraction from text. A distinction is made between statistics-based, pattern-based, and hybrid approaches to NLP. It is concluded that which method suits best depends on the user's needs, as each approach to natural language processing has its own advantages and disadvantages.
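To illustrate the pattern-based family of approaches mentioned in the abstract (this sketch is not drawn from the paper itself), a minimal rule-based extractor might use hand-written regular expressions to pull structured facts out of free text:

```python
import re

# Hypothetical illustration: a pattern-based extractor that finds date
# expressions with a single hand-written rule. Real pattern-based systems
# use far richer rule sets, but the principle is the same: explicit,
# human-authored patterns rather than learned statistical models.
DATE_PATTERN = re.compile(
    r"\b\d{1,2}\s+(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s+\d{4}\b"
)

def extract_dates(text):
    """Return all date-like strings matched by the hand-written pattern."""
    return DATE_PATTERN.findall(text)

text = "The workshop took place on 25 Jan 2010 in Nijmegen."
print(extract_dates(text))  # ['25 Jan 2010']
```

Statistics-based approaches would instead learn such extraction decisions from annotated corpora; hybrid systems combine the two, which is why the best choice depends on the available data and the user's needs.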
Number of pages: 2
Publication status: Published - 25 Jan 2010
Event: Tenth Dutch-Belgian Information Retrieval Workshop (DIR 2010), Nijmegen, The Netherlands
Duration: 25 Jan 2010 → 25 Jan 2010