News Release

Exploring Different Uses for Text Mining and Analysis

August 29, 2017

The newest development in big data analysis is centered on breakthroughs in text mining: computer programs can now comb through vast amounts of written content to discover new information. These systems can effectively categorize fragments of text, create links between the extracted materials, and formulate new ideas based on large-scale data analysis. While traditional data sources – like transaction records or consumer surveys – still have value in making business decisions, text mining and analysis greatly expands the pool of potentially valuable information.

Hudson has followed developments in this field closely and has developed its own ways to maximize the potential of text mining and analysis to better serve our clients. We stay apprised of text mining's latest techniques and methodologies, and we consider this element of big data analysis a crucial component of the services we provide to customers.

These processes arose from a confluence of recent innovations in pattern recognition, artificial intelligence, and linguistic modeling. With an overwhelmingly large set of data – which is often unorganized, poorly structured, and sometimes fraught with grammatical or spelling errors – the primary objective becomes discovering “high-quality” information amidst the noise. The definition of “high-quality” can vary with the objectives of the analysis, but it usually centers on the relevance and novelty of the text.
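To make those two criteria concrete, the short sketch below scores a text fragment for relevance (overlap with a query's terms) and novelty (how little it overlaps with fragments already seen). The scoring functions, weights, and sample text are illustrative assumptions, not a fixed method.

```python
# Illustrative sketch: scoring text fragments for "high-quality" information.
# Relevance = overlap with query terms; novelty = dissimilarity to fragments
# already seen. Functions and weights are assumptions for demonstration only.
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def relevance(fragment, query):
    """Fraction of query terms that appear in the fragment."""
    q = tokens(query)
    return len(tokens(fragment) & q) / len(q) if q else 0.0

def novelty(fragment, seen_fragments):
    """1.0 when the fragment shares no vocabulary with anything seen so far."""
    frag = tokens(fragment)
    if not frag or not seen_fragments:
        return 1.0
    seen = set().union(*(tokens(s) for s in seen_fragments))
    return 1.0 - len(frag & seen) / len(frag)

def quality(fragment, query, seen_fragments, w_rel=0.6, w_nov=0.4):
    """Weighted blend of relevance and novelty (weights are illustrative)."""
    return w_rel * relevance(fragment, query) + w_nov * novelty(fragment, seen_fragments)

if __name__ == "__main__":
    query = "battery life smartphone"
    seen = ["The phone ships with a 6.1 inch display."]
    candidate = "Battery life on the smartphone easily lasts two full days."
    print(round(quality(candidate, query, seen), 3))
```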

The first step in the analytical process is breaking the text into smaller pieces of information that can be easily categorized. The specific techniques vary with the type of information being sought, but the steps generally follow a common template: first, the text is extracted from its source and “cleaned” (alternate spellings and grammatical structures are normalized to promote consistency in the material); then the text is split into sentences and the parts of speech are labeled and categorized; finally, the material is parsed for relevant keywords, search terms, and particular patterns of information.
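The following is a minimal sketch of that preparation pipeline using only Python's standard library. The normalization table, suffix-based tagging heuristic, and stopword list are simplified stand-ins for the production-grade tools such systems actually use.

```python
# Minimal sketch of the text-preparation steps described above: normalize
# ("clean") raw text, split it into sentences, tag rough parts of speech,
# and pull out keyword candidates.
import re
from collections import Counter

NORMALIZE = {"colour": "color", "analyse": "analyze", "&": "and"}  # illustrative
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "for"}

def clean(text):
    """Lowercase, apply spelling normalizations, collapse whitespace."""
    text = text.lower()
    for variant, standard in NORMALIZE.items():
        text = text.replace(variant, standard)
    return re.sub(r"\s+", " ", text).strip()

def split_sentences(text):
    """Naive sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tag(word):
    """Very rough part-of-speech guess based on common suffixes."""
    if word.endswith("ing") or word.endswith("ed"):
        return "VERB?"
    if word.endswith("ly"):
        return "ADV?"
    return "NOUN?"

def keywords(sentences, top_n=5):
    """Most frequent non-stopword tokens across all sentences."""
    counts = Counter(
        w for s in sentences for w in re.findall(r"[a-z']+", s) if w not in STOPWORDS
    )
    return counts.most_common(top_n)

if __name__ == "__main__":
    raw = "The analysts analyse colour trends.  Sales of the colour red are rising quickly."
    sentences = split_sentences(clean(raw))
    tagged = [[(w, tag(w)) for w in re.findall(r"[a-z']+", s)] for s in sentences]
    print(sentences)
    print(tagged[0])
    print(keywords(sentences))
```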

As the technology has become more sophisticated, data scientists have sought more applications for text mining and analysis. One common application has been to use automated processes to discover links between different articles in research publications; advances in data visualization software have made it easier to trace connections between cited data across different articles, publications, and even disciplines. Text mining services can also help researchers quantify the impact of their research contributions without the tedious legwork previously required to uncover that type of information.
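One simple way to link articles through their references, sketched below under illustrative assumptions, is to invert each article's citation list and connect any two articles that share a cited work. The sample records are invented for demonstration.

```python
# Hedged sketch of linking articles through shared citations: build a map
# from each cited work to the articles that cite it, then report pairs of
# articles connected by a common reference.
from collections import defaultdict
from itertools import combinations

articles = {
    "A: Gene expression in yeast": {"Smith 2010", "Lee 2014"},
    "B: Statistical models of expression": {"Lee 2014", "Ng 2016"},
    "C: Yeast metabolism survey": {"Smith 2010"},
}

# Invert the citation lists: cited work -> set of citing articles.
cited_by = defaultdict(set)
for article, references in articles.items():
    for ref in references:
        cited_by[ref].add(article)

# Any two articles sharing a reference are linked.
links = defaultdict(set)
for ref, citing in cited_by.items():
    for a, b in combinations(sorted(citing), 2):
        links[(a, b)].add(ref)

for (a, b), shared in links.items():
    print(f"{a} <-> {b} via {sorted(shared)}")
```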

The potential of text mining research papers extends well beyond tracking publication counts and citation data – though even those areas yield more meaningful information through text mining than through simple counting statistics. For example, these processes can automatically summarize articles to help researchers quickly judge relevance, can surface hidden evidence by analyzing relationships not explicitly stated in the text, and can offer a more comprehensive form of plagiarism detection.
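One basic form of the automatic summarization mentioned above is extractive scoring: rank sentences by the frequency of their content words and keep the top few. The sketch below shows that idea under simplified assumptions; it is not the specific method Hudson or any particular tool uses.

```python
# Bare-bones extractive summarization: score each sentence by the corpus-wide
# frequency of its non-stopword tokens and keep the highest-scoring sentences.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "that", "this"}

def summarize(text, max_sentences=2):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Preserve the original ordering of the chosen sentences.
    return " ".join(s for s in sentences if s in ranked)

if __name__ == "__main__":
    article = ("Text mining extracts structure from raw documents. "
               "Researchers use text mining to trace citations. "
               "Weather was mild this week.")
    print(summarize(article, max_sentences=2))
```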

Text mining services can also be applied to patent identification and acquisition processes. Instead of devoting staff hours to manual research, text mining can automate searches of patent portfolios for relevant information and help target specific patents for acquisition. While this covers only a small part of the overall process, it makes that part quicker and more efficient and promotes more sensible resource allocation – if the research can be automated (and now it can), why would firms continue the laborious practices of the past?
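A search like that can be approximated with standard TF-IDF ranking, as in the sketch below: each patent abstract is scored against a query and the highest-scoring documents are surfaced for review. The tiny corpus and scoring details are illustrative assumptions, not a description of any particular patent database.

```python
# Sketch of automating a patent-portfolio search with TF-IDF ranking.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9']+", text.lower())

def tfidf_scores(query, documents):
    """Return documents ranked by summed TF-IDF weight of the query terms."""
    doc_tokens = {doc_id: tokenize(text) for doc_id, text in documents.items()}
    n_docs = len(documents)
    results = []
    for doc_id, toks in doc_tokens.items():
        counts = Counter(toks)
        score = 0.0
        for term in tokenize(query):
            df = sum(1 for t in doc_tokens.values() if term in t)
            if df == 0 or not toks:
                continue
            tf = counts[term] / len(toks)
            idf = math.log((1 + n_docs) / (1 + df)) + 1
            score += tf * idf
        results.append((score, doc_id))
    return sorted(results, reverse=True)

if __name__ == "__main__":
    patents = {
        "US-001": "Lithium ion battery electrode with silicon coating.",
        "US-002": "Method for wireless charging of a battery pack.",
        "US-003": "Textile weaving apparatus with automated shuttle.",
    }
    for score, doc_id in tfidf_scores("battery charging", patents):
        print(f"{doc_id}: {score:.3f}")
```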

Various other tasks can be accomplished through text mining and analysis. Sentiment analysis helps companies better understand consumer reactions to products or services. Automated content analysis can evaluate tone, preferences, and bias far better than the time-consuming methods of the past. Social media content can be distilled into a valuable resource for companies seeking to better understand public sentiment and consumer preferences.
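A minimal lexicon-based take on that kind of sentiment analysis is sketched below: count positive and negative words in each post and report an overall score. The word lists are tiny placeholders; real systems rely on much larger lexicons or trained models.

```python
# Lexicon-based sentiment scoring sketch for short social media posts.
import re

POSITIVE = {"great", "love", "excellent", "fast", "reliable"}
NEGATIVE = {"terrible", "hate", "slow", "broken", "disappointing"}

def sentiment(post):
    """Return a score in [-1, 1]: +1 all positive words, -1 all negative."""
    words = re.findall(r"[a-z']+", post.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

if __name__ == "__main__":
    posts = [
        "Love the new update, the app feels fast and reliable.",
        "Checkout is broken again, such a disappointing experience.",
    ]
    for p in posts:
        print(f"{sentiment(p):+.2f}  {p}")
```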

Certainly, as machine learning becomes more sophisticated, companies will be able to extend the scope of these applications further and obtain better, more precise automated readings of text sources. Hudson is committed to using this technology to help clients not only make better use of the massive amount of untapped text data found on the Internet, but also understand specific consumer behaviors and potential areas for growth.