
How did natural language processing technology evolve from AI systems to NLP?

Author: QINSUN | Released: 2024-01

The history of natural language processing is a story full of twists and turns. It began with largely fruitless research, passed through decades of productive work, and has arrived at an era in which we are still probing the limits of the field. Today, let's explore the development of this branch of AI science together.

The Origin of Natural Language Processing (NLP) - How was this idea born?

Natural language processing originated in the late 1940s, when the first machine translation efforts began: to follow human commands, machines had to process natural language and recognize words. Early work focused on the morphology, syntax, and semantics of language. In 1950, Alan Turing published "Computing Machinery and Intelligence", the paper that proposed what is now called the Turing test: judging a machine by its ability to hold a natural language conversation indistinguishable from a human's. Turing wrote further on machine intelligence, but his work in this area did not continue for long.

Much earlier, in 1936, he had written his famous paper on computable numbers, which laid the theoretical foundation for the modern computer. He also introduced the idea of machines solving problems that humans cannot solve on their own: processing information and performing tasks beyond human capability or time limits, such as playing chess at lightning speed.

The Birth of Natural Language Processing (NLP) - Who Made It Possible?

In 1955, John McCarthy coined the term "artificial intelligence" in his proposal for the 1956 Dartmouth workshop, which suggested, among other things, that machines could be made to use language. In 1958, he presented "Programs with Common Sense", describing the Advice Taker, a hypothetical program that would reason about everyday problems by manipulating sentences in a formal language.

In 1958, Frank Rosenblatt created the perceptron, the first trainable neural network. Such networks process information and solve pattern recognition and classification problems. In 1969, Marvin Minsky and Seymour Papert published their influential book "Perceptrons", which exposed the limitations of single-layer networks and dampened neural network research for years. A minimal perceptron is sketched below.
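To make the idea concrete, here is a minimal sketch of Rosenblatt's perceptron learning rule in Python. The AND-gate data, learning rate, and epoch count are illustrative assumptions, not details of any historical system:

```python
# Minimal perceptron: learn weights w and bias b so that
# sign(w.x + b) reproduces the training labels (+1 / -1).

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:                      # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy task: the AND function, which is linearly separable.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
print(w, b)  # weights of a separating hyperplane for AND
```

Because the update fires only on misclassified examples, the rule converges whenever the data are linearly separable; Minsky and Papert's point was precisely that many interesting functions (such as XOR) are not.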

In 1966, an artificial intelligence company called General Automation Incorporated was established, focusing on natural language processing and pattern recognition.

What changes have occurred in the evolution of natural language processing (NLP)?

Over time, different analytical methods gradually developed. Scientists at the University of Edinburgh and Cornell University created early computational models of language in 1964. The first computer program that could converse with people was ELIZA, created by Joseph Weizenbaum at the Massachusetts Institute of Technology in 1966.

In 1966, the first professional conference on computer speech and language processing was held. In 1967, a Russian-to-English machine translation program became available, allowing English-speaking scientists to read Soviet scientific publications.

The Development of Natural Language Processing (NLP) - How Has It Evolved?

It wasn't until 1979 that another big step was taken: the first simple English "chatbot" was born.

In 1984, IBM's new product, Chatterbox, was able to communicate with people in natural language. It used an early form of dialogue management to filter out conversations that were of no interest to users.

Earlier, in 1972, the psychiatrist Kenneth Colby had created a program called PARRY, which simulated a paranoid patient: it could hold conversations with psychiatrists, but could not answer questions about its own life.

By 1990, ELIZA and PARRY were regarded as toy examples of artificial intelligence, because they relied on simple pattern matching techniques and could not truly think or understand natural language the way humans do (the sketch below shows how such pattern matching works). We still cannot create a chatbot that convincingly passes the Turing test.
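Here is a minimal sketch of ELIZA-style pattern matching in Python. The rules are invented for illustration and are far simpler than Weizenbaum's original script:

```python
import random
import re

# Each rule pairs a regular expression with canned response templates.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"i am (.*)",   ["How long have you been {0}?", "Why are you {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "I see."]),   # catch-all fallback
]

def respond(utterance: str) -> str:
    """Answer by matching the first applicable rule; no understanding involved."""
    text = utterance.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())

print(respond("I need a holiday"))   # e.g. "Why do you need a holiday?"
```

The original ELIZA also reflected pronouns ("my" became "your") and ranked its rules, but the core mechanism was the same: surface patterns and template responses, with no model of meaning.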

In 1994, statistical machine translation brought a significant breakthrough to natural language processing, allowing machines to read text some 400 times faster than a human, although the quality still fell short of human translation.

A few years later, in 1997, the field took another step forward with an algorithm for parsing and understanding speech, regarded as one of the notable achievements of artificial intelligence.

In 2006, Google launched a translation feature that requires no human intervention. It uses statistical machine translation, learning from millions of texts to translate words between more than 60 languages. The algorithms have since been improved, and Google Translate can now translate over 100 languages. The sketch below shows the core idea of learning word translation probabilities from parallel text.
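To illustrate how a statistical system learns to translate from raw text alone, here is a minimal Python sketch of IBM Model 1, one of the classic word alignment models behind early statistical machine translation. The three-sentence French-English corpus is invented for illustration:

```python
from collections import defaultdict

# Toy parallel corpus of (French words, English words) sentence pairs.
corpus = [
    ("la maison".split(),    "the house".split()),
    ("la fleur".split(),     "the flower".split()),
    ("maison bleue".split(), "blue house".split()),
]

f_vocab = {f for fs, _ in corpus for f in fs}
e_vocab = {e for _, es in corpus for e in es}

# t[(e, f)] approximates P(english word e | french word f); start uniform.
t = {(e, f): 1.0 / len(e_vocab) for f in f_vocab for e in e_vocab}

for _ in range(10):                           # EM iterations
    count = defaultdict(float)                # expected co-occurrence counts
    total = defaultdict(float)
    for fs, es in corpus:
        for e in es:
            norm = sum(t[(e, f)] for f in fs)
            for f in fs:
                c = t[(e, f)] / norm          # E-step: fractional alignments
                count[(e, f)] += c
                total[f] += c
    for e, f in t:
        t[(e, f)] = count[(e, f)] / total[f]  # M-step: re-estimate t

# After training, "house" emerges as the likeliest translation of "maison".
best = max(e_vocab, key=lambda e: t[(e, "maison")])
print(best, round(t[(best, "maison")], 2))
```

Scaled up to millions of sentence pairs and combined with language models and phrase tables, this counting-and-re-estimation idea is what let systems of that era translate without hand-written rules.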

In 2010, IBM announced the development of a system called Watson, which could understand questions posed in natural language and then use artificial intelligence to answer them from sources that included Wikipedia; in 2011 it went on to defeat human champions on the quiz show Jeopardy!.

In 2016, Microsoft launched a chatbot called Tay. It was created to learn from interactions with humans on Twitter and other platforms in order to engage people online, but before long the bot began posting offensive content, and it was shut down after only 16 hours.

By 2021, the hype around machine learning had reached a peak.

What are the limitations of Natural Language Processing (NLP)?

One open challenge is improving natural language processing in interactive dialogue systems, including knowledge-grounded dialogue and dialogue agents such as Siri or Alexa, the assistants we use every day. There is still a long way to go before they can respond like humans.

Another limitation is that most machine learning algorithms are designed not for real-time settings such as chatbots, but for offline processing of large datasets with many input variables, which means they still cannot anticipate future events or cover every possible scenario.

What do we want to achieve through natural language processing (NLP)?

Scientists hope to create algorithms that can grasp the meaning and intent of a sentence from as few words as possible, and extract information from it. In that sense there is no fixed limit to what we might achieve through natural language processing, as long as it supports the activities of daily human life. Developing NLP is very helpful to people in their everyday lives; there are threats behind its development, but there are also many opportunities.

Natural language processing helps people speak and read more fluently in their daily lives, and lets them dictate text faster than they could type it on a keyboard. But one of the main threats, some experts say, is that developing natural language processing will put humans out of work as they are replaced by machines.

However, others say that natural language processing will bring new jobs and opportunities to humanity precisely because it is so complex. As long as the development of NLP supports the activities of daily human life, we may be able to find the balance between its limitations and the freedom it offers.
