Editor’s note: This article is from the WeChat public account “Academic Headlines” (ID: SciTouTiao), by Zhou Ziyuan. Published by 36Kr with authorization.

The latest wave of AI progress, built on machine learning and big data, has given us tools that respond to spoken commands and self-driving cars that recognize objects on the road ahead. Yet the common sense of these so-called “smart” products is essentially zero.
Amazon’s intelligent assistant Alexa and Apple’s intelligent assistant Siri can pull facts about a plant from Wikipedia, but they do not know what happens when that plant is left in the dark. Likewise, a program that recognizes obstacles on the road ahead usually cannot understand why avoiding a crowd of people is more important than avoiding a traffic jam.
For artificial intelligence to become as smart as humans, common-sense reasoning is an essential ability. But how to give artificial intelligence common sense has been an open problem for more than 50 years. Ernest Davis, a professor at New York University, has studied the common-sense problem in artificial intelligence for decades. He believes that common sense is essential for advancing robotics: machines need to master basic concepts such as time, causality, and social interaction in order to show true intelligence, and this is the biggest obstacle we currently face.
Common sense is a major blind spot for artificial intelligence
The term “common sense” refers not to knowledge of a specific subject area but to widely reusable background knowledge that almost everyone has, together with the ability to reason over it. For example, people go to restaurants to eat, not merely to order and pay; throwing a match onto a pile of firewood means someone is trying to light a fire. Because most common-sense knowledge is implicit, it is difficult to express explicitly.
Early researchers believed that recording all the facts of the real world in a knowledge base would be the first step toward automated common-sense reasoning. This approach turned out to be far harder than it sounds: no matter how much knowledge a knowledge base collects, it cannot capture the ambiguity and overlapping relationships that pervade human common-sense reasoning. David Ferrucci, who once led the team behind IBM’s Watson computer system, is now explaining a children’s story to a newly built machine. In the story, Fernando and Zoey buy some plants; Fernando puts his plant on the windowsill, while Zoey leaves hers in her dark room.
A few days later, Fernando’s plant has grown lush, but the leaves of Zoey’s have turned brown. After Zoey moves her plant to the windowsill, its leaves begin to recover. A question appears on the screen in front of Ferrucci: “Did Fernando put the plant on the windowsill because he wanted to make it healthier? Does this make sense? A sunny window is well lit, so a plant there can stay healthy.” The question is part of an artificial intelligence system Ferrucci built to learn how the world works. We can easily grasp why Fernando put his plant on the windowsill; for an AI system, this is hard.
When humans read a text, they use common-sense reasoning to understand the narrative, which is built from events connected by logic, causality, and so on. For a machine to do the same, it must acquire an open-ended amount of relevant common sense, and the more accurate that knowledge, the better. Ferrucci and his new company, Elemental Cognition, hope to fix this major blind spot of modern artificial intelligence by teaching machines to acquire and apply everyday knowledge so they can communicate with humans, reason, and observe their surroundings. A researcher can answer the question about Fernando’s plant by clicking the “Yes” button on the screen.
On a server somewhere, an AI program called CLARA adds this information to its library of facts and concepts, building up this artificial common sense. Like an endlessly curious child, CLARA keeps asking Ferrucci questions about the plant story, trying to “understand” why things turn out the way they do. “Can we make machines really understand what they read?” Ferrucci says. “It’s very hard, but that is exactly what Elemental Cognition wants to achieve.”
How AI learns common sense
Although the field of artificial intelligence has studied common sense for a long time, progress has been surprisingly slow. At first, researchers tried to translate common sense into the language of computers: formal logic. They believed that if all the unwritten rules of human common sense could be written down in logical form, computers could reason with them the way they do arithmetic. But this method relies on manual labor and does not scale. Michael Witbrock, an artificial intelligence researcher at the University of Auckland in New Zealand, says that the amount of knowledge that can be conveniently expressed in logical form is limited in principle, and in practice the approach has proved very difficult to carry out.
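To make the logic-rule approach concrete, here is a minimal sketch of hand-written if-then rules applied by forward chaining. The rules and fact names below are invented for illustration (echoing the plant story above); they are not taken from any real system, and a realistic rule base would need vastly more rules, which is exactly the scaling problem Witbrock describes.

```python
# Toy rule base: each rule maps a set of preconditions to a conclusion.
# These particular rules are hypothetical, chosen to mirror the plant story.
RULES = [
    ({"is_plant", "in_dark"}, "unhealthy"),
    ({"is_plant", "on_windowsill"}, "gets_sunlight"),
    ({"is_plant", "gets_sunlight"}, "healthy"),
]

def forward_chain(facts):
    """Repeatedly apply rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for preconditions, conclusion in RULES:
            if preconditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Fernando's plant: windowsill -> sunlight -> healthy.
print(forward_chain({"is_plant", "on_windowsill"}))
```

The chaining itself is trivial; the unscalable part is writing down, by hand, every rule that human common sense silently relies on.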
Another path to common sense is deep learning with neural networks. Researchers design such systems to mimic the interconnected layers of neurons in a biological brain and to learn patterns without programmers specifying them in advance. Over the past decade or so, increasingly complex neural networks trained on large amounts of data have transformed research in computer vision and natural language processing. But although neural networks are powerful and flexible (they have enabled autonomous driving and beaten world-class players at chess and Go), these systems still make many common-sense mistakes that are ridiculous, and sometimes even fatal.
In 2011, the Watson computer parsed large volumes of text to find answers for the quiz show Jeopardy!, but its understanding of common sense remained very limited. Deep learning then began to take off in artificial intelligence. By teaching computers to recognize faces, transcribe speech, and perform other tasks given large amounts of data, deep learning has been widely adopted, and in recent years it has made new breakthroughs in language understanding. Specific artificial neural network models can now answer questions or generate coherent text, and Google, Baidu, Microsoft, and OpenAI have all built increasingly sophisticated language-processing models.
Take CLARA as an example. Its goal is to combine deep learning with knowledge built into the machine through explicit logical rules. It mainly uses statistical methods to identify concepts such as nouns and verbs in sentences. Knowledge about specific topics comes from Amazon Mechanical Turk workers and is then built into CLARA’s database. CLARA combines the facts it is given with a deep-learning language model to generate its own common sense. It can also collect common sense through interaction with users: when statements disagree, it can ask which one is the most accurate.
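The interactive step described above can be sketched as follows. This is a hypothetical illustration, not Elemental Cognition’s actual code: a store of believed statements that, on encountering a conflicting claim, defers to the user, much as CLARA asks its yes/no questions.

```python
class CommonSenseStore:
    """Toy store of believed statements (an invented stand-in for CLARA's
    fact-and-concept library, for illustration only)."""

    def __init__(self):
        self.facts = {}  # statement -> believed truth value

    def add(self, statement, value, ask_user=None):
        # If the new claim conflicts with what we already believe,
        # ask the user which version is accurate; otherwise record it.
        if statement in self.facts and self.facts[statement] != value and ask_user:
            value = ask_user(statement)
        self.facts[statement] = value

store = CommonSenseStore()
store.add("plants need light to stay healthy", True)
# A conflicting claim arrives; the user (simulated here) resolves it.
store.add("plants need light to stay healthy", False, ask_user=lambda s: True)
```

The design point is that the system treats the human as the arbiter of last resort, rather than silently overwriting old beliefs with new ones.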
CLARA is not the only artificial intelligence system aiming at common sense. Yejin Choi, a professor at the University of Washington and a researcher at the Allen Institute for Artificial Intelligence, and her collaborators recently proposed COMET (Commonsense Transformers), a model that automatically builds a commonsense knowledge base by combining two distinct AI methods: symbolic reasoning and deep learning.
Compared with pure deep-learning language models, COMET makes comprehension errors less often when conversing or answering questions. Unlike many traditional knowledge bases, which store knowledge in rigid, normative templates, COMET’s commonsense knowledge base stores loosely structured, open-ended descriptions of knowledge. Building on Transformer-based context-aware language models, COMET is pre-trained on seed knowledge drawn from the ATOMIC and ConceptNet knowledge bases, so that, given a head entity and a relation, the model can generate the tail entity and thereby automatically extend the commonsense knowledge base.
Despite the challenges of modeling common sense, Yejin Choi’s study shows promising results when implicit knowledge from deep pre-trained language models is transferred into commonsense graphs as explicit knowledge. Empirically, COMET can produce high-quality new knowledge that human judges accept, with top-1 precision reaching 77.5% on ATOMIC and 91.7% on ConceptNet, close to human performance. Using a generative model like COMET to build a commonsense knowledge base automatically may be a reasonable alternative to building one by knowledge extraction.
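The head-entity/relation/tail-entity scheme above can be made concrete with a small sketch. In the real COMET, a trained Transformer decodes the tail; here a toy lookup table stands in for the model, and the example triples and relation name (“xIntent”, in the style of ATOMIC) are illustrative assumptions, not actual entries from either knowledge base.

```python
# Stand-in for a trained COMET model: maps (head entity, relation) to a
# tail entity. These triples are invented for illustration.
TOY_MODEL = {
    ("PersonX puts plant on windowsill", "xIntent"): "to keep the plant healthy",
    ("PersonX throws match on firewood", "xIntent"): "to light a fire",
}

def generate_tail(head, relation):
    """Stand-in for COMET's decoder: produce a tail entity for a
    (head, relation) query, or a placeholder when nothing is known."""
    return TOY_MODEL.get((head, relation), "<unknown>")

print(generate_tail("PersonX puts plant on windowsill", "xIntent"))
# prints: to keep the plant healthy
```

The crucial difference from this lookup table is that the real model generalizes: it can generate plausible tails for heads it has never seen, which is what lets it extend the knowledge base beyond its seed triples.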
“If I lived in a world with no one else to talk to, I could still have common sense; I could still understand how the world works and have expectations about what I should and should not see,” says Ellie Pavlick, a computer scientist at Brown University, who is studying how to teach artificial intelligence systems common sense by interacting with them in virtual reality. For Pavlick, COMET represents “really exciting progress, but what is missing is actual reference. The word ‘apple’ is not the real apple; that meaning has to exist in some form beyond the language itself.”
Nazneen Rajani, a senior research scientist at Salesforce, is pursuing a similar goal, but she believes the full potential of neural language models is far from being realized. She is studying whether neural language models can learn to reason about common-sense situations involving basic physics; for example, knocking over a jar containing a ball usually causes the ball to fall out. “The real world is really complicated,” Rajani says, “but natural language is like a low-dimensional proxy for how the real world works. Neural networks can predict the next word from a text prompt, but that shouldn’t be their limit; they can learn more complicated things.” With continued breakthroughs in research on common sense in AI, perhaps the artificial intelligence assistants around us will soon become more intelligent and more understanding.
