
Published by Facebook: Say Goodbye to Struggling with Advanced Math, AI Helps You Solve Equations


Editor’s note: This article comes from the WeChat public account “Xin Zhi Yuan” (ID: AI_era), editor: Daming; republished with authorization by 36Kr. Source: Turing TOPIA.

Facebook AI recently announced the first AI system that uses symbolic reasoning to solve advanced mathematical equations, beating Mathematica and Matlab on accuracy. By developing a new way to represent complex mathematical expressions as a kind of language, and then recasting solving as a sequence-to-sequence neural-network translation problem, the researchers built a system that outperforms traditional computing systems at integration problems and at first- and second-order differential equations.

Problems of this kind were previously considered beyond the reach of deep learning models, because solving complex equations requires precision rather than approximation. Neural networks are good at learning to succeed through approximation: recognizing, say, that a particular pattern of pixels is probably an image of a dog, or that the features of a sentence in one language match those of its counterpart in another. But solving complex equations also requires the ability to manipulate symbolic data, such as the letters in the expression b - 4ac = 7. Such variables cannot be directly added, multiplied, or divided, and with only traditional pattern matching or statistical analysis to rely on, neural networks had been limited to extremely simple mathematical problems.

Facebook AI says its proposed solution is an entirely new way of treating complex equations as if they were sentences in a language. This allows the researchers to take full advantage of mature techniques for training neural machine translation (NMT) models, in effect translating problems into solutions. To implement the approach, they needed a method for breaking existing mathematical expressions down into a language-like grammar, and a way to generate a large-scale training data set containing more than 100M paired equations and solutions.
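The sketch below is not Facebook's code; it is a minimal illustration, assuming PyTorch, of the translation framing just described: the problem and its solution are both sequences of math-symbol tokens, so an off-the-shelf seq2seq transformer can be trained to map one to the other. The vocabulary, example sequences, and model dimension here are made up for illustration (the head and layer counts match those reported later in this article).

```python
# Minimal sketch of the "equations as translation" framing (illustrative only).
import torch
import torch.nn as nn

# Hypothetical vocabulary of math symbols: operators, functions, digits, variables.
VOCAB = ["<pad>", "<sos>", "<eos>", "plus", "minus", "times", "power",
         "cos", "sin", "x", "1", "2", "3"]
stoi = {tok: i for i, tok in enumerate(VOCAB)}

emb = nn.Embedding(len(VOCAB), 512)
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6)
out_proj = nn.Linear(512, len(VOCAB))

def encode(tokens):
    # Shape (seq_len, batch=1, d_model), nn.Transformer's default layout.
    ids = torch.tensor([stoi[t] for t in tokens]).unsqueeze(1)
    return emb(ids)

# Problem and (shifted) candidate solution as prefix-notation token sequences.
# Positional encodings and attention masks are omitted for brevity.
src = encode(["plus", "times", "3", "power", "x", "2", "cos", "x"])
tgt = encode(["<sos>", "minus", "sin", "x"])
logits = out_proj(model(src, tgt))  # (tgt_len, 1, vocab): next-token scores
```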
When faced with thousands of unseen expressions (equations that were not part of its training data), the researchers' model proved faster and markedly more accurate than traditional algebraic equation-solving software such as Maple, Mathematica, and Matlab. The research not only demonstrates that deep learning can be used for symbolic reasoning; it also suggests that neural networks have the potential to handle a much wider range of tasks, including ones not usually associated with pattern recognition. The researchers are sharing details of their approach, along with methods that can help others generate similar training sets.

Applying new methods from neural machine translation. People who are particularly good at symbolic mathematics often rely on a kind of intuition. They have a feel for what the solution to a given problem should look like. For example, they may observe that if the function to be integrated contains a cosine, its integral will probably contain a sine, and then do the work needed to prove it. By training a model to detect patterns in symbolic equations, the researchers believed a neural network could piece together the clues to a solution, roughly the way humans handle complex problems by intuition. They therefore set out to frame symbolic reasoning as an NMT problem, in which a model learns to predict likely solutions from examples of problems paired with their matching solutions.

[Figure: how the method expands an existing equation (left) into an expression tree that can serve as input to a translation model. For this equation, the preorder sequence fed into the model is: (plus, times, 3, power, x, 2, minus, cos, times, 2, x, 1).]

To implement this with a neural network, the researchers needed a new way of representing mathematical expressions. NMT systems are usually sequence-to-sequence (seq2seq) models: they take sequences of words as input and output new sequences, which lets them translate whole sentences rather than individual words. The researchers applied this to symbolic equations in two steps. First, they developed an effective procedure for unpacking an equation into a tree, which is then flattened into a sequence compatible with seq2seq models. Constants and variables act as leaves, while operators (such as plus and minus signs) and functions are the internal nodes connecting the branches of the tree. Although it may not look like a traditional language, organizing expressions this way gives equations a language-like syntax: numbers and variables are nouns, and operators are verbs. This enables an NMT model to learn to align the pattern of a given problem's tree with its matching solution (also expressed as a tree), much as it would match a sentence in one language to its confirmed translation, and it allows the researchers to use powerful, off-the-shelf seq2seq NMT models with sequences of symbols in place of sequences of words; a minimal sketch of the tree-to-sequence step follows.
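As a concrete (and unofficial) sketch of that step, assuming SymPy: a preorder walk of the expression tree emits each internal node (operator or function) before its children, bottoming out at constants and variables. Note that SymPy's canonical operator names and argument ordering differ slightly from the tokens shown in the figure caption above.

```python
# Minimal sketch: expression tree -> preorder ("prefix") token sequence.
import sympy as sp

x = sp.symbols("x")

def to_prefix(expr):
    """Preorder traversal: internal nodes are operators/functions,
    leaves are constants and variables, as described in the article."""
    if expr.is_Symbol or expr.is_Number:
        return [str(expr)]                    # leaf node
    op = type(expr).__name__.lower()          # e.g. 'add', 'mul', 'pow', 'cos'
    tokens = [op]
    for arg in expr.args:                     # children, left to right
        tokens += to_prefix(arg)
    return tokens

expr = 3 * x**2 - sp.cos(2 * x) + 1
print(to_prefix(expr))
# e.g. ['add', '1', 'mul', '3', 'pow', 'x', '2', 'mul', '-1', 'cos', 'mul', '2', 'x']
```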
Building a new training data set. Although the expression-tree syntax makes it theoretically possible for an NMT model to translate complex math problems into solutions, training such a model requires a large number of examples. And because one of the two problem classes the researchers focus on (integrals and differential equations) does not always have a solution, they could not simply collect equations and feed them into the system. They needed to generate an entirely new training set of solved equations, restructured into model-readable expression trees. This yields problem-solution pairs, analogous to a corpus of sentences translated between different languages. The set also had to be much larger than the training data used in previous research, which had attempted to train systems on thousands of examples; since neural networks usually perform better with more training data, the researchers created a collection with millions of examples.

Building this data set required a range of data cleaning and generation techniques. For symbolic integration, for example, they flipped the direction of translation: rather than generating problems and finding their solutions, they generated solutions and recovered their problems (the derivatives), which is a far simpler task. Generating problems from their solutions in this way made it possible to create millions of integration examples, as in the sketch below.
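Here is a minimal sketch of this solutions-first trick, assuming SymPy; the random expression generator below is a toy stand-in for the far richer sampler the researchers would need.

```python
# Minimal sketch: build integration training pairs by differentiating
# randomly generated "solutions" (differentiation is easy; integration is hard).
import random
import sympy as sp

x = sp.symbols("x")
ATOMS = [x, x**2, sp.sin(x), sp.cos(x), sp.exp(x), sp.log(x + 1)]

def random_solution(depth=3):
    # Combine a few atoms with + and * to form a random candidate solution F.
    expr = random.choice(ATOMS)
    for _ in range(depth):
        op = random.choice([sp.Add, sp.Mul])
        expr = op(expr, random.choice(ATOMS))
    return expr

pairs = []
for _ in range(5):
    F = random_solution()
    problem = sp.diff(F, x)      # the "problem" is the derivative of F...
    pairs.append((problem, F))   # ...so (F', F) is a cheap integration example

for problem, solution in pairs:
    print(f"integrate: {problem}  ->  {solution}")
```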
The resulting translation-inspired data set consists of roughly 100 million paired examples, with subsets of integration problems and of first- and second-order differential equations. The researchers used this data set to train a seq2seq transformer model with 8 attention heads and 6 layers. Transformers are commonly used for translation tasks, and this network was built to predict the solutions of various kinds of equations, such as determining the primitive (antiderivative) of a given function. To evaluate performance, the researchers presented the model with 5,000 expressions it had never seen, forcing it to recognize patterns in equations that did not occur during training. The model solved integration problems with 99.7% accuracy, and first- and second-order differential equations with 94% and 81.2% accuracy, respectively. These results exceed those of all three traditional equation solvers tested: Mathematica trailed, with 84% accuracy on the same integration problems and 77.2% and 61.6% on the differential equations. The model also returned most predictions in under 0.5 seconds, whereas the other systems took minutes to find a solution and sometimes timed out entirely.

[Figure: the model takes the equation on the left as input and finds the correct solution (right) in under a second; neither Mathematica nor Matlab could solve these equations.]

Results can be verified easily and precisely by comparing a generated solution with a reference solution (a minimal verification sketch follows). The model can also produce multiple valid solutions to a given equation, much as in machine translation, where there are many ways to translate an input sentence.
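As a minimal illustration of such checking, assuming SymPy and using integration as the example: a candidate antiderivative can be validated by differentiating it, which is our choice of check (with made-up expressions) and also shows how two superficially different outputs can both be correct.

```python
# Minimal sketch: verify a predicted antiderivative by differentiating it.
import sympy as sp

x = sp.symbols("x")
integrand = 2 * sp.sin(x) * sp.cos(x)

# Two superficially different model outputs, plus one wrong one.
candidates = [sp.sin(x)**2, -sp.cos(x)**2, sp.sin(x)]

for cand in candidates:
    # Correct iff the derivative of the candidate equals the integrand.
    ok = sp.simplify(sp.diff(cand, x) - integrand) == 0
    print(cand, "valid" if ok else "invalid")
# sin(x)**2 and -cos(x)**2 both check out (they differ only by a constant);
# sin(x) does not.
```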
AI that can solve equations: what comes next? The model currently handles problems with a single variable, and the researchers plan to extend it to multivariable equations. The approach could also be applied to other fields grounded in math and logic, such as physics, potentially helping scientists do a much wider range of work.

The system also has broader implications for the study and use of neural networks. Deep learning can now be applied where it previously did not work, and this work suggests that other tasks may benefit as well, whether through further application of NLP techniques to fields traditionally unconnected with language, or through more open-ended exploration of pattern recognition in new or seemingly unrelated tasks. The limits of neural networks may be the limits of imagination, not of the technology.

Original link: https://ai.facebook.com/blog/using-neural-networks-to-solve-advanced-mathematics-equations/