Crossover: 2014
Chapter 251 An Unprecedented Contribution
In that case, searching for the commonality of knowledge not only failed to help them.
It instead became a drag on their studies, which was painful.
Rather than let that happen, these people simply gave up looking for the commonality of knowledge.
By treating every piece of knowledge the same way, they would at least not be tripped up by their own cleverness.
The dilemma these learners faced was much the same.
Perhaps scholars in machine learning gave up searching for commonality in training data for exactly the same reason.
At least that was the reason in Eve Carly's case.
Even though it was now known that Lin Hui had introduced a pre-training method into model training,
Eve Carly still did not know exactly how he had done it.
According to Lin Hui's explanation in the supplementary content of the paper:
Under the traditional training mechanism, a text summarization model is produced along the lines of:
Corpus training → model
After introducing the pre-training mechanism according to Lin Hui's idea,
the text summarization model is produced along the lines of:
Corpus pre-training → pre-trained model → fine-tuning → model
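To make the contrast between the two workflows concrete, here is a minimal, purely illustrative sketch. The class and function names are hypothetical stand-ins invented for this illustration and are not taken from Lin Hui's paper; the sketch only shows the shape of the two pipelines, not his actual method.

```python
# Hypothetical sketch of the two training workflows described above.
# SummarizationModel and its methods are stand-ins, not anything from the paper.

class SummarizationModel:
    """Toy stand-in for a text summarization model."""

    def __init__(self):
        self.weights = None  # parameters to be learned

    def fit_unsupervised(self, raw_corpus):
        # Pre-training: learn the general "commonality" of language
        # from a large unlabeled corpus.
        self.weights = f"pretrained on {len(raw_corpus)} raw documents"

    def fit(self, labeled_corpus):
        # Supervised training (or fine-tuning, if pre-trained weights exist).
        base = self.weights or "random init"
        self.weights = f"{base}, then tuned on {len(labeled_corpus)} labeled pairs"


def train_traditional(labeled_corpus):
    # Corpus training -> model
    model = SummarizationModel()
    model.fit(labeled_corpus)
    return model


def train_with_pretraining(raw_corpus, labeled_corpus):
    # Corpus pre-training -> pre-trained model -> fine-tuning -> model
    model = SummarizationModel()
    model.fit_unsupervised(raw_corpus)
    model.fit(labeled_corpus)
    return model


print(train_traditional(["doc-summary pair"] * 100).weights)
print(train_with_pretraining(["raw doc"] * 10000, ["doc-summary pair"] * 100).weights)
```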
The idea itself was clear enough.
But facing this brand-new approach, Eve Carly was full of questions.
In a concrete application, what kind of pre-training method should be introduced to get twice the result with half the effort?
What kind of pre-trained model was the pre-training actually aiming to produce?
And how should the "fine-tuning" of the pre-trained model be understood?
The first two were questions about Lin Hui's theory itself.
The third was more a doubt arising from the wording.
Although Eve Carly had recently been working hard to learn Chinese from Mina Carly,
Chinese was obviously not something that could be mastered in a short time.
How should Lin Hui's so-called "fine-tuning" of the pre-trained model be understood?
Did it really mean only a slight adjustment?
Or was the "fine" in "fine-tuning" there only because Lin Hui considered the task trivial?
Eve Carly suspected it was the latter.
It was unlikely to be a minor tweak.
Why did Eve Carly think so?
Because models for text summarization tend to be extremely complex.
A formal, finished model already involves a huge number of parameters.
And the pre-trained model produced by pre-training?
That coarse model, which comes before the formal one, might involve parameters that were even more complex.
Of course, this was only Eve Carly's guesswork.
On these questions, perhaps only Lin Hui himself had the real answers.
Since coming to Lin Hui's side, Eve Carly had originally expected her questions to gradually decrease.
In reality, they only kept multiplying.
After all, back in the United States she had never even thought to ask the questions she was asking now.
But Eve Carly was not discouraged.
In scientific research, asking the right questions has always mattered more than solving them.
Eve Carly knew very well that she had more doubts now than when she was in the United States.
That did not matter; at least the questions she was asking now were closer to the essence of the technology than before.
And that was academic growth.
Her time here had not been spent for nothing.
She had always been curious how Lin Hui, previously all but unknown in text summarization, had managed to overtake everyone on the curve in such a short time.
After all, building a language model usually takes a great deal of time.
But now that she knew Lin Hui had done pre-training,
that question no longer seemed like much of a mystery.
Working according to the pre-training mechanism Lin Hui proposed in the supplementary content of the paper,
training was still required after pre-training was introduced,
and on the surface the steps even looked more cumbersome.
But Eve Carly estimated that, on a corpus of the same size,
training with the pre-training mechanism could save at least 50% of the time compared with the conventional approach.
Introducing pre-training into model training would improve efficiency.
The reason is easy to understand through the earlier analogy with human learning.
Under normal circumstances, it is obviously more efficient to master the commonalities of a subject first and then tackle the hard parts, rather than grinding through everything step by step.
In the same way, letting a machine first grasp the commonality of the data and only then work through the remaining labeled data will also improve efficiency.
In Eve Carly's eyes, Lin Hui had once been a genius in the absolute sense.
In her view, what set a genius apart was not raw talent alone.
Everyone seems to know that they must find the door to get out of the room, yet no one can find the way to it.
The genius is the one who, under everyone's blank stares, walks up to the door and gently pushes it open.
When everyone was stuck at the bottleneck of extractive summarization algorithms and could not find a way out of the text summarization room,
Lin Hui appeared at just the right moment and, to everyone's bewilderment, pushed open a brand-new door called "generative text summarization".
But looking at it now, Eve Carly felt that her earlier judgment had fallen far short.
The fact was that Lin Hui was not only a genius in the absolute sense, but also a powerhouse in his own right.
If what Lin Hui described in the supplementary content of the paper was true,
what was such a person if not formidable?
It was no exaggeration to say that introducing pre-training was a revolution against the traditional way of training on a corpus.
It would be an enormous aid to the training of language models.
Eve Carly had a hunch that with the introduction of pre-training, the traditional field of natural language processing could fully enter the era of neural network learning.
If that could really be achieved,
it would be a contribution of unprecedented significance.
And what Lin Hui had created was not just pre-training.
Eve Carly noticed that the pre-training Lin Hui described in the paper was pre-training built on the idea of transfer learning.
What is transfer learning?
With transfer learning, existing knowledge can be used to learn new knowledge.
The core of the idea is to find the similarity between what is already known and what is to be learned, and then to generalize from one case to others.
In machine learning, learning a target directly from scratch is simply too expensive.
With transfer learning, it does not have to be that laborious.
In many cases, existing relevant knowledge can be used to pick up new knowledge much faster.
For example, someone who already knows the C language can learn C++ by analogy;
someone who has learned Greek can learn English by analogy.
Everything in the world shares something in common; once the similarities are found in a reasonable way,
that bridge can be used to help learn new things and save a great deal of trouble.
If the method really did rest on this idea,
then once pre-training had learned the commonalities in the data,
the additional learning on the remaining task-specific labeled data
could draw on the analogy-like ability that the transfer idea gives to pre-training,
and the time spent on that labeled data might be far shorter.
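For a concrete picture of "learn the commonality first, then adapt by analogy", here is a minimal sketch of transfer-style fine-tuning, written against PyTorch purely for illustration. The encoder, task head, and toy data below are hypothetical stand-ins with no connection to Lin Hui's actual model; the only point is that the pre-trained part is reused as-is, so the remaining supervised learning on labeled data is small.

```python
# Hypothetical sketch of transfer-style fine-tuning, assuming PyTorch is available.
# Architecture and data are stand-ins, not Lin Hui's model.
import torch
from torch import nn

# Pretend this encoder was already pre-trained on a large unlabeled corpus,
# so its weights encode the "commonality" of the language.
pretrained_encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
pretrained_encoder.requires_grad_(False)  # reuse existing knowledge as-is

# Only a small task-specific head is trained on the labeled data.
task_head = nn.Linear(64, 10)
optimizer = torch.optim.Adam(task_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy stand-in for the remaining task-specific labeled data.
features = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))

for _ in range(5):  # far fewer updates than training everything from scratch
    logits = task_head(pretrained_encoder(features))
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```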
(End of this chapter)