Crossover: 2014
Chapter 250: An Ingenious Idea
Specifically, Lin Hui had made revisions to the paper Eve Carly had written at the time.
Objectively speaking, Lin Hui had not actually changed much of the paper where it touched on generative text summarization.
He had merely added some content.
But what he added was very nearly the essence of the whole thing.
Through those additions, Eve Carly learned far more about how Lin Hui had solved the text summarization problem in the Nanfeng APP.
Lin Hui had employed many ingenious methods in constructing his generative text summarization algorithm.
Designing a suitable model architecture and training strategy on top of deep learning technology.
Proposing, with the help of transfer learning, a generative automatic text summarization algorithm built on a pre-trained model.
Handling content representation and weight calculation through unsupervised methods.
All of these were things Eve Carly had never thought of before, or had never understood in any depth.
A PhD in a closely related field, caught off guard by things she had never even realized?
It might sound unbelievable, but it was true.
As the saying goes, some hear the Way sooner and some later, and every craft has its specialists.
There was nothing shameful in falling behind someone else for a while.
And Eve Carly was sure her situation was by no means an isolated case.
Eve Carly suspected that what Lin Hui had added would not have surprised her alone.
Many other researchers probably hadn't expected it either.
Some of the new insights Lin Hui put forward were novel not only relative to traditional text summarization research.
Even measured against the entire NLP field, the things he had pieced together could be called brand-new ideas.
In any case, Eve Carly found these ideas brilliant; they even gave people a sense of sudden enlightenment.
That effect arose largely because most text summarization researchers had previously worked on extractive text summarization.
Extractive summarization and generative summarization are both text summarization.
But moving from the former to the latter involves a shift in thinking.
In many cases, researchers in traditional text summarization, that is, those who study the extractive kind, are swayed by preconceptions, and a shallow understanding of generative summarization is not at all uncommon among them.
Take, for example, the pre-training Lin Hui proposed when he tackled generative text summarization.
Ordinarily, this is not a profound concept.
So-called pre-training is not hard to understand; it amounts to little more than a coarse first pass over the data used to train a model.
But it is the kind of thing that is hard to think of in the first place.
In the past, when Eve Carly tuned extractive summarization models, she had never used pre-training.
In most cases, training was carried out directly.
No pre-training step was ever applied.
According to Lin Hui's supplement in the paper,
the common practice of pre-training is to pool together a large amount of training data collected at low cost,
then use some specific pre-training method, or class of methods, to learn what that training data has in common.
Those commonalities are then grafted into a task-specific model,
which is fine-tuned using a small amount of labeled data from the relevant domain.
Once this process is complete, a model intended for practical use only needs to start from the commonalities
and then learn the parts peculiar to its specific task.
It is roughly like first finding the general solution of an equation and then finding the particular solution.
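A minimal sketch of that two-stage workflow, written in PyTorch, might look like the following; the toy network, the reconstruction objective, the random stand-in data, and every name here are hypothetical illustrations, not Lin Hui's actual algorithm.

```python
# A toy two-stage "pretrain, then fine-tune" run. The sizes, the
# reconstruction objective, and the random stand-in data are all
# hypothetical illustrations.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Shared encoder: the part meant to capture the "commonality".
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

# Stage 1: pre-train on plentiful, cheaply collected unlabeled data.
# The stand-in objective is reconstruction: the encoder is pushed to
# learn structure shared across the whole corpus.
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
unlabeled = torch.randn(1024, 32)  # the "large, low-cost" corpus
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(200):
    loss = F.mse_loss(decoder(encoder(unlabeled)), unlabeled)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: graft the learned commonality into a task-specific model by
# reusing the encoder, then fine-tune on a small labeled set.
head = nn.Linear(16, 2)            # the task-specific part
labeled_x = torch.randn(64, 32)    # the "small amount of labeled data"
labeled_y = torch.randint(0, 2, (64,))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(100):
    loss = F.cross_entropy(head(encoder(labeled_x)), labeled_y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("fine-tuned loss:", loss.item())
```

The key design point is that the encoder's weights survive from stage one into stage two, so the fine-tuning run starts from the "general solution" rather than from scratch.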
It sounds fairly abstract.
It actually isn't that deep.
Machine learning, however advanced,
is in essence largely an imitation of people.
That being so, we often only need to understand how people deal with a problem
to understand the idea behind how machine learning deals with it.
Usually, when we set out to learn something,
our first instinct may be to learn everything we want to learn in one go.
But with limited study time, a heavy load of coursework, and various other practical constraints,
it is hard, in actual study, to absorb all the knowledge in a single step.
In that situation, how do the people who are good at learning actually learn?
The approach they tend to take is to first grasp what the material they want to learn has in common,
and only then spend time on the remaining "stubborn cases".
The approach may look a bit "lazy".
But more than half of humanity's crystallized wisdom came into being because of laziness.
It is hard to deny that this seemingly lazy way of learning is full of wisdom.
At the very least, from the standpoint of efficiency, it deserves praise.
After all, outside of extremely specialized subjects like medicine,
in most fields some 80% of the knowledge involved shares common ground.
Find that common ground first, then go tackle the remaining 20% of complex material.
That is undoubtedly the more labor-saving way to think.
Introducing pre-training into natural language processing, a classic direction of machine learning,
is effectively "transplanting" a special trick that some outstanding students use in their studies.
The idea is undeniably ingenious.
Ingenious it certainly is.
But, as with the bitter plums left hanging by the roadside,
why had no one tried this ingenious idea before?
Eve Carly suspected it was not that no one had ever thought in this direction.
Rather, those who had tried had failed without exception.
When it comes to acquiring knowledge, most people probably also know that mastering the common 80% first and then taking on the other 20% saves effort.
But judging from her own years of study, Eve Carly felt very few people around her could actually pick out the common 80% of a subject and then push through the hard parts.
In her eyes, almost no one but the true top students could do it.
And how many such top students were there in Eve Carly's eyes? Vanishingly few.
In other words, mastering the common 80% first and then taking on the other 20% is very wise, yet few people actually do it.
It obviously looks like the easier path.
So why do so few people take it?
Eve Carly thought the main reason was this:
most people are simply not good at finding the commonalities in knowledge.
Not being good at it does not stop some of them from trying.
But in actual practice, picking out the common 80% of the knowledge proves a sheer luxury.
They may only manage to find commonality across 30%, 20%, or even less of the material.
In this way, not only do these people fail to find the commonality in a subject's knowledge.
Worse, in the course of searching, other content that was originally ordinary quietly gets recast, in their eyes, as "non-common knowledge".
And in the minds of those searching for commonality, "non-common knowledge" carries the silent suggestion of "more troublesome knowledge".
This material was never particularly difficult to begin with, but under the debuff of that psychological suggestion,
it is learned even less efficiently than if no commonality had been sought at all.
In the end, the knowledge for which no commonality was found may become the very content that those searching for commonality must spend enormous time to overcome.
(End of this chapter)