Big data in China
Chapter 3 Big data, the part you don't know yet
Data unit: the information navigation map
What is data made of? How big is a data unit? How is it generated and transmitted?
These are the basic questions we need to answer first. Someone once compared data to pollen: every flower is a source of data, and the bees that carry the pollen and help it bear fruit are the data porters. I find the metaphor apt, but there is a better one. Data is like the red blood cells of the human body: a data unit is a packet of nutrients, produced in the bone marrow and transported to every part of the body to supply the organs' needs.
A data unit is the basic unit of information transmission. On a network in particular, a connection will not carry data packets of arbitrary size. Strict rules apply: packet technology divides a piece of data into several small packets and attaches attributes to each one. These attributes relate to transmission and include the source IP address, the destination IP address, the data length, and so on.
Like blood, each packet has a fixed destination. We therefore call such a small packet a data unit; it can also be called a data frame, or simply a frame. In this way the characteristics of the data stream are clear: every piece of data to be transmitted becomes a "package" with distinctive markings, and all packages share the same specification and packaging method. This standardizes data transmission and simplifies how data is generated, processed, packaged, and transmitted, making large-scale applications of data possible.
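To make the idea concrete, here is a minimal sketch in Python of packetization under stated assumptions: the field names (src_ip, dst_ip, seq, length) and the 1024-byte packet limit are illustrative choices, not the definitions of any real protocol.

```python
from dataclasses import dataclass
from typing import List

# A toy illustration of packetization: split a payload into packets,
# attach transmission attributes to each, then reassemble.
# Real protocols such as IP/TCP define far richer headers.

@dataclass
class Packet:
    src_ip: str    # source IP address
    dst_ip: str    # destination IP address
    seq: int       # sequence number, so the receiver can reassemble in order
    length: int    # payload length in bytes
    payload: bytes

def packetize(data: bytes, src_ip: str, dst_ip: str, mtu: int = 1024) -> List[Packet]:
    """Split `data` into packets no larger than `mtu` bytes each."""
    packets = []
    for seq, offset in enumerate(range(0, len(data), mtu)):
        chunk = data[offset:offset + mtu]
        packets.append(Packet(src_ip, dst_ip, seq, len(chunk), chunk))
    return packets

def reassemble(packets: List[Packet]) -> bytes:
    """Rebuild the original data from packets, regardless of arrival order."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

message = b"x" * 3000
packets = packetize(message, "192.168.0.1", "10.0.0.2")
assert reassemble(packets) == message
print(f"{len(message)} bytes split into {len(packets)} packets")
```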
Every organization of data, we find, follows an established hierarchy. It can be divided into six levels: bit, character, data element, record, file, and database. Combining the units of one level produces the next, until finally a large-scale data collection is achieved.
Of these six levels, the bit sits at the first, and the average user has no need to explore it. The next five levels, however, we do need to master, because they are what people actually work with when entering and requesting data.
When specific relationships (of one or more kinds) exist among data packets or data elements, they constitute a data structure: a particular way for a computer to store and organize data. A carefully chosen data structure can yield higher execution or storage efficiency, and this is where the demand for retrieval and indexing technology arises. Better technology makes our searches more efficient.
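As a small illustration of why the choice of structure matters, the sketch below compares a linear scan of a list with a lookup through a hash index; the data is synthetic and the timings are indicative only.

```python
import time

# Scanning a list is O(n); a hash-based dict index answers the same
# query in O(1) on average. Same data, very different retrieval cost.

records = [(i, f"user-{i}") for i in range(1_000_000)]
index = dict(records)  # build a hash index keyed by id

target = 999_999

start = time.perf_counter()
name_scan = next(name for rid, name in records if rid == target)  # linear scan
scan_time = time.perf_counter() - start

start = time.perf_counter()
name_index = index[target]  # indexed lookup
index_time = time.perf_counter() - start

assert name_scan == name_index
print(f"linear scan: {scan_time:.6f}s, indexed lookup: {index_time:.6f}s")
```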
My friend Shanil, a big data expert who works at Google, explained the nature of data in his book "Data Algorithms and Applications," published last year:
"A data structure represents connection: it is a data object together with the various connections between instances of that object and the data elements that constitute each instance. These connections can be given and quantified by defining related functions."
What is a data object? According to Shanil, a data object is a collection of instances or values, and a data structure is the physical realization of an abstract data type (ADT). He divides the design of a data structure into three levels: the abstraction layer, the data structure layer, and the implementation layer. The abstraction layer is the abstract data type layer, which concerns the logical structure of data and the operations on it; the data structure layer and the implementation layer are closer to the concrete and the practical, concerning how a data structure is represented and stored in a computer and how its operations are implemented.
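A stack makes a compact example of this layering. The sketch below is a minimal Python rendering of the idea, not anything taken from Shanil's book: it keeps the abstraction layer (what a stack does) separate from the implementation layer (how it is stored).

```python
from abc import ABC, abstractmethod
from typing import List

class Stack(ABC):
    """Abstraction layer: what a stack does, with no storage details."""

    @abstractmethod
    def push(self, item): ...

    @abstractmethod
    def pop(self): ...

    @abstractmethod
    def is_empty(self) -> bool: ...

class ListStack(Stack):
    """Implementation layer: the same ADT realized with a Python list."""

    def __init__(self):
        self._items: List = []   # representation detail hidden from callers

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self) -> bool:
        return not self._items

s: Stack = ListStack()
s.push(1)
s.push(2)
print(s.pop(), s.pop(), s.is_empty())  # 2 1 True
```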
If we dissect the data structure in the light of real-world applications, what will we see? You will immediately find yourself afloat on the ocean of the data kingdom, which lies close to you and touches your life at every moment.
●Character
When we enter a character (via a keyboard or other device), the system translates it directly into a sequence of bits under a specific encoding scheme. A character typically occupies 8 bits in the computer, that is, one byte. This is the character, in general the most basic unit of data. A computer system may also use more than one encoding scheme to process characters; some systems, for example, use ASCII for data communication and EBCDIC for data storage. In a broad sense, a Chinese character or an Arabic numeral written on paper can also be regarded as a character in "data."
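The difference between encoding schemes is easy to see in code. In the sketch below, Python's built-in cp037 codec stands in for EBCDIC; real IBM systems use various code pages, so take the exact bytes as illustrative.

```python
# The same character under two encoding schemes: one character, one byte,
# but a different sequence of 8 bits depending on the scheme.

ch = "A"

ascii_byte = ch.encode("ascii")   # 0x41 in ASCII
ebcdic_byte = ch.encode("cp037")  # 0xC1 in EBCDIC code page 037

print(f"ASCII : {ascii_byte[0]:08b}")   # 01000001
print(f"EBCDIC: {ebcdic_byte[0]:08b}")  # 11000001
```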
↓
●Data element
A data element is the lowest-level logical unit in the data hierarchy. To form such a logical unit, several bits and bytes (characters) are combined: a complete sentence, a complete logical code, a minimal flow of information, and so on. A data element may therefore also be called a field. The term is a general one, and the data items inside it are the concrete entities. For example, a complete mobile phone number is a data element, while its segments, the 138 prefix and the digits that follow, are data items with independent meaning.
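The phone-number example can be written down directly. In the sketch below, the 3-4-4 split and the field names are assumptions drawn from the mainland-China numbering convention mentioned in the text; other numbering plans divide differently.

```python
from dataclasses import dataclass

@dataclass
class PhoneNumber:          # the data element (field)
    prefix: str             # data item: carrier/network segment, e.g. "138"
    region: str             # data item: middle segment
    subscriber: str         # data item: final segment

def parse_phone(raw: str) -> PhoneNumber:
    """Split an 11-digit number into its independently meaningful data items."""
    assert len(raw) == 11 and raw.isdigit(), "expected an 11-digit number"
    return PhoneNumber(raw[:3], raw[3:7], raw[7:])

print(parse_phone("13812345678"))
# PhoneNumber(prefix='138', region='1234', subscriber='5678')
```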
↓
●Record
Data elements combined in a logically related form make up a data record, and at this point value begins to rise sharply. An employee record, for example (number, name, gender, title, department), contains several logically related data elements which, together with auxiliary data items, constitute a complete record. The record is the lowest logical unit of access in a database.
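Rendered as code, the employee record from the text might look like the following; the field values are invented for illustration.

```python
from dataclasses import dataclass

# Each attribute is a data element; the record groups them into the
# lowest logical unit a database reads or writes.

@dataclass
class EmployeeRecord:
    number: int
    name: str
    gender: str
    title: str
    department: str

record = EmployeeRecord(1024, "Li Wei", "F", "Engineer", "Data Platform")
print(record)
```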
↓
●File
A complete file is composed of information plus a medium: it is a named collection of information stored on some medium. An article, a set of records, a contract, even a book can all be called files. A file can be logically divided into several records, in which case the file is represented as a sequence of records. The file is independent of its storage medium; changing the medium changes neither the file's nature nor its value.
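The "file as a sequence of records" view is easy to demonstrate. In the sketch below an in-memory buffer stands in for the physical medium, which also underlines that the medium is interchangeable; the employee rows are invented.

```python
import csv
import io

# Each row is one record, each column one data element; the file is
# simply the named sequence of records, whatever medium holds it.

employees = [
    {"number": 1024, "name": "Li Wei", "title": "Engineer"},
    {"number": 1025, "name": "Zhang Min", "title": "Analyst"},
]

medium = io.StringIO()  # swap in open("employees.csv", "w") for a disk medium
writer = csv.DictWriter(medium, fieldnames=["number", "name", "title"])
writer.writeheader()
writer.writerows(employees)

medium.seek(0)
for record in csv.DictReader(medium):
    print(record)
```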
↓
●Database
The database is the highest level: a collection of ordered data. Within this ordered collection sit large numbers of files, logically related to one another and marked with retrieval values. Depending on application requirements and fields, people sometimes divide a database into several segments rather than keep it as one monolith. A database is backed up; it can be retrieved, organized, and utilized at any time, and it can also be modified at any time by authorized persons.
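To close the loop on the hierarchy, here is a minimal pass through its top level: records stored in a table, indexed for retrieval, and queried on demand. SQLite is used only because it ships with Python; the table and its contents are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee (
        number INTEGER PRIMARY KEY,   -- one data element per column
        name TEXT,
        department TEXT
    )
""")
conn.executemany(
    "INSERT INTO employee VALUES (?, ?, ?)",
    [(1024, "Li Wei", "Data Platform"), (1025, "Zhang Min", "Finance")],
)
conn.execute("CREATE INDEX idx_dept ON employee(department)")  # retrieval value

for row in conn.execute("SELECT * FROM employee WHERE department = ?", ("Finance",)):
    print(row)
conn.close()
```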
Core: organize, analyze, predict, control
The core of "big data" is not how much data we have but what we do with it. Data that merely piles up somewhere is useless; its value lies in usability, not in quantity or storage location. Any collection of data is tied to its ultimate function, and if that function cannot be realized, every link in the big data chain is inefficient and lifeless.
☆ Organize
Organizing has two purposes: first, to classify all the data and put it where it belongs; second, to make retrieval easy, so that the data can be fetched for use at any time. The purpose is the same as when we organize a bookshelf. Facing the same data, different methods of organization determine whether our results are good or bad.
The importance of organizing is well illustrated by the Library of Congress's retrieval engineering update. The Library of Congress fell on difficult times as the amount of information skyrocketed with the development of network technology: even its archived Twitter data, only a small part of the library's holdings, reached nearly 200 billion items, with stored files totaling 133TB. Deletion was impossible, because every item had already been shared and reposted by readers across the social network. So how should such a huge volume of data be organized?
The technical team had to try every avenue and exhaust its collective wisdom to produce a practical retrieval plan, so that library users could access the information conveniently. In other words, the technologists had to start building a system that helps researchers (and other users) quickly access data from social platforms, as web tools and cultural trends continue to shift toward electronic reading rather than paper books.
The library had begun its organizing and archiving work in 2000. The difficulty then was lower: social networking sites had not yet entered the picture, and the data stored in the government's internal systems was static over any given period and grew relatively slowly. Although the total exceeded 300TB, the staff felt, "It will all get sorted out one day."
The advent of Twitter, however, threw the library's archiving into a painful impasse. The library simply could not find a way to make the information easily searchable without intolerable error along the way. Using the old method, tape storage, it could take a day to query a single tweet from the 2006-2010 period; extend the query window by a year and the time required quadruples.
Fisher, a staff member of the Library of Congress, said: "Faced with such huge data we got headaches; organizing it became an impossible task. Unclassified, the data turns into a burden: the people who need it cannot retrieve it, yet we have to keep it."
Twitter's information is hard to organize partly because its sheer volume is so large, and partly for a very practical reason: new data is added constantly, every day. Just as on our Weibo, vast amounts of new information appear every minute as people keep posting. This growth rate will only increase, making the data almost impossible to organize by traditional methods.
In addition, the types of such information grow ever more diverse: ordinary tweets, automatic replies sent by software clients, manual replies, data containing links or pictures, and so on. Regular Weibo users know this well. Faced with these new patterns of data updates, traditional methods have nowhere to start.
Fisher said: "How did we find a solution? The road was tortuous. We began by considering distributed and parallel computing, but both kinds of system are too expensive. To achieve a truly significant reduction in search time, we would have had to build a huge infrastructure of hundreds or even thousands of servers. God! Unthinkable. For an organization like ours, with no commercial revenue, the cost is simply too high. It is not realistic."
The library finally brought in big data engineers, who proposed a series of practical solutions suited to its specific situation. Phillips, a founder of the open-source database tool Riak, suggested a division-of-labor approach: use one tool for data storage, another for retrieval, and a third for responding to query requests (a sketch of the idea follows below). This completed the organizing work simply and effectively, allowed massive new information to integrate seamlessly with the huge stock of old data, and ensured that the Library of Congress could keep its database up to date.
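The text does not describe the library's actual stack in any detail, so the following is only a toy rendering of the division of labor itself: one component stores, one indexes, one answers queries. Every name in it is invented for illustration.

```python
class Store:
    """Storage layer: holds the data, knows nothing about search."""
    def __init__(self):
        self._items = {}
    def put(self, key, value):
        self._items[key] = value
    def get(self, key):
        return self._items[key]

class Index:
    """Retrieval layer: maps search terms to stored keys."""
    def __init__(self):
        self._terms = {}
    def add(self, key, text):
        for term in text.lower().split():
            self._terms.setdefault(term, set()).add(key)
    def lookup(self, term):
        return self._terms.get(term.lower(), set())

class QueryService:
    """Query layer: the only component users talk to."""
    def __init__(self, store, index):
        self.store, self.index = store, index
    def search(self, term):
        return [self.store.get(k) for k in self.index.lookup(term)]

store, index = Store(), Index()
service = QueryService(store, index)
for key, tweet in enumerate(["archiving big data", "big libraries endure"]):
    store.put(key, tweet)
    index.add(key, tweet)
print(service.search("big"))  # both entries match
```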
Once the organizing was complete, the total volume of data had grown dozens of times over (and is still growing by the moment), yet retrieval was faster than before, with results available almost instantly.
☆ Analyze
Analysis means the "effective analysis" of data. Data is typically large in scale, complex in composition, and drawn from many sources. In the big data era especially, data often exhibits four characteristics at once, known as the 4 Vs: large volume (Volume), high velocity (Velocity), great variety (Variety), and low value density (Value). Performing the most effective analysis in the shortest time has become a core task.
Big data analysis arose with the advent of the big data era, and traditional data analysis is now merging with it.
Current solutions mainly pursue the following questions: How should data be preprocessed? How can archived documents be queried promptly? How can mining and analysis techniques present a holographic view of big data within one's field of vision? Faced with massive data, traditional analysis methods simply cannot cope.
The weaknesses of data analysis also demand our vigilance and careful thought. Last June, Mr. Cai, a Chinese executive at an investment bank, approached me. He was considering withdrawing from the European market because the economic situation was so bad. He felt a euro crisis was coming, and that once it broke out the company would slide into bankruptcy.
Yes, a downturn was possible; that was one underlying fact. But I reminded Mr. Cai of another: this investment bank had operated in Europe for nearly fifty years, with deep roots, a huge market, and a large base of long-standing clients. If it withdrew from Europe now, wouldn't people conclude that it surrenders at the first sign of trouble and cannot be trusted at all?
Mr. Cai suddenly saw the point. He decided on the spot not to liquidate the company's European business, and to stay the course through any future crisis, even at great short-term cost. In making this decision, Mr. Cai did not ignore the economic data; at my suggestion, he adopted a different way of thinking and brought more comprehensive information into his reading of it. People and institutions that make the right decisions in hard times often win greater respect, and that is something traditional data analysis cannot capture.
Mr. Cai's story shows the power of data analysis, but it also lays bare its shortcomings and limits. Human life today is regulated and directed by data-collecting computers, and when the human brain cannot grasp a situation in time, data can help us interpret and analyze it, compensating for over-reliance on intuition and emotion and easing the distortions that our inner desires impose on reason. But in the final analysis, data cannot replace human thinking. Only by seeing the true value of data clearly can we escape complete dependence on it.
Real big data analysis helps us understand that true value. It searches large volumes of data for patterns, correlations, and other useful information, helping people and enterprises adapt to change and make truly wise decisions.
At the level of big data, there are four directions and solutions for massive data:
1. Technically solving the problem of making data cheap;
2. Analyzing data in near real time, without lag, so the data remains effective;
3. Making big data visual and discoverable, turning search and visualization into popular applications and making data more accurate;
4. Optimizing all-in-one appliances at the equipment level, making data production and analysis more convenient and less costly.
Even with the best technology, before analyzing data people should know what the data really means, just as they should know themselves. A decision maker who is a stranger to data is a danger to his own career. Many product managers, designers, and executives now modify product designs and make decisions straight from the numbers, without fully understanding what the data means; the results are often counterproductive.
☆ Predict
Big data technology is like a microscope: it can collect and analyze the most inconspicuous information, and it can support scientific decisions based on the logical relationships among those signals. Just as we can anticipate a person's next move and gauge his inner emotional state from his expressions and words, the predictive function helps government and enterprise managers make more rational decisions in business, economics, and other fields, rather than relying solely on intuition and experience.
Brand, a manager in IBM's energy and power division, said: "We use big data to forecast wind and solar power, accurately predicting the electricity output from solar and wind, and we have achieved very good results. This is an unprecedented, innovative model that will let the energy and power industry address the intermittency of renewable energy."
IBM developed an intelligent system that combines weather forecasting with power forecasting to increase system availability and optimize grid performance. A game-changing invention that joins big data analysis to weather modeling technology, it is among the most advanced energy and power solutions in the world and improves the predictability of renewable energy.
The forecasting technology, called HyRef (Hybrid Renewable Energy Forecasting), uses weather modeling, advanced cloud imaging, and sky-facing cameras to track cloud movement in near real time, while sensors monitor wind speed, temperature, and direction. Through accurate analysis it can give wind power companies precise regional weather forecasts for the next 30 days, or wind power increments for the next 15 minutes. This lets energy companies bring more renewable energy into production, cut carbon emissions, and produce more clean energy.
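HyRef itself is proprietary, so as a stand-in the sketch below applies the textbook wind-power relation P = 0.5 x rho x A x Cp x v^3 to a short series of wind-speed readings, with cut-in, rated, and cut-out behavior. All turbine parameters are illustrative assumptions, not IBM's.

```python
RHO = 1.225          # air density, kg/m^3
AREA = 5027.0        # rotor swept area for a ~40 m blade radius, m^2
CP = 0.40            # assumed power coefficient (the Betz limit is ~0.593)
CUT_IN, RATED_V, CUT_OUT = 3.0, 12.0, 25.0    # m/s
RATED_P = 0.5 * RHO * AREA * CP * RATED_V**3  # watts at rated speed

def predicted_power(v: float) -> float:
    """Predicted output (watts) for wind speed v, following the power curve."""
    if v < CUT_IN or v > CUT_OUT:
        return 0.0               # too slow to turn, or shut down for safety
    raw = 0.5 * RHO * AREA * CP * v**3
    return min(raw, RATED_P)     # output plateaus at rated power

# Wind-speed readings for the next few 15-minute intervals (m/s), as in the text.
forecast_speeds = [4.2, 7.8, 11.5, 14.0, 26.0]
for v in forecast_speeds:
    print(f"wind {v:5.1f} m/s -> {predicted_power(v) / 1e6:6.2f} MW")
```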
This predictive capability genuinely upgrades our production model, and it can be applied to other fields: natural gas, coal, and other traditional industries. Beyond physical industry, the demand for big data prediction in non-manufacturing services is greater still, and the market broader. It can help enterprises and government agencies analyze and forecast their business (and services), tailor their work, reduce costs, and respond to crises in advance; it can also predict price trends in real estate with an accuracy far beyond a traditional analyst's. Every one of us stands to benefit immensely from it.
☆ Control
(End of this chapter)