An interesting article appeared in Wired this June, titled The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. I don't know much about logic or the philosophy of science (I took maybe an hour of it in a cognitive science class), and I'm not sure I follow the English exactly, but reading it alongside a Korean blog post, I think I can make out a little of it.
In general, when we look at a phenomenon, we assume there is a cause behind the effect, and the task is to identify that cause and build a model. A relationship between a cause and its effect is called causation; a correlation is a relationship that holds even when no cause can be found for the effect. Science seems to find more meaning in causation than in correlation.
The article's argument is this: science has held that causation, not correlation, is what matters, but the computing world now holds so much data (petabytes) that once a correlation is strong enough, there is no need to find the cause at all.
And then it talks about Google:
Google's founding philosophy is that we don't know why this page is better than that one: If the statistics of incoming links say it is, that's good enough. No semantic or causal analysis is required.
You can do better with a staggering amount of data, so the scientific method of hypothesizing, modeling, and testing should be abandoned:
But faced with massive data, this approach to science — hypothesize, model, test — is becoming obsolete.
With a terrifying amount of data (petabytes), correlation is enough; this is no longer the time to uncover causality, the article argues. Throw the numbers into a computer without hypotheses and it will find the patterns:
There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.
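As a toy illustration of "letting statistical algorithms find patterns" with no hypothesis in hand, here is a minimal sketch (my own, not from the article): numbers are generated with a hidden two-group structure, and a simple 1-D k-means loop recovers the groups without being told they exist.

```python
import random

random.seed(1)

# Numbers "thrown in" with no stated hypothesis; they happen to
# come from two clumps, around 0 and around 10.
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(10, 1) for _ in range(200)]
random.shuffle(data)

# A statistical algorithm (1-D k-means, Lloyd's iterations)
# finds the pattern on its own.
centers = [min(data), max(data)]  # crude initialization
for _ in range(20):
    groups = [[], []]
    for x in data:
        # assign each point to its nearer center
        groups[abs(x - centers[0]) > abs(x - centers[1])].append(x)
    centers = [sum(g) / len(g) for g in groups]

print(sorted(round(c, 1) for c in centers))
```

The algorithm finds the two centers, but it cannot say why the data clumps there; that question is exactly what the article proposes we stop asking.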
And it talks like this, saying we look at the data mathematically first and create the context for it later:
It forces us to view data mathematically first and establish a context for it later.
Of course, the world is complex and no theory can explain everything. The real world, unlike the controlled conditions that theory favors, is messy.
Looking at the history of cognitive science, connectionism, the view that even when the cause of a result is unknown, the factors likely to influence it can learn from one another and jointly produce it, did gain explanatory power. It shook symbolicism badly enough that I remember mentioning it on Tumblr when I wrote about quantum computers. Still, from what I was taught in my cognitive science class, connectionism was never fully explanatory either. I don't know whether that is the mechanism of the human brain, but approaching science with "sufficient correlation" alone, the way connectionism does, seems rare.
The phrase "without hypotheses" sounds to me like "without thinking" and "irresponsibly". It sounds as if we no longer need scientists in every discipline, just computer scientists. To put it in the extreme, what would researchers in each field even do? All you'd need is a cluster of computers, a statistical program, and a scientist who can explain the data.
In the book 'The Logic of Persuasion', the method of scientific inquiry in logic is described as the hypothetico-deductive method: come up with a hypothesis that could solve the problem, deduce a prediction from it, and accept the hypothesis as true if experiments or observations confirm the prediction. This article, however, goes as far as claiming that masses of data plus computers beat this method, and that the existing way of doing research should be abandoned.
I agree to some extent that correlation computed by machines can uncover truths. It supplements the existing method of seeking causes by adding another means: computers and computable data. In my own case, I too discover things by data-mining web logs, and like most people I then struggle to explain why. But replacement is something I cannot agree to.
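A minimal sketch of the kind of web-log mining I mean (the sessions and page paths are invented for illustration): counting which pages tend to be visited together surfaces a pattern, but the data says nothing about why, which is the part still left to a person.

```python
from collections import Counter
from itertools import combinations

# Hypothetical sessions from a web log (paths are made up).
sessions = [
    ["/home", "/pricing", "/signup"],
    ["/home", "/blog"],
    ["/pricing", "/signup"],
    ["/home", "/pricing", "/signup", "/docs"],
    ["/blog", "/docs"],
]

# Count how often each pair of pages co-occurs in a session.
pairs = Counter()
for s in sessions:
    for a, b in combinations(sorted(set(s)), 2):
        pairs[(a, b)] += 1

top_pair, count = pairs.most_common(1)[0]
print(top_pair, count)  # ('/pricing', '/signup') 3
```

The mining step is mechanical; explaining whether pricing drives signups, or both follow from something else, is the causal work the article would have us skip.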
I do not know the philosophy of science or the history of science, but Einstein built a model called the theory of relativity, and that model was verified by experiment. A theory like that does not seem to come out of feeding computable data into a computing cluster. If it could, would computers simply pour out Nobel-worthy theories in short order?
This article seems to have been written purely for Google's sake. The claim that data processing should change not only search but the scientific approach itself is expanded into logic and the philosophy of science, and Google stands behind it.
No matter how complicated the world is, I think we should keep trying to explain causes. Pointing out that probabilistic numbers over quantitative data can correlate well enough is meaningful, but building models still seems an essential method. I don't know much about topics like logic or philosophy of science, so I don't have a firm opinion of my own, but that's roughly how it seems to me. Ah, I feel like I need to study this! Why do I keep seeing more things I don't know than things I do?
Klaus, the head of UER at Yahoo!, says in a blog post titled The End of Theory? I hope not! that we should not give up on understanding and explanation:
The fact that models and theories often fall short of explaining empirical phenomena – whether in biology, physics or the social sciences – should not make us abandon the quest for understanding and explaining.