Sourced through Scoop.it from: www.aipro.info
Sourced through Scoop.it from: www.columbia.edu
The increasing power of computer technology does not dispense with the need to extract meaningful information out of data sets of ever growing size, and indeed typically exacerbates the complexity of this task. To tackle this general problem, two methods have emerged, at chronologically different times, that are now commonly used in the scientific community: data mining and complex network theory. Not only do complex network analysis and data mining share the same general goal, that of extracting information from complex systems to ultimately create a new compact quantifiable representation, but they also often address similar problems. Despite this, surprisingly few researchers turn out to resort to both methodologies. One may then be tempted to conclude that these two fields are either largely redundant or totally antithetic. The starting point of this review is that this state of affairs should be put down to contingent rather than conceptual differences, and that these two fields can in fact advantageously be used in a synergistic manner. An overview of both fields is first provided, and some of their fundamental concepts are illustrated. A variety of contexts in which complex network theory and data mining have been used in a synergistic manner are then presented. Contexts in which the appropriate integration of complex network metrics can lead to improved classification rates with respect to classical data mining algorithms and, conversely, contexts in which data mining can be used to tackle important issues in complex network theory applications are illustrated. Finally, ways to achieve a tighter integration between complex networks and data mining, and open lines of research, are discussed.
Combining complex networks and data mining: why and how
M. Zanin, D. Papo, P. A. Sousa, E. Menasalvas, A. Nicchi, E. Kubik, S. Boccaletti
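One concrete way to picture the synergy the abstract describes is to represent each sample of a dataset as a network, extract a few topological metrics, and feed those metrics to a standard data-mining classifier. The sketch below is only an illustration of that idea, not the authors' pipeline: the threshold-based network construction, the particular metrics (density, clustering, global efficiency), the synthetic correlation data, and the choice of a random forest are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's method): build a network per sample from
# a correlation matrix, extract topological metrics, and use them as features.
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def network_features(corr, threshold=0.5):
    """Threshold a correlation matrix into a graph and return a few metrics."""
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)                 # no self-loops
    g = nx.from_numpy_array(adj)
    return [
        nx.density(g),                       # fraction of possible links present
        nx.average_clustering(g),            # local transitivity
        nx.global_efficiency(g),             # shortest-path based integration
    ]

# Synthetic example: 100 "subjects", each described by a 10x10 correlation matrix.
X, y = [], []
for label in (0, 1):
    for _ in range(50):
        data = rng.normal(size=(200, 10))
        if label == 1:
            data += 1.5 * rng.normal(size=(200, 1))  # shared signal -> denser networks
        X.append(network_features(np.corrcoef(data, rowvar=False)))
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```

In this toy setup the network metrics, rather than the raw data, are what the classifier sees; that is the general pattern of "network metrics as features" that the review discusses.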
Although no one can quite agree on how to define big data, the general idea is to find datasets so enormous that they can reveal patterns invisible to conventional inquiry. The data are often generated by millions of real-world user actions, such as tweets or credit-card purchases, and they can take thousands of computers to collect, store, and analyze. To many companies and researchers, though, the investment is worth it because the patterns can unlock information about anything from genetic disorders to tomorrow's stock prices.
But there is a problem: it is tempting to think that, with such an incredible volume of data behind them, studies relying on big data could not be wrong. Yet the very bigness of the data can imbue the results with a false sense of certainty. Many such findings are probably bogus, and the reasons why should give us pause about any research that blindly trusts big data.
~ Samuel Arbesman (author)
From the power grid to the stock market to the latest iOS, complex systems are plagued by unintended glitches, unpredictable behavior, and unexplainable system failures. Why can’t we make things simpler? Is technological complexity inevitable? And how are we supposed to deal with technology that nobody can understand anymore?
In Overcomplicated, complexity scientist Samuel Arbesman explores the forces that lead us to continue to make systems more complicated and more incomprehensible, despite our desperate desire for them to be more coherent. He offers a new framework for dealing with complex systems. We must abandon the idea that we can understand the rules, and instead become field biologists for technology, relying on description and observation to uncover facts about how a system might work.
Whether you work in business, finance, science, or IT (or simply own a smartphone), Overcomplicated offers valuable insight into how to adapt to the complex age we are living in.
Overcomplicated: Technology at the Limits of Comprehension
by Samuel Arbesman
A mathematical technique for comparing large symbol sets suggests that less frequently used words are mainly responsible for the evolution of the English language over the past two centuries.
Similarity of Symbol Frequency Distributions with Heavy Tails
Martin Gerlach, Francesc Font-Clos, and Eduardo G. Altmann
Phys. Rev. X 6, 021009
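The paper concerns measures of similarity between heavy-tailed symbol-frequency distributions, such as word frequencies in large corpora. As a minimal illustration of that kind of comparison, the sketch below computes the standard Jensen-Shannon divergence between two word-frequency distributions; the generalized, heavy-tail-aware variants analyzed in the paper are not reproduced here, and the two toy "corpora" are invented purely for the example.

```python
# Minimal illustration: Jensen-Shannon divergence between two empirical
# word-frequency distributions. The paper studies generalized versions of such
# measures for heavy-tailed (Zipfian) vocabularies; this only shows the basic idea.
from collections import Counter
import math

def jensen_shannon(p_counts, q_counts):
    """JSD (in bits) between two empirical word-frequency distributions."""
    vocab = set(p_counts) | set(q_counts)
    n_p, n_q = sum(p_counts.values()), sum(q_counts.values())
    jsd = 0.0
    for w in vocab:
        p = p_counts.get(w, 0) / n_p
        q = q_counts.get(w, 0) / n_q
        m = 0.5 * (p + q)                 # mixture distribution
        if p > 0:
            jsd += 0.5 * p * math.log2(p / m)
        if q > 0:
            jsd += 0.5 * q * math.log2(q / m)
    return jsd

# Toy corpora (hypothetical): the rarer words are what differ between the texts.
old_text = Counter("the of and whilst thee thou connexion the of and".split())
new_text = Counter("the of and while you you connection the of and".split())
print("JSD =", round(jensen_shannon(old_text, new_text), 3))
```

Because common words ("the", "of", "and") dominate both counts while the rare words differ, a measure of this kind is sensitive to exactly the low-frequency vocabulary that the paper identifies as driving language change.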
For a century, elites have worked to eliminate monetary gold, both physically and ideologically.
This began in 1914, with the UK’s entry into the First World War. The Bank of England wanted to suspend convertibility of bank notes into gold. Keynes counselled wisely that the bank should not do so. Gold was finite, but credit elastic.
By staying on gold, the UK could maintain its credit, and finance the war effort. This transpired. The House of Morgan organised massive credits for the UK, and none for Germany. This finance was crucial, and sustained the UK until the US abandoned neutrality and tipped the military balance against Germany.
Despite formal convertibility of sterling to gold, the Bank of England successfully discouraged actual conversion.
Gold sovereigns were withdrawn from circulation and turned into 400-ounce bars. This form of bullion limited gold ownership to the wealthy, and confined gold’s presence to vaults. A similar disappearance of gold as a circulating currency occurred in the US.
Sourced through Scoop.it from: davidstockmanscontracorner.com