I spend a lot of time these days reading papers on prediction from different fields looking for connections between methods.
This is an interesting paper: ‘On the frequency and severity of interstate wars’ (2019).
‘Lewis Fry Richardson argued that the frequency and severity of deadly conflicts of all kinds, from homicides to interstate wars and everything in between, followed universal statistical patterns: their frequency followed a simple Poisson arrival process and their severity followed a simple power-law distribution. Although his methods and data in the mid-20th century were neither rigorous nor comprehensive, his insights about violent conflicts have endured. In this chapter, using modern statistical methods and data, we show that Richardson’s original claims appear largely correct, with a few caveats. These facts place important constraints on our understanding of the underlying mechanisms that produce individual wars and periods of peace, and shed light on the persistent debate about trends in conflict…
Fifty years or more of relatively few large wars is thus entirely typical, given the empirical distribution of war sizes, and observing a long period of peace is not necessarily evidence of a changing likelihood for large wars [12, 13]. Even periods comparable to the great violence of the World Wars are not statistically rare under Richardson’s model… Under the model, the 100-year probability of at least one war with 16,634,907 or more battle deaths (the size of the Second World War) is 0.43 ± 0.01, implying about one such war per 161 years, on average…
[Simulation to test how unusual the long peace without very big war since 1945 is…] It is not until 100 years into the future [from 2003] that the long peace becomes statistically distinguishable from a large but random fluctuation in an otherwise stationary process… Our modeling effort here cannot rule out the existence of a change in the rules that generate interstate conflicts, but if it occurred, it cannot have been a dramatic shift. The results here are entirely consistent with other evidence of genuine changes in the international system, but they constrain the extent to which such changes could have genuinely impacted the global production of interstate wars…
The agreement between the historical record of interstate wars and Richardson’s simple model of their frequency and severity is truly remarkable, and it stands as a testament to Richardson’s lasting contribution to the study of violent political conflict…
The lower portion of the distribution is slightly more curved than expected for a simple power law, which suggests potential differences in the processes that generate wars above and below this threshold [7k deaths].
How can it be possible that the frequency and severity of interstate wars are so consistent with a stationary model, despite the enormous changes and obviously non-stationary dynamics in human population, in the number of recognized states, in commerce, communication, public health, and technology, and even in the modes of war itself? The fact that the absolute number and sizes of wars are plausibly stable in the face of these changes is a profound mystery for which we have no explanation.
Our results here indicate that the post-war efforts to reduce the likelihood of large interstate wars have not yet changed the observed statistics enough to tell if they are working.
The long peace pattern is sometimes described only in terms of peace among largely European powers, who fell into a peaceful configuration after the great violence for well understood reasons. In parallel, however, conflicts in other parts of the world, most notably Africa, the Middle East, and Southeast Asia, have become more common, and these may have statistically balanced the books globally against the decrease in frequency in the West, and may even be causally dependent on the drivers of European war and then peace.’
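For readers who want to play with the model themselves, here is a minimal Monte Carlo sketch of the Poisson-frequency / power-law-severity setup the paper tests. The arrival rate (0.5 wars per year) is my illustrative guess, not the paper's fitted value, and the tail exponent α ≈ 1.53 above x_min = 7061 battle deaths is the commonly quoted fit for interstate-war severities — so the number this prints lands in the same ballpark as the paper's 0.43 but will not reproduce it exactly.

```python
import math
import random

def poisson_sample(lam):
    """Draw from a Poisson distribution (Knuth's method; fine for modest lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

def simulate_century(rate=0.5, alpha=1.53, xmin=7061,
                     years=100, threshold=16_634_907):
    """One 100-year window: Poisson war arrivals, power-law severities.
    Returns True if any simulated war reaches `threshold` battle deaths."""
    for _ in range(poisson_sample(rate * years)):
        u = random.random()
        # inverse-CDF draw from a power law with exponent alpha above xmin
        severity = xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))
        if severity >= threshold:
            return True
    return False

random.seed(1)
trials = 20_000
frac = sum(simulate_century() for _ in range(trials)) / trials
print(f"Estimated 100-year probability of a WWII-scale war: {frac:.2f}")
```

Tweaking `rate` (in particular, using the arrival rate of only those wars above the 7061-death threshold, as the paper effectively does) moves the answer around considerably, which is itself instructive about how sensitive these long-peace claims are to modelling choices.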
Please leave comments below and links to other work that may throw light on this…
Complexity, ‘fog and moonlight’, prediction, and politics I: Introduction (July 2014).
Complexity and prediction II: controlled skids and immune systems (September 2014). Why is the world so hard to predict? Nonlinearity and Bismarck. How do humans adapt? The difference between science and political predictions. Feedback and emergent properties. Decentralised problem-solving in the immune system and ant colonies.
Complexity and prediction III: von Neumann and economics as a science (September 2014). This examines von Neumann’s views on the proper role of mathematics in economics and some history of game theory.
Complexity and prediction IV: The birth of computational thinking (September 2014). Leibniz and computational thinking. The first computers. Punched cards. Optical data networks. Wireless. The state of the field by the time of Turing’s 1936 paper… These sketches may help in trying to understand 1) contemporary discussions about complex systems in general, 2) new tools that are being developed, and 3) contemporary debates concerning scientific, technological, economic, and political issues which depend on computers – from algorithmic high frequency trading to ‘agent based models’, machine intelligence, and military robots.
Complexity and prediction V: The crisis of mathematical paradoxes, Gödel, Turing and the basis of computing (June 2016). The paper concerns a fascinating episode in the history of ideas that saw the most esoteric and impractical field, mathematical logic, spawn a revolutionary technology, the modern computer. NB. a great lesson to science funders: it’s a great mistake to cut funding for theory and assume that you’ll get more bang for your buck from ‘applications’.
http://nautil.us/issue/70/variables/cloudy-with-a-chance-of-war-rp (about the connections between Richardson’s work on war and his work on weather)
It shows a mysteriously accurate exponential relationship between the chance of a war and its severity (as if a natural process controls war).
No time to post anything much but I was at a presentation last week by the lead data guy from this University of Warwick project: https://warwick.ac.uk/newsandevents/pressreleases/retool_ai_to/
North East Iran and all of Saudi minus Riyadh are in trouble, apparently. I’m interested in the compatibility of something like this with Black Swan.
There’s an astonishing amount of researcher degrees of freedom here, plus a very small sample size. I’d take this with a huge grain of salt.
As a dilettante in probability, Poisson for frequency and power law for severity seems like it should be intuitively correct, but I don’t see a good case for it here.
It doesn’t surprise me that much, and here’s my hand waving attempt to explain it. Curious as to others’ takes.
As I understand it, power laws are often generated by processes characterized by preferential attachment.
A lot of human behavior follows along those lines, where the rich get richer and success breeds further success. This is in contrast to molecules, which typically do not act in a social manner towards other molecules, and so tend more towards Gaussian distributions.
What this paper shows me is that this preferential attachment mechanism likely applies to wars as well. The initiation and continuation of wars as policy (or dispute resolution) by other means is a social accumulation phenomenon.
Under this hypothesis, interest in a war presumably accumulates along an intuitively appealing axis: joining the winning side. By “sides” here I mean not the actual participants in the war but the internal hawkish coalitions vs dovish coalitions pushing for escalation or compromise of the disputes/policy urges. You can have a big war only if you build a big hawkish internal coalition (otherwise you won’t start it, or you’ll compromise at some point). Total deaths in a war likely fluctuate along with this cumulative interest, so numerically it looks like a power-law distribution.
That this hasn’t changed over eons of human social existence also isn’t too surprising. Despite the ongoing march of progress of human civilization, warlike attitudes (even in pure simulations like movies) still seem to be as in high demand as ever. Despite advances in communications, the mechanisms for starting and jumping on social bandwagons are still mostly the same. In the internet era, we’re still back to word of mouth as predominant means for persuasion.
So you can only get a war going, or keep it going, if you get a big enough gang of people jumping on board. And to stretch the reasoning: below roughly 7061 expected deaths, case-by-case factors are still important, but above that scale the dominant factor is social contagion (leading to preferential attachment and power laws).
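To make the hand-waving a bit more concrete, here is a minimal sketch of the classic mechanism I’m gesturing at — Simon’s preferential-attachment process, in which each new arrival either founds a new group or joins an existing one in proportion to its current size. The parameters (`steps`, `p_new`) are arbitrary; the point is just that this rich-get-richer rule generates heavy, power-law-like tails in group sizes.

```python
import random
from statistics import median

def simon_model(steps=100_000, p_new=0.05, seed=2):
    """Simon's preferential-attachment process: each arrival either founds a
    new group (probability p_new) or joins an existing group chosen with
    probability proportional to its current size."""
    random.seed(seed)
    sizes = [1]     # size of each group
    members = [0]   # one entry per member: the index of that member's group
    for _ in range(steps):
        if random.random() < p_new:
            sizes.append(1)
            members.append(len(sizes) - 1)
        else:
            # picking a uniformly random member weights groups by their size
            g = random.choice(members)
            sizes[g] += 1
            members.append(g)
    return sizes

sizes = simon_model()
print(f"groups: {len(sizes)}, largest: {max(sizes)}, median: {median(sizes)}")
```

The largest group ends up orders of magnitude bigger than the typical one — the same qualitative signature as the war-severity distribution, if you read “group” as “hawkish coalition behind a war”.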
This may not be the right place for this comment but just wanted to say “thanks” for the blog and in particular for an earlier post on number theory. In exchange, as a practising economist I must say that when it comes to predicting (usefully) human interaction across the economy, we are doomed to failure: there is no data generation process, and making chaos mathematically tractable does not make it meaningful. Maybe the problem with economics is we’re just trying too hard to explain the inexplicable?
Coming quite late to this party: this paper is an elegant piece, but it’s certainly not the only recent paper to make this point. Nassim Nicholas Taleb (slightly less of a twat than Steven Pinker) has a paper arguing against the declining-war thesis from statistical first principles: https://www.fooledbyrandomness.com/longpeace.pdf
Conflict researchers have shown good predictive utility from just two variables when predicting intrastate conflict: log GDP per capita and log population. Everything else improves the predictions only incrementally: https://www.researchgate.net/profile/Michael_Ward12/publication/227574659_The_perils_of_policy_by_p-value_Predicting_civil_conflicts/links/0046352d05541bf4c2000000.pdf
For a connection between power laws and language, see Zipf’s law, a neat expression of the multiplicative relationships that hold human languages together.
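Zipf’s rank-frequency law and these severity power laws are two views of the same heavy tail: if you draw samples from a power law and sort them largest-first, the item at rank r scales like r^(−1/(α−1)). A minimal numerical check of that relationship (all parameters illustrative):

```python
import math
import random

random.seed(3)
alpha, xmin, n = 2.0, 1.0, 50_000

# Draw power-law samples via the inverse CDF, then rank them largest-first.
samples = sorted(
    (xmin * (1 - random.random()) ** (-1 / (alpha - 1)) for _ in range(n)),
    reverse=True,
)

# Zipf-style check: size at rank r should scale like r^(-1/(alpha-1)),
# i.e. a log-log slope of -1 here, since alpha = 2.
r1, r2 = 10, 1000
slope = (math.log(samples[r2]) - math.log(samples[r1])) / (math.log(r2) - math.log(r1))
print(f"rank-size slope ≈ {slope:.2f} (theory: {-1 / (alpha - 1):.2f})")
```

The estimated slope wobbles with the random seed, especially at low ranks where the order statistics are noisy — which is also why fitting tail exponents to a handful of very large wars is so treacherous.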