Economy

Links (10/31/19)

  • Manufacturing Ain’t Great Again. Why? – Paul Krugman 
    When Donald Trump promised to Make America Great Again, his slogan meant different things to different people. For many supporters it meant restoring the political and social dominance of white people, white men in particular. For others, however, it meant restoring the kind of economy we had a generation or two ago, which offered lots of manly jobs for manly men: farmers, coal miners, manufacturing workers. So it may matter a lot, politically, that Trump has utterly failed to deliver on that front — and that workers are noticing. Now, many of Trump’s economic promises were obvious nonsense. The hollowing out of coal country reflected new technologies, like mountaintop removal, which require few workers, plus competition from other energy sources, especially natural gas but increasingly wind and solar power. Coal jobs aren’t coming back, no matter how dirty Trump lets the air get.
  • Stop Inflating the Inflation Threat – J. Bradford DeLong
    Given the scale and severity of inflation in America in the 1970s, it is understandable that US monetary policymakers developed a deep-seated fear of it. But, nearly a half-century later, the conditions that justified such worries no longer apply, and it is past time that we stopped denying what the data are telling us.
  • How to Tax Our Way Back to Justice – Saez and Zucman
    It is absurd that the working class is now paying higher tax rates than the richest people in America.
  • It's Time to Go – Dave Giles
    When I released my first post on the blog on 20 February 2011, I really wasn't sure what to expect! After all, I was aiming to reach a somewhat niche audience. Well, 949 posts and 7.4 million page-hits later, this blog has greatly exceeded my wildest expectations. However, I'm now retired and I turned 70 three months ago. I've decided to call it quits, and this is my final post. I'd rather make a definite decision about this than have the blog just fizzle into nothingness. For now, the Econometrics Beat blog will remain visible, but it will be closed for further comments and questions.
  • Prospects for Inflation in a High Pressure Economy: Is the Phillips Curve Dead or Is It Just Hibernating? – Brad DeLong 
    I have some disagreements with this by the smart Sufi, Mishkin, and Hooper: the evidence for "significant nonlinearity" in the Phillips Curve is that the curve flattens when inflation is low, not that it steepens when labor slack is low. There is simply no "strong evidence" of significant steepening with low labor slack. Yes, you can find specifications with a t-statistic of 2 in which this is the case, but you have to work hard to find such specifications, and your results are fragile. The fact is that in the United States between 1957 and 1988—the first half of the last 60 years—the slope of the simplest-possible adaptive-expectations Phillips Curve was -0.54: each one-percentage point fall in unemployment below the estimated natural rate boosted inflation in the subsequent year by 0.54%-points above its contemporary value. Since 1988—in the second half of the past 60 years—the slope of this simplest-possible Phillips curve has been effectively zero: the estimated regression coefficient has been not -0.54 but only -0.03. The most important observations driving the estimated negative slope of the Phillips Curve in the first half of the past sixty years were 1966, 1973, and 1974—inflation jumping up in times of relatively-low unemployment—and 1975, 1981, and 1982—inflation falling in times of relatively-high unemployment. The most important observations driving the estimated zero slope of the Phillips Curve in the second half of the past sixty years have been 2009-2014: the failure of inflation to fall as the economy took its Great-Recession excursion to a high-unemployment labor market with enormous slack. 
Yes, if we had analogues of (a) two presidents, Johnson and Nixon, desperate for a persistent high-pressure economy; (b) a Federal Reserve chair like Arthur Burns eager to accommodate presidential demands; (c) the rise of a global monopoly in the economy's key input able to deliver mammoth supply shocks; and (d) a decade of bad luck; then we might see a return to inflation as it was in the (pre-Iran crisis) early and mid-1970s. But is that really the tail risk we should be focused monomaniacally on? And how is it, exactly, that "the difference between national and city/state results in recent decades can be explained by the success that monetary policy has had in quelling inflation and anchoring inflation expectations since the 1980s"? Neither of those two should affect the estimated coefficient. Much more likely is simply that—at the national level and at the city/state level—the Phillips Curve becomes flat when inflation becomes low:
  • Debt, Doomsayers and Double Standards – Paul Krugman
    Selective deficit hysteria has done immense damage.
  • Fed Attempts To Conclude Their Mid-Cycle Adjustment – Tim Duy
    After spending much of the year battling the forces of uncertainty weighing on the economy, the Fed declared victory today. Absent a fresh deterioration in the economic outlook, Fed Chair Jerome Powell and his colleagues believe they are done cutting rates with this month’s policy move. Expect an extended policy pause; the Fed is neither interested in easing policy further given their outlook nor in soon raising rates back up given continued below-target inflation.
  • Fall 2019 Journal of Economic Perspectives Available Online – Tim Taylor
    I'll start with the Table of Contents for the just-released Fall 2019 issue, which in the Taylor household is known as issue #130. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next week or two, as well.
  • Does a wealth tax discourage risky investments? – Digitopoly
    The other day I wrote about the potential impact of a wealth tax. In doing so, I wrote: “we can all agree that the wealth tax likely deters risk-free saving.” This paraphrased a claim by Larry Summers, who went on to say that it was unknown whether a wealth tax would encourage or discourage risky investment. But I wondered what the impact of a wealth tax would be on various types of investments, and in examining this I realized that the claim was incorrect. In fact, a wealth tax is unlikely to change the risk profile of investments, in contrast to an income (or even consumption) tax, which will. I later discovered that this is a known result, contained in a paper by Joe Stiglitz (QJE, 1969).
  • Will Libra Be Stillborn? – Barry Eichengreen
    Where the problem for economies and financial services is lack of competition, residents of developing countries need to look to their own regulators and politicians. The remedy for their woes is not going to come from Mark Zuckerberg.
  • Children of Poor Immigrants Rise, Regardless of Where They Come From – The New York Times 
    A pattern that has persisted for a century: They tend to outperform children of similarly poor native-born Americans.
  • The tempos of capitalism – Understanding Society
    I've been interested in the economic history of capitalism since the 1970s, and there are a few titles that stand out in my memory. There were the Marxist and neo-Marxist economic historians (Marx's Capital, E.P. Thompson, Eric Hobsbawm, Rodney Hilton, Robert Brenner, Charles Sabel); the debate over the nature of the industrial revolution (Deane and Cole, NFR Crafts, RM Hartwell, EL Jones); and volumes of the Cambridge Economic History of Europe. The history of British capitalism poses important questions for social theory: is there such a thing as "capitalism", or are there many capitalisms? What are the features of the capitalist social order that are most fundamental to its functioning and dynamics of development? Is Marx's intellectual construction of the "capitalist mode of production" a useful one? And does capitalism have a logic or tendency of development, as Marx believed, or is its history fundamentally contingent and path-dependent? Putting the point in concrete terms, was there a probable path of development from the "so-called primitive accumulation" to the establishment of factory production and urbanization to the extension of capitalist property relations throughout much of the world?
  • The Way We Measure the Economy Obscures What Is Really Going On – Heather Boushey
    By looking mainly at the big picture, we are missing the reality of inequality — and a chance to level the playing field.
  • Audits as Evidence: Experiments, Ensembles, and Enforcement – Brad DeLong
    This is absolutely brilliant, and quite surprising to me. I had imagined that most discrimination, in the aggregate, was the result of a thumb placed lightly on the scale over and over and over again. Here Pat and Chris present evidence that, at least in employment, it is very different: a relatively small proportion of employers really, really discriminate massively, while most follow race-neutral procedures and strategies:
  • Study analyzed tax treaties to assess effect of offshoring on domestic employment – EurekAlert
    The practice of offshoring–moving some of a company's manufacturing or services overseas to take advantage of lower costs–is on the rise and is a source of ongoing debate. A new study identified a way to determine how U.S. multinational firms' decisions about offshoring affect domestic employment. The study found that, on average, when U.S. multinationals increase employment in their foreign affiliates, they also modestly increase employment in the United States–albeit with substantial dislocation and reallocation of workers. The study was conducted by researchers at Carnegie Mellon University, Georgetown University, and the Federal Reserve Bank of Kansas City. It is published in The Review of Economics and Statistics.
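The “simplest-possible adaptive-expectations Phillips Curve” regression that DeLong describes above (the change in inflation regressed on the prior year's unemployment gap) takes only a few lines to sketch. The series below is a toy construction chosen to reproduce the quoted pre-1988 slope of -0.54, not the actual historical data:

```python
def phillips_slope(unemployment_gap, inflation):
    """OLS-through-the-origin slope of the adaptive-expectations
    Phillips Curve: pi_{t+1} - pi_t = beta * (u_t - u*)."""
    # Year-over-year change in inflation, pi_{t+1} - pi_t
    d_infl = [b - a for a, b in zip(inflation, inflation[1:])]
    # Unemployment gap in year t, aligned with the inflation change
    gaps = unemployment_gap[: len(d_infl)]
    # beta = sum(x*y) / sum(x*x)
    return sum(g * d for g, d in zip(gaps, d_infl)) / sum(g * g for g in gaps)

# Toy series constructed so the slope comes out at the quoted
# pre-1988 estimate (illustrative numbers, not historical data).
gap = [-1.0, -0.5, 0.0, 0.5, 1.0, 0.3]
infl = [2.00, 2.54, 2.81, 2.81, 2.54, 2.00]
print(phillips_slope(gap, infl))   # ~ -0.54
```

A flat post-1988 curve would show up here as a fitted slope near zero: inflation changes barely responding to the unemployment gap.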






Statistics, lies and the virus: five lessons from a pandemic

My new book, “How To Make The World Add Up”, is published today in the UK and around the world (except US/Canada).

Will this year be 1954 all over again? Forgive me, I have become obsessed with 1954, not because it offers another example of a pandemic (that was 1957) or an economic disaster (there was a mild US downturn in 1953), but for more parochial reasons. Nineteen fifty-four saw the appearance of two contrasting visions for the world of statistics — visions that have shaped our politics, our media and our health. This year confronts us with a similar choice.

The first of these visions was presented in How to Lie with Statistics, a book by a US journalist named Darrell Huff. Brisk, intelligent and witty, it is a little marvel of numerical communication. The book received rave reviews at the time, has been praised by many statisticians over the years and is said to be the best-selling work on the subject ever published. It is also an exercise in scorn: read it and you may be disinclined to believe a number-based claim ever again.

There are good reasons for scepticism today. David Spiegelhalter, author of last year’s The Art of Statistics, laments some of the UK government’s coronavirus graphs and testing targets as “number theatre”, with “dreadful, awful” deployment of numbers as a political performance.

“There is great damage done to the integrity and trustworthiness of statistics when they’re under the control of the spin doctors,” Spiegelhalter says. He is right. But we geeks must be careful — because the damage can come from our own side, too.

For Huff and his followers, the reason to learn statistics is to catch the liars at their tricks. That sceptical mindset took Huff to a very unpleasant place, as we shall see. Once the cynicism sets in, it becomes hard to imagine that statistics could ever serve a useful purpose. 

But they can — and back in 1954, the alternative perspective was embodied in the publication of an academic paper by the British epidemiologists Richard Doll and Austin Bradford Hill. They marshalled some of the first compelling evidence that smoking cigarettes dramatically increases the risk of lung cancer. The data they assembled persuaded both men to quit smoking and helped save tens of millions of lives by prompting others to do likewise. This was no statistical trickery, but a contribution to public health that is almost impossible to exaggerate. 

You can appreciate, I hope, my obsession with these two contrasting accounts of statistics: one as a trick, one as a tool. Doll and Hill’s painstaking approach illuminates the world and saves lives into the bargain. Huff’s alternative seems clever but is the easy path: seductive, addictive and corrosive. Scepticism has its place, but easily curdles into cynicism and can be weaponised into something even more poisonous than that.

The two worldviews soon began to collide. Huff’s How to Lie with Statistics seemed to be the perfect illustration of why ordinary, honest folk shouldn’t pay too much attention to the slippery experts and their dubious data. Such ideas were quickly picked up by the tobacco industry, with its darkly brilliant strategy of manufacturing doubt in the face of evidence such as that provided by Doll and Hill.

As described in books such as Merchants of Doubt by Erik Conway and Naomi Oreskes, this industry perfected the tactics of spreading uncertainty: calling for more research, emphasising doubt and the need to avoid drastic steps, highlighting disagreements between experts and funding alternative lines of inquiry. The same tactics, and sometimes even the same personnel, were later deployed to cast doubt on climate science. These tactics are powerful in part because they echo the ideals of science. It is a short step from the Royal Society’s motto, “nullius in verba” (take nobody’s word for it), to the corrosive nihilism of “nobody knows anything”. 

So will 2020 be another 1954? From the point of view of statistics, we seem to be standing at another fork in the road. The disinformation is still out there, as the public understanding of Covid-19 has been muddied by conspiracy theorists, trolls and government spin doctors.  Yet the information is out there too. The value of gathering and rigorously analysing data has rarely been more evident. Faced with a complete mystery at the start of the year, statisticians, scientists and epidemiologists have been working miracles. I hope that we choose the right fork, because the pandemic has lessons to teach us about statistics — and vice versa — if we are willing to learn.

1: The numbers matter

“One lesson this pandemic has driven home to me is the unbelievable importance of the statistics,” says Spiegelhalter. Without statistical information, we haven’t a hope of grasping what it means to face a new, mysterious, invisible and rapidly spreading virus. Once upon a time, we would have held posies to our noses and prayed to be spared; now, while we hope for advances from medical science, we can also coolly evaluate the risks.

Without good data, for example, we would have no idea that this infection is 10,000 times deadlier for a 90-year-old than it is for a nine-year-old — even though we are far more likely to read about the deaths of young people than the elderly, simply because those deaths are surprising. It takes a statistical perspective to make it clear who is at risk and who is not.

Good statistics, too, can tell us about the prevalence of the virus — and identify hotspots for further activity. Huff may have viewed statistics as a vector for the dark arts of persuasion, but when it comes to understanding an epidemic, they are one of the few tools we possess.

2: Don’t take the numbers for granted

But while we can use statistics to calculate risks and highlight dangers, it is all too easy to fail to ask the question “Where do these numbers come from?” By that, I don’t mean the now-standard request to cite sources, I mean the deeper origin of the data.

For all his faults, Huff did not fail to ask the question. He retells a cautionary tale that has become known as “Stamp’s Law” after the economist Josiah Stamp — warning that no matter how much a government may enjoy amassing statistics, “raise them to the nth power, take the cube root and prepare wonderful diagrams”, it was all too easy to forget that the underlying numbers would always come from a local official, “who just puts down what he damn pleases”.

The cynicism is palpable, but there is insight here too. Statistics are not simply downloaded from an internet database or pasted from a scientific report. Ultimately, they came from somewhere: somebody counted or measured something, ideally systematically and with care. These efforts at systematic counting and measurement require money and expertise — they are not to be taken for granted.

In my new book, How to Make the World Add Up, I introduce the idea of “statistical bedrock” — data sources such as the census and the national income accounts that are the results of painstaking data collection and analysis, often by official statisticians who get little thanks for their pains and are all too frequently the target of threats, smears or persecution.

In Argentina, for example, long-serving statistician Graciela Bevacqua was ordered to “round down” inflation figures, then demoted in 2007 for producing a number that was too high. She was later fined $250,000 for false advertising — her crime being to have helped produce an independent estimate of inflation.

In 2011, Andreas Georgiou was brought in to head Greece’s statistical agency at a time when it was regarded as being about as trustworthy as the country’s giant wooden horses. When he started producing estimates of Greece’s deficit that international observers finally found credible, he was prosecuted for his “crimes” and threatened with life imprisonment. Honest statisticians are braver — and more valuable — than we know.

In the UK, we don’t habitually threaten our statisticians — but we do underrate them.

“The Office for National Statistics is doing enormously valuable work that frankly nobody has ever taken notice of,” says Spiegelhalter, pointing to weekly death figures as an example. “Now we deeply appreciate it.” 

Quite so. This statistical bedrock is essential, and when it is missing, we find ourselves sinking into a quagmire of confusion.

The foundations of our statistical understanding of the world are often gathered in response to a crisis. For example, nowadays we take it for granted that there is such a thing as an “unemployment rate”, but a hundred years ago nobody could have told you how many people were searching for work. Severe recessions made the question politically pertinent, so governments began to collect the data. More recently, the financial crisis hit. We discovered that our data about the banking system was patchy and slow, and regulators took steps to improve it.

So it is with the Sars-Cov-2 virus. At first, we had little more than a few data points from Wuhan, showing an alarmingly high death rate of 15 per cent — six deaths in 41 cases. Quickly, epidemiologists started sorting through the data, trying to establish how exaggerated that case fatality rate was by the fact that the confirmed cases were mostly people in intensive care.
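The early Wuhan figure above (six deaths among 41 confirmed cases) shows why a naive case fatality rate overstates the danger when only the sickest patients get tested. A toy calculation, in which the 5 per cent ascertainment rate is an invented number purely for illustration:

```python
def naive_cfr(deaths, confirmed_cases):
    """Deaths as a share of confirmed cases only."""
    return deaths / confirmed_cases

def adjusted_ifr(deaths, confirmed_cases, ascertainment):
    """Deaths as a share of all infections, assuming only a fraction
    `ascertainment` of infections are ever confirmed (an invented
    figure here, purely for illustration)."""
    return deaths / (confirmed_cases / ascertainment)

print(naive_cfr(6, 41))            # ~0.146: the alarming 15% headline rate
print(adjusted_ifr(6, 41, 0.05))   # ~0.007 if only 1 in 20 infections is confirmed
```

The entire early debate over the virus's deadliness was, in effect, an argument about that ascertainment denominator.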

Quirks of circumstance — such as the Diamond Princess cruise ship, in which almost everyone was tested — provided more insight. Johns Hopkins University in the US launched a dashboard of data resources, as did the Covid Tracking Project, an initiative from the Atlantic magazine. An elusive and mysterious threat became legible through the power of this data.

That is not to say that all is well. Nature recently reported on “a coronavirus data crisis” in the US, in which “political meddling, disorganization and years of neglect of public-health data management mean the country is flying blind”. Nor is the US alone. Spain simply stopped reporting certain Covid deaths in early June, making its figures unusable. And while the UK now has an impressively large capacity for viral testing, it was fatally slow to accelerate this in the critical early weeks of the pandemic. Ministers repeatedly deceived the public about the number of tests being carried out by using misleading definitions of what was happening. For weeks during lockdown, the government was unable to say how many people were being tested each day.

Huge improvements have been made since then. The UK’s Office for National Statistics has been impressively flexible during the crisis, for example in organising systematic weekly testing of a representative sample of the population. This allows us to estimate the true prevalence of the virus. Several countries, particularly in east Asia, provide accessible, usable data about recent infections to allow people to avoid hotspots.

These things do not happen by accident: they require us to invest in the infrastructure to collect and analyse the data. On the evidence of this pandemic, such investment is overdue, in the US, the UK and many other places.

3: Even the experts see what they expect to see

Jonas Olofsson, a psychologist who studies our perceptions of smell, once told me of a classic experiment in the field. Researchers gave people a whiff of scent and asked them for their reactions to it. In some cases, the experimental subjects were told: “This is the aroma of a gourmet cheese.” Others were told: “This is the smell of armpits.” In truth, the scent was both: an aromatic molecule present both in runny cheese and in bodily crevices. But the reactions of delight or disgust were shaped dramatically by what people expected.

Statistics should, one would hope, deliver a more objective view of the world than an ambiguous aroma. But while solid data offers us insights we cannot gain in any other way, the numbers never speak for themselves. They, too, are shaped by our emotions, our politics and, perhaps above all, our preconceptions.

A striking example is the decision, on March 23 this year, to introduce a lockdown in the UK. In hindsight, that was too late. “Locking down a week earlier would have saved thousands of lives,” says Kit Yates, author of The Maths of Life and Death — a view now shared by influential epidemiologist Neil Ferguson and by David King, chair of the “Independent Sage” group of scientists.

The logic is straightforward enough: at the time, cases were doubling every three to four days. If a lockdown had stopped that process in its tracks a week earlier, it would have prevented two doublings and saved three-quarters of the 65,000 people who died in the first wave of the epidemic, as measured by the excess death toll.

That might be an overestimate of the effect, since people were already voluntarily pulling back from social interactions. Yet there is little doubt that if a lockdown was to happen at all, an earlier one would have been more effective. And, says Yates, since the infection rate took just days to double before lockdown but long weeks to halve once it started, “We would have got out of lockdown so much sooner . . . Every week before lockdown cost us five to eight weeks at the back end of the lockdown.”
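The doubling arithmetic behind this claim is easy to check: a week of growth at a three-to-four-day doubling time is roughly two doublings, a factor of four, so acting a week earlier would have spared about three-quarters of the first wave. A minimal sketch using the round numbers quoted in the text (the 3.5-day doubling time is an illustrative midpoint):

```python
def growth_factor(days, doubling_time_days):
    """How much an epidemic grows over `days`, given its doubling time."""
    return 2 ** (days / doubling_time_days)

factor = growth_factor(7, 3.5)      # one week = two doublings = a factor of 4
first_wave_deaths = 65_000          # first-wave toll by excess deaths, as above
deaths_saved = first_wave_deaths * (1 - 1 / factor)
print(factor, deaths_saved)         # 4.0 48750.0
```

The same exponential logic runs the other way after the peak, which is why a halving time of weeks made every pre-lockdown day so expensive at the back end.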

Why, then, was the lockdown so late? No doubt there were political dimensions to that decision, but senior scientific advisers to the government seemed to believe that the UK still had plenty of time. On March 12, prime minister Boris Johnson was flanked by Chris Whitty, the government’s chief medical adviser, and Patrick Vallance, chief scientific adviser, in the first big set-piece press conference.

Italy had just suffered its 1,000th Covid death and Vallance noted that the UK was about four weeks behind Italy on the epidemic curve. With hindsight, this was wrong: now that late-registered deaths have been tallied, we know that the UK passed the same landmark on lockdown day, March 23, just 11 days later.  It seems that in early March the government did not realise how little time it had.

As late as March 16, Johnson declared that infections were doubling every five to six days. The trouble, says Yates, is that UK data on cases and deaths suggested that things were moving much faster than that, doubling every three or four days — a huge difference. What exactly went wrong is unclear — but my bet is that it was a cheese-or-armpit problem. Some influential epidemiologists had produced sophisticated models suggesting that a doubling time of five to six days seemed the best estimate, based on data from the early weeks of the epidemic in China.

These models seemed persuasive to the government’s scientific advisers, says Yates: “If anything, they did too good a job.” Yates argues that the epidemiological models that influenced the government’s thinking about doubling times were sufficiently detailed and convincing that when the patchy, ambiguous, early UK data contradicted them, it was hard to readjust. We all see what we expect to see.

The result, in this case, was a delay to lockdown: that led to a much longer lockdown, many thousands of preventable deaths and needless extra damage to people’s livelihoods. The data is invaluable but, unless we can overcome our own cognitive filters, the data is not enough.

4: The best insights come from combining statistics with personal experience

The expert who made the biggest impression on me during this crisis was not the one with the biggest name or the biggest ego. It was Nathalie MacDermott, an infectious-disease specialist at King’s College London, who in mid-February calmly debunked the more lurid public fears about how deadly the new coronavirus was. Then, with equal calm, she explained to me that the virus was very likely to become a pandemic, that barring extraordinary measures we could expect it to infect more than half the world’s population, and that the true fatality rate was uncertain but seemed to be something between 0.5 and 1 per cent. In hindsight, she was broadly right about everything that mattered.

MacDermott’s educated guesses pierced through the fog of complex modelling and data-poor speculation. I was curious as to how she did it, so I asked her.

“People who have spent a lot of their time really closely studying the data sometimes struggle to pull their head out and look at what’s happening around them,” she said. “I trust data as well, but sometimes when we don’t have the data, we need to look around and interpret what’s happening.”

MacDermott worked in Liberia in 2014 on the front line of an Ebola outbreak that killed more than 11,000 people. At the time, international organisations were sanguine about the risks, while the local authorities were in crisis. When she arrived in Liberia, the treatment centres were overwhelmed, with patients lying on the floor, bleeding freely from multiple areas and dying by the hour.

The horrendous experience has shaped her assessment of subsequent risks: on the one hand, Sars-Cov-2 is far less deadly than Ebola; on the other, she has seen the experts move too slowly while waiting for definitive proof of a risk.

“From my background working with Ebola, I’d rather be overprepared than underprepared because I’m in a position of denial,” she said.

There is a broader lesson here. We can try to understand the world through statistics, which at their best provide a broad and representative overview that encompasses far more than we could personally perceive. Or we can try to understand the world up close, through individual experience. Both perspectives have their advantages and disadvantages.

Muhammad Yunus, a microfinance pioneer and Nobel laureate, has praised the “worm’s eye view” over the “bird’s eye view”, which is a clever sound bite. But birds see a lot too. Ideally, we want both the rich detail of personal experience and the broader, low-resolution view that comes from the spreadsheet. Insight comes when we can combine the two — which is what MacDermott did.

5: Everything can be polarised

When I reported on the numbers behind the Brexit referendum, the vote on Scottish independence, several general elections and the rise of Donald Trump, there was poison in the air: many claims were made in bad faith, indifferent to the truth or even embracing the most palpable lies in an effort to divert attention from the issues. Fact-checking in an environment where people didn’t care about the facts, only whether their side was winning, was a thankless experience.

For a while, one of the consolations of doing data-driven journalism during the pandemic was that it felt blessedly free of such political tribalism. People were eager to hear the facts after all; the truth mattered; data and expertise were seen to be helpful. The virus, after all, could not be distracted by a lie on a bus. 

That did not last. America polarised quickly: mask-wearing became a badge of political identity, Democrats sought to underline the threat posed by the virus, and Republicans followed President Trump in dismissing it as overblown. The prominent infectious-disease expert Anthony Fauci does not strike me as a partisan figure — but the US electorate thinks otherwise. He is trusted by 32 per cent of Republicans and 78 per cent of Democrats.

The strangest illustration comes from the Twitter account of the Republican politician Herman Cain, which late in August tweeted: “It looks like the virus is not as deadly as the mainstream media first made it out to be.” Cain, sadly, died of Covid-19 in July — but it seems that political polarisation is a force stronger than death.

Not every issue is politically polarised, but when something is dragged into the political arena, partisans often prioritise tribal belonging over considerations of truth. One can see this clearly, for example, in the way that highly educated Republicans and Democrats are further apart on the risks of climate change than less-educated Republicans and Democrats. Rather than bringing some kind of consensus, more years of education simply seem to provide people with the cognitive tools they require to reach the politically convenient conclusion. From climate change to gun control to certain vaccines, there are questions for which the answer is not a matter of evidence but a matter of group identity.

In this context, the strategy that the tobacco industry pioneered in the 1950s is especially powerful. Emphasise uncertainty, expert disagreement and doubt and you will find a willing audience. If nobody really knows the truth, then people can believe whatever they want.

All of which brings us back to Darrell Huff, statistical sceptic and author of How to Lie with Statistics. While his incisive criticism of statistical trickery has made him a hero to many of my fellow nerds, his career took a darker turn, with scepticism providing the mask for disinformation. Huff worked on a tobacco-funded sequel, How to Lie with Smoking Statistics, casting doubt on the scientific evidence that cigarettes were dangerous. (Mercifully, it was not published.) 

Huff also appeared in front of a US Senate committee that was pondering mandating health warnings on cigarette packaging. He explained to the lawmakers that there was a statistical correlation between babies and storks (which, it turns out, there is) even though the true origin of babies is rather different. The connection between smoking and cancer, he argued, was similarly tenuous. 

Huff’s statistical scepticism turned him into the ancestor of today’s contrarian trolls, spouting bullshit while claiming to be the straight-talking voice of common sense. It should be a warning to us all.

There is a place in anyone’s cognitive toolkit for healthy scepticism, but that scepticism can all too easily turn into a refusal to look at any evidence at all.

This crisis has reminded us of the lure of partisanship, cynicism and manufactured doubt. But surely it has also demonstrated the power of honest statistics. Statisticians, epidemiologists and other scientists have been producing inspiring work in the footsteps of Doll and Hill. I suggest we set aside How to Lie with Statistics and pay attention.

Carefully gathering the data we need, analysing it openly and truthfully, sharing knowledge and unlocking the puzzles that nature throws at us — this is the only chance we have to defeat the virus and, more broadly, an essential tool for understanding a complex and fascinating world.

Written for and published by the FT Magazine on 10 September 2020.

My new book, "How To Make The World Add Up", is published today in the UK and around the world (except US/Canada).




Nominal Income Targeting and Measurement Issues



Nominal GDP targeting has been advocated in a recent Joint Economic Committee report “Stable Monetary Policy to Connect More Americans to Work”.

The best anchor for monetary policy decisions is nominal income or nominal spending—the amount of money people receive or pay out, which more or less equal out economy-wide. Under an ideal monetary regime, spending should not be too scarce (characterized by low investment and employment), but nor should it be too plentiful (characterized by high and increasing inflation). While this balance may be easier to imagine than to achieve, this report argues that stabilizing general expectations about the level of nominal income or nominal spending in the economy best allows the private sector to value individual goods and services in the context of that anchored expectation, and build long-term contracts with a reasonable degree of certainty. This target could also be understood as steady growth in the money supply, adjusted for the private sector’s ability to circulate that money supply faster or slower.

One challenge to implementation is posed by the relatively large revisions in the growth rate of this variable (and don't get me started on the level). Here's an example from our last recession.

Figure 1: Q/Q nominal GDP growth, SAAR, from various vintages. NBER defined recession dates shaded gray. Source: ALFRED.

How big are the revisions? The BEA provides a detailed description. This table summarizes the results.

The standard deviation of revisions going from the Advance to the Latest estimate is 1.0 percent (annualized); the mean absolute revision is 1.3 percent. Now, the Latest vintage might not be entirely relevant for policy, so let's look at the Advance-to-Third revision: a standard deviation of 0.5 percent (0.6 percent mean absolute). That's through 2017. From the Advance to the Second release, 2020Q2 GDP growth went from -42.1% to -40.5% (log terms).
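To make the revision arithmetic concrete, here is a minimal sketch of how such summary statistics are computed. The vintage numbers below are made up for illustration; only the formulas (annualized log growth, mean absolute revision, standard deviation of revisions) follow the discussion above.

```python
import math
import statistics

def annualized_log_growth(gdp_prev, gdp_curr):
    """Q/Q growth at a seasonally adjusted annual rate, in log terms:
    400 * ln(GDP_t / GDP_{t-1})."""
    return 400 * math.log(gdp_curr / gdp_prev)

def revision_stats(advance, later):
    """Summarize revisions between two vintages of the same growth series
    (e.g., Advance and Third estimates), in percentage points."""
    revisions = [b - a for a, b in zip(advance, later)]
    return {
        "mean_abs_revision": statistics.mean(abs(r) for r in revisions),
        "std_revision": statistics.pstdev(revisions),
    }

# Hypothetical growth-rate vintages (percent, SAAR), for illustration only:
advance = [2.1, -1.5, 3.0, 0.8]
third = [2.6, -2.2, 2.7, 1.5]
print(revision_stats(advance, third))
```

Run against actual ALFRED vintages, this is the calculation behind the 0.5/0.6 percent figures cited above.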

Compare against the personal consumption expenditure deflator, at the monthly — not quarterly — frequency: the mean absolute revision is 0.5 percent going from Advance to Third. The corresponding figure for Core PCE is 0.35 percent. Perhaps this is why the Fed has focused more on price/inflation targets, as in this discussion of makeup strategies:

…variants of so-called makeup strategies, “so called” because they at times require the Committee to deliberately target rates of inflation that deviate from 2 percent on one side so as to make up for times that inflation deviated from 2 percent on the other side. Price-level targeting (PLT) is a useful benchmark among makeup policies but also represents a more significant and perhaps undesirable departure from the flexible inflation-targeting framework compared with other alternatives. “Nearer neighbors” to flexible inflation targeting are more flexible variants of PLT, which include temporary PLT—that is, use of PLT only around ELB episodes to offset persistently low inflation—and average inflation targeting (AIT), including one-sided AIT, which only restores inflation to a 2 percent average when it has been below 2 percent, and AIT that limits the degree of reversal for overshooting and undershooting the inflation target.10

Admittedly, the estimation of the output gap is fraught with much larger (in my opinion) measurement challenges than the trend in nominal GDP, since it compounds the problems of real GDP measurement and potential GDP estimation; this is a point made by Beckworth and Hendrickson (JMCB, 2019). Even use of the unemployment rate, which can be substituted for the output gap in the Taylor principle by way of Okun's Law, encounters a problem. As Aruoba (2008) notes, the unemployment rate is not subject to large and/or biased revisions; however, the estimated natural rate of unemployment does change over time, as estimated by the CBO, the Fed, and others, so there will be revisions to the implied unemployment gap (this point occupies a substantial portion of the JEC report). Partly for this reason, the recently announced modification of the Fed's policy framework stresses shortfalls rather than deviations, as discussed in this post.
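For concreteness, the Okun's Law substitution works as follows in a textbook Taylor-type rule (the 0.5 weights and the Okun coefficient of 2 are standard illustrative values, not figures from the report):

```latex
% Taylor-type rule with output gap y_t:
i_t = r^* + \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\, y_t
% Okun's Law relates the output gap to the unemployment gap:
y_t \approx -2\,(u_t - u^*)
% Substituting:
i_t = r^* + \pi_t + 0.5\,(\pi_t - \pi^*) - 1.0\,(u_t - u^*)
```

The revision problem enters through \(u^*\): while \(u_t\) is measured cleanly, any re-estimate of the natural rate revises the implied gap \(u_t - u^*\) one-for-one.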

One interesting aspect of the debate over nominal GDP targeting relates to growth rates vs. levels. If the target is a growth rate (as in Beckworth and Hendrickson (JMCB, 2019)), there is generally a "fire and forget" approach to setting rates. A target for the level of nominal GDP, by contrast, implies that past errors are not forgotten (McCallum, 2001) (this distinction is not specific to GDP, as we know from the inflation vs. price-level targeting debate). Targeting the level of nominal GDP faces another — perhaps even more problematic — challenge, as suggested by Figure 2.
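The growth-rate vs. level distinction can be illustrated with a small sketch. Assume a hypothetical target path growing 5 log-percent per period from a base of 100, with a 2-point miss in period 1; the numbers are illustrative only.

```python
import math

base = 100.0
target_growth = 5.0  # target growth per period, in log percent

# Period 1 comes in 2 points below target (3 instead of 5).
actual = base * math.exp((target_growth - 2.0) / 100)

# Growth-rate targeting ("fire and forget"): next period's target
# growth is unchanged; the miss is never made up.
growth_rule_target = target_growth

# Level targeting: the target path itself is unchanged, so period 2
# must return to it, making up the 2-point miss.
path_period2 = base * math.exp(2 * target_growth / 100)
level_rule_target = 100 * math.log(path_period2 / actual)

print(growth_rule_target)  # 5.0
print(level_rule_target)   # ~7.0: past errors are not forgotten
```

This also shows why level targeting is more exposed to data revisions: a revision to the measured level of nominal GDP mechanically changes the required makeup growth, even if measured growth rates are unrevised.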

Figure 2: Nominal GDP in billions of current dollars, SAAR, from various vintages. NBER defined recession dates shaded gray. Dashed red line at annual benchmark revisions. Red arrows denote implied revisions to last overlapping observation between two benchmarked series. Source: ALFRED. 

Revisions can be large at benchmark revisions, shown as dashed lines in the figure above. But even nonbenchmark revisions can be large, as in 2009Q3. (Beckworth (2019) suggests using a Survey of Professional Forecasters forecast relative to target, and a level gap, as a means of addressing this issue — I think — insofar as the target can be moved relative to the current vintage.)

None of the foregoing should be construed as a comprehensive case against some form of nominal GDP targeting — after all, Frankel with Chinn (JMCB, 1995) provides some arguments in favor. But it does suggest that the issue of data revisions in the conduct of monetary policy is not inconsequential.

 




Biden’s Big Test: Selecting a White House Chief of Staff



The American Prospect


Vice President Joe Biden was angry. It was 2013, and Ron Klain, his trusted chief of staff, was leaving for the private sector. Biden needed someone dependable to replace him during the second term. But David Plouffe, President Obama’s campaign guru and top political adviser, kept shooting down his picks. First, he vetoed Kevin Sheekey, an adviser to then–New York Mayor Michael Bloomberg, out of fear he’d be too loyal to the financier-oligarch. Biden conceded, and instead suggested Steve Ricchetti.

Ricchetti, a longtime political operative, was at the time the founder and chairman of the powerful lobbying firm Ricchetti, Inc. Though he hadn't personally registered as a lobbyist in years, he gave the marching orders to a team of hired guns working for the most powerful industries in America, with extensive ties to Big Pharma.

Plouffe didn’t budge. “The Ricchetti pick also was killed,” Glenn Thrush reported for Politico Magazine at the time, “in part because Plouffe said his background violated the president’s no-lobbyists pledge—but mostly because Ricchetti was deemed to be too chummy with the Clintons and too much of a ‘free agent’ who would look after Biden’s interests first.”

This enraged Biden. Who was Plouffe, a man 24 years his junior, to tell him what to do? “He [Biden] appealed directly to Obama, who initially deferred to Plouffe’s judgment,” Thrush reports. “Biden pressed Obama harder, arguing that he ‘needed to have his own people to do this job,’ as one aide briefed on the interaction put it. Obama finally assented—with the caveat that Biden had ‘to keep Steve from coloring outside of the lines.’”


That fight was over seven years ago. Ricchetti kept the job as Biden’s chief of staff throughout the second term. In 2014, Plouffe, who was so wary of bad optics on corporate power, revolved out to lobby for Uber, and now represents Mark Zuckerberg. Today, Ricchetti co-chairs Biden’s presidential campaign, and is well positioned to resume his role as Biden’s chief of staff in the White House, should his boss vanquish Donald Trump in November. If this happens, a man who has always argued against Biden’s best instincts, and a lifelong enemy of the progressive movement, will be the chief gatekeeper to the president’s desk.

The White House chief of staff is the single most powerful non-Senate-confirmed job in the federal executive branch. It typically involves managing the president’s schedule, deciding which of his advisers get face time with him, and giving marching orders for any day-to-day work that the president doesn’t personally oversee.

Progressives need only to hear the name “Rahm Emanuel” to remember the stakes of this job. As Obama’s first chief of staff, Emanuel kneecapped efforts to even propose a federal stimulus that matched the scale of the Great Recession, and constantly pushed the administration to make up the difference by hurting the most vulnerable. (Anyone remember “Fuck the UAW”?)

Ricchetti may be subtler and smarter than “Rahmbo,” but he would be no less of a threat if placed in charge of the Biden White House. More than just cussing out organized labor, Ricchetti’s career highlight was deliberately undermining it: He led Bill Clinton’s effort to pass permanent normal trade relations (PNTR) with China in 2000, which economists predicted at the time would cause massive blue-collar job loss. Research since has concluded that PNTR directly led to the manufacturing collapse, and that the affected (largely union) workers were unable to re-skill in the way traditional trade theory suggests they would.

At the time, only 28 percent of the public supported normalized trade with China, while a staggering 56 percent opposed it. A separate poll found 79 percent of Americans felt the country should only normalize trade with China after it improved its human rights record and labor standards. None of this dissuaded Ricchetti from pushing PNTR through Congress by any means necessary.

As Biden rightly calls out Donald Trump’s contempt for democracy on the campaign trail, he should consider what it would mean to put someone so dismissive of the popular will in charge of his own White House. Moreover, his “Build Back Better” economic agenda hinges on revitalizing American manufacturing. Why would he trust the man who helped crush manufacturing in the first place to accomplish that?


And then there are Ricchetti’s ties to the most hated industry in America, Big Pharma. Biden has pivoted to an aggressive plan for lowering prescription drug prices, a problem on which GOP voters, Nancy Pelosi, and Alexandria Ocasio-Cortez are all united. He has vowed to repeal the law that prohibits negotiation with drug companies under Medicare, limit launch prices that set a high baseline for prescription drugs, confine price increases to the rate of inflation, and accelerate the development of generics. What message would it send about the seriousness of this plan for Biden’s right-hand man to have personally represented Novartis, Eli Lilly, and Sanofi?

Just last night, news broke that Ricchetti, Inc. has signed on to lobby on behalf of two pharmaceutical companies, Horizon Pharma and GlaxoSmithKline. This is on top of the firm’s longstanding relationship with Japanese pharmaceutical giant Eisai, whom Steve Ricchetti’s brother Jeff, co-founder of the lobbying firm, personally represents. Big Pharma clearly knows that their route into sabotaging Biden’s prescription-drug agenda runs through Steve Ricchetti. The right’s propaganda machine would have a field day with such a glaring conflict of interest. If Biden grants Ricchetti a senior job, he’d give Tucker Carlson a free attack line grounded in actually legitimate complaints.

Given the clients that both Ricchettis are willing to take on, it’s perhaps unsurprising that Steve has been the Biden campaign’s ace in the hole when it comes to high-dollar fundraising. Early on, he sold private conversations with himself as an incentive to wealthy donors, and has given backroom pitches to Wall Street executives. When Biden’s campaign was flailing in January, Ricchetti was personally “imploring bundlers to gather as much money as possible,” according to The New York Times.

Cleverly, Ricchetti has pushed Biden toward opposing support from super PACs, even as he cozies up to the wealthy donors who make super PACs so odious and gets them to donate to Biden directly. By spurning the best-known means of big-money corruption, but not big-money corruption per se, Ricchetti can create an appearance of concern for the public interest without meaningfully changing his tactics.

Ricchetti’s comfort around the ultra-wealthy is probably one of his biggest assets to the Biden campaign. Journalist George Packer writes that Biden used to denigrate his fundraising aide Jeff Connaughton because “Biden hated fund-raising, the drudgery and compromises it entailed. He resented any demands placed on him by the people who helped him raise money and the people who wrote checks, as if he couldn’t stand owing them.” (For his part, Connaughton went on to author the angry confessional The Payoff: Why Wall Street Always Wins, where he writes, “I came to D.C. a Democrat and left a plutocrat.”)

Biden bragged for decades about being one of the poorest men in Congress. His disdain for D.C. glad-handing meant that from day one, onlookers predicted Biden would struggle with funding a national presidential campaign—and sure enough, Biden for President, Inc., was running on fumes in January when Ricchetti told bundlers to dig deep.

Left to his own instincts, Biden seems likelier to chat with the shrimp cocktail waiter than the banker at a fancy fundraising event. Biden may have been a centrist standard-bearer over the decades, but he was never an ideological one—he just knows which way the wind is blowing.

That’s why the minders who shape the president’s thinking—and most especially, the chief of staff who controls their access to him—are so immensely powerful. They literally control which policies and ideas the president gets to see. Imagine how differently history might have gone if Obama had known Christina Romer’s estimate that the country needed $1.8 trillion of stimulus in 2009—information that his future chief of staff Rahm Emanuel worked to keep from reaching him.

We now face an economic downturn that dwarfs the Great Recession, and it’s just one of a dozen interlinked crises. True, the Biden campaign is claiming that they’ve learned the lessons of history about a too-small stimulus. But it’s terrifying to think that a man who has spun through the revolving door between the White House and K Street three separate times might guide Biden through fights with his former benefactors in Big Pharma over a coronavirus vaccine, with Big Oil over a climate plan, and with corporate boardrooms around the world over rebuilding domestic industry.

Progressives may be able to persuade a President Biden on many issues if they can just sit down to talk with him, but they’ll need to get through the door first. That won’t be doable if Ricchetti is the doorman. Biden should do himself a favor and keep him out of the loop.

The post Biden’s Big Test: Selecting a White House Chief of Staff appeared first on Center for Economic and Policy Research.


