Tuesday, April 22, 2014

Earth Day: A Baptists and Bootleggers Story

Earth Day was first celebrated on April 22, 1970. It is now observed in 192 countries, and is coordinated by the Earth Day Network. Bruce Yandle offers a hard-eyed look at how the original Earth Day affected U.S. environmental legislation in "How Earth Day Triggered Environmental Rent Seeking," which appeared in the Summer 2013 issue of the Independent Review.

One of Yandle's signature insights is the idea of a "Baptists-and-bootleggers" coalition. Who favored prohibition of alcohol sales? Baptists, on moral grounds, and bootleggers, because government prohibition would limit competition and boost their profits. He makes a strong argument that Earth Day led to a similar environmentalists-and-industrialists coalition, in which environmentalists pushed for laws to reduce pollution, and industrialists pushed for anti-pollution laws written in ways that would hobble their competitors.

Before the passage of the Clean Air Act in 1970 and the Clean Water Act in 1972, pollution was often restricted by common law cases brought through the courts. From the point of view of incumbent business, these court cases were an unpleasant way to deal with environmental problems. Court decisions could be inconsistent, and sometimes harshly punitive. Moreover, common law court decisions offered no way to inhibit competition by raising the costs of new entrants and rival producers. Thus, many large companies saw in the idea of federal environmental laws an opportunity to limit competition.

In some ways, the use of anti-pollution laws to limit competition was pretty obvious. For example, the new environmental laws commonly grandfathered in existing plants, but required new plants to meet much stricter standards.

In other ways, the methods of restricting competition were less obvious. Consider that there are essentially three ways to set environmental standards. One is to use economic incentives like pollution taxes and tradeable pollution permits. A second is to set performance standards for how much pollution can be emitted, but to leave firms the flexibility to decide how to meet the standards in the most cost-effective way. The third is a technology standard, which requires that every firm use the same method for reducing pollution. When a technology standard is imposed, firms that could have reduced pollution more cheaply are not allowed to gain a competitive advantage from doing so--because all must follow the prescribed method.
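
To see what is at stake in that choice, here is a minimal numerical sketch (my own illustration, not from Yandle's paper) of two hypothetical firms whose abatement costs differ:

```python
# Two hypothetical firms must together cut 100 tons of emissions.
# Firm A can abate at $10/ton; Firm B only at $40/ton.
# All numbers are invented for illustration.
cost_a, cost_b = 10, 40   # marginal abatement cost, $/ton
total_required = 100      # tons of abatement required overall

# Technology standard: both firms install the same prescribed control,
# so each abates half the total, regardless of cost.
tech_standard = (total_required / 2) * cost_a + (total_required / 2) * cost_b

# Tradeable permits: abatement migrates to the low-cost firm, which
# sells its unused permits to the high-cost firm.
permit_trading = total_required * cost_a

print(f"Technology standard: ${tech_standard:,.0f}")   # $2,500
print(f"Tradeable permits:   ${permit_trading:,.0f}")  # $1,000
```

The same cleanup costs 2.5 times as much under the uniform standard, because abatement is forced onto the high-cost firm. That is also the bootlegger appeal: the uniform standard denies the low-cost firm any competitive advantage from its cheaper abatement.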

For several decades after 1970, one could at least argue that most environmental indicators were moving in the right direction. But after a review of the more limited progress against air and water pollution in the last couple of decades, Yandle argues, "These data strongly suggest we have hit the cleanup limits of a top-down, command-and-control, technology-based pollution-control system. We know we can do better, and so do EPA managers."

Thus, the environmental authorities have been pushing away from technology-based standards, and toward offering flexibility in meeting environmental goals. In the case of water pollution, Yandle reports: "In 1991, the EPA began to push hard to develop watershed-based nutrient trading communities where publicly owned treatment works and other dischargers are allowed to exchange discharge offsets. In some cases, farmers and land developers are included in the larger trading communities. When trades take place, the incremental cost of reducing pollution falls dramatically."

In the case of air pollution, flexible pollution permit trading arrangements were used to reduce lead emissions in the 1980s, and sulfur dioxide emissions since the 1990s. Yandle writes: "For the nation, as of 2011 there are 242 nonattainment counties for ozone, 121 for PM2.5. But get this, there are just 9 nonattainment counties, which are those that have not achieved EPA National Ambient Air Quality Standards, for sulfur dioxide, the only criteria pollutant managed by markets. Indeed, since 1990, sulfur dioxide emissions have been reduced 65 percent at an EPA estimated cost of from $1.17 to $2 billion. If command-and-control had been used instead of markets, the estimated cost would have ranged from $7.5 to $11.5 billion ..."

For those interested in learning more about these flexible systems for reducing pollution with tradeable permits, the Winter 2013 issue of the Journal of Economic Perspectives had a symposium on the subject. It starts with an overview paper by Lawrence H. Goulder, "Markets for Pollution Allowances: What Are the (New) Lessons?" There are then three papers on specific applications. Richard Schmalensee and Robert N. Stavins discuss "The SO2 Allowance Trading System: The Ironic History of a Grand Policy Experiment"; Richard G. Newell, William A. Pizer, and Daniel Raimi tackle "Carbon Markets 15 Years after Kyoto: Lessons Learned, New Challenges"; and Karen Fisher-Vanden and Sheila Olmstead explore "Moving Pollution Trading from Air to Water: Potential, Problems, and Prognosis."
As always, all papers in the JEP back to the first issue in 1987 are freely available, courtesy of the American Economic Association. (Full disclosure: I've been Managing Editor of JEP since 1987, too.)

Is there some reason to think that the environmentalists and the industrialists will be willing to move away from technology-based and performance-based environmental standards and embrace a more flexible incentive-based approach? Yandle offers the following argument: "At some point, the environmental Baptists will see that they are losing ground. The system they have supported no longer delivers the goods they desire. As we have seen, major elements of environmental progress are dead in the water. And the bootleggers? At some point, global competition becomes so severe that regulatory rent seeking no longer pays. For durable regulation to survive, bootleggers and Baptists must be singing off the same page. For now, the music has stopped."

Monday, April 21, 2014

Behind the Long-Term Rise in U.S. Health Care Costs

There is ongoing controversy over where U.S. health care costs are headed next. Has the rate of growth slowed, and if so, when and why? Did it just slow briefly in the aftermath of the Great Recession and is now speeding up again? Health care expenditures in the U.S. economy were 5% of GDP in 1960, and have risen steadily to 17% of GDP. Of course, if health care is getting a bigger slice of the GDP pie, then other desirable areas of spending, both for households and for government, must be getting a smaller slice. Indeed, rising health care costs are by far the largest factor driving projections of expanding federal budget deficits in the long run.

Louise Sheiner offers some useful "Perspectives on Health Care Spending Growth"  in a paper recently written for the Engelberg Center on Health Care Reform at the Brookings Institution. She makes the point that even as health care costs have been rising, public and private health care insurance has been expanding so that Americans have been paying a lower share of those costs out of pocket.

Indeed, even as health care costs have risen as a share of GDP, Americans have been paying a lower share of those expenses out-of-pocket, with the overall balance that out-of-pocket health care costs as a share of GDP haven't risen for several decades. To put it another way: Back in 1960, health care spending was 5% of GDP, and Americans paid about half of that--2.5% of GDP--in the form of out-of-pocket costs. Now health care spending is 17% of GDP, but only 2% of GDP is being paid in out-of-pocket health care costs--with public and private insurance paying for the rest.
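
The arithmetic is worth making explicit (a back-of-the-envelope sketch using the round numbers above, not Sheiner's own calculations):

```python
# Back-of-the-envelope with the round numbers from the text above.
health_1960, oop_frac_1960 = 0.05, 0.50  # health = 5% of GDP; about half paid out of pocket
health_now, oop_gdp_now = 0.17, 0.02     # health = 17% of GDP; 2% of GDP paid out of pocket

print(health_1960 * oop_frac_1960)  # 0.025 -> out-of-pocket was 2.5% of GDP in 1960
print(oop_gdp_now / health_now)     # ~0.118 -> only ~12% of health spending is out of pocket now
```

In other words, while out-of-pocket spending as a share of GDP has barely moved, the out-of-pocket share of health spending has fallen from about one-half to about one-eighth.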

As Sheiner writes: "As [health care] spending rises as a share of income, two things happen: insurance contracts change to insulate people from the risk of large expenses if they become ill, and public programs expand to help maintain access to health services for lower income. Both of these changes fuel increased adoption of health technology. ... It is clear that it is the combination of technological innovation and a continued willingness-to-pay for that technology that has allowed health spending to rise faster than income for so long. For example, without the dramatic decline in the share of health expenditures paid out-of-pocket, many Americans would simply not have been able to afford the new technologies when they became ill. It is inevitable that this willingness-to-pay will diminish at some point, but we have very little ability to predict when that will be."

What does this mean for the future path of health care spending? Sheiner analyzes patterns of GDP growth over time compared with health care costs. Like other analysts, she observes that the rise in health care costs started slowing down about a decade ago--that is, well before the Patient Protection and Affordable Care Act was enacted in 2010. She cautions against reading too much into this slower rise of health care costs: "[T]he slowdown in health spending growth observed since 2002 is largely the result of the two recessions that occurred in the last decade ... [I]t would be hard to argue that a few years of slower growth should be viewed as a turning point, particularly given that the recent slowdown occurred during unusual times: a decade of very slow economic growth and very low inflation (which made it harder for firms to pass on health insurance costs to their employees and may have required larger adjustments than usual), a major health reform that was accompanied by much confusion and fear, and a huge runup in budget deficits that intensified attention on the need for future spending cuts."

Friday, April 18, 2014

When Technology Spreads Slowly

One of the most important issues in thinking about the economic growth potential for the U.S. economy is this question: Has the U.S. economy already seen most of the economic growth that will result from the innovations in information and communication technology, including the web, the cloud, robotics, and so on? Or is the U.S. economy perhaps only a fraction of the way--perhaps even less than halfway--through its adaptation to the potential for productivity gains from these technologies, and thus has stronger prospects for future growth?

With these kinds of questions, hindsight is clearer than foresight. And among economic historians, it is a standard insight that major new technologies can take decades to diffuse through the economy. Rodolfo E. Manuelli and Ananth Seshadri offer an example in "Frictionless Technology Diffusion: The Case of Tractors," which appears in the April 2014 issue of the American Economic Review. (The article is not freely available on-line, but many readers will have access through library subscriptions. Full disclosure: the AER is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor.) They point out that in simple economic models, a firm just chooses a technology--and can choose a new technology at any time it wants. But in the real world, new technologies often take time to diffuse. They note that surveys of dozens of new technologies find it often takes 15-30 years for a new technology to go from 10% to 90% of its potential market. But some major inventions take longer.

Here's how the tractor slowly displaced horses and mules in the U.S. agricultural sector from 1910 to 1960. Horses and mules, shown by the black dashed line and measured on the right-hand axis, declined from about 26 million in 1920 to about 3 million by 1960. Conversely, the number of tractors, shown  by the blue solid line, rose from essentially zero in 1910 to 4.5 million by 1960.

What factors might explain why it would take a half-century for tractors to spread? Lots of answers have been proposed: farmers needed time and experience to learn about the new technology; older farmers preferred not to learn, but gradually died off; some farmers didn't have large enough farms to make tractors economically viable; some farmers didn't have the financial ability to invest in a tractor; there was a lack of information about the benefits of tractors; established interests like the horse and mule industry pushed back against tractors where possible. Manuelli and Seshadri offer another explanation: during much of this period the quality of tractors was continually improving, and during the earlier part of it (notably the Great Depression years) wages for farm workers were not rising by much. Thus, it made sense for many farmers to avoid buying the early generations of tractors. Let someone else work out the kinks! But as the quality of tractors improved and the wages of farmworkers rose, investing in a tractor began to look like a better and better deal.
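
Here is a stylized sketch of that timing logic (all parameters invented for illustration; this captures the flavor of the mechanism, not Manuelli and Seshadri's calibrated model). A farmer buys a tractor in the first year the labor it displaces is worth more than the tractor's annualized cost:

```python
# Stylized adoption timing: buy a tractor in the first year the labor
# it displaces is worth more than its annualized cost. Tractor quality
# (labor displaced per machine) improves over time, and farm wages
# drift upward, so later vintages clear the hurdle.
ANNUAL_TRACTOR_COST = 600  # $/year, annualized purchase plus upkeep (invented)

for year in range(1910, 1961, 10):
    t = year - 1910
    workers_displaced = 1.0 + 0.05 * t      # tractor quality rises with each vintage
    wage = 300 * 1.015 ** t                 # farm wage, slow upward drift, $/year
    labor_saved = workers_displaced * wage  # what the tractor is worth per year
    verdict = "buy" if labor_saved > ANNUAL_TRACTOR_COST else "wait"
    print(f"{year}: tractor worth ${labor_saved:,.0f}/yr -> {verdict}")
```

Because quality and wages both drift upward, the "buy" verdict eventually arrives for everyone, but at different dates for farms of different sizes and in regions with different wages--spreading adoption over decades without invoking any friction at all.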

My own personal favorite example of the slow diffusion of technology was laid out by Paul David in "Computer and Dynamo: The Modern Productivity Paradox in a Not-too-Distant Mirror," which appeared in a 1991 OECD book called Technology and Productivity: The Challenge for Economic Policy. At the time David was writing, the U.S. was still mired in the productivity slowdown that had started in the 1970s. However, there had clearly been a lot of computerization during that time, leading to a much-repeated 1987 quip from Robert Solow: "You can see the computer age everywhere but in the productivity statistics." David harked back to the historical example of using dynamos to produce electricity, explaining that this innovation was around for decades, sometimes at what seemed to be very large scale, before it showed up in productivity gains.

Dynamos had been producing electricity that was used for illumination since the 1870s. This technology was well-known enough that the Paris Exhibition of 1900 included many examples of electrical machinery, run from the power generated by dynamos that were 40 feet tall. But the Paris Exhibition also used electric light on a more widespread basis in public spaces than ever before. David wrote: "Although Europeans already knew of electric lighting for decades, never before Paris 1900 had it been used to illuminate a whole city--in such a way that outdoor festivals could continue into the night."

However, despite the demonstrated technological capabilities of generating and using electricity, and what seemed like a strong array of technological and scientific breakthroughs, productivity growth in the U.S. and UK economies was actually relatively slow for about two decades after 1890. It was not until the 1920s that productivity growth based on electrification really took off. In retrospect, the reasons why are clear enough. Although the technology was already well-known, it took time for electrification to become widespread. Here's one figure showing diffusion of electrification in the household sector, and another showing the industrial sector. You could illuminate Paris with electric light in 1900, but most places in the US didn't have access to electricity then.


But it wasn't just the spread of electricity. It was also the changes that industry and households needed to make to take advantage of it. Factories had long run on a "group drive" principle, where a single source of power (like water power or a steam engine) powered everything through a series of gears. A "group drive" arrangement set constraints on the location of the factory and the organization of the machines. Electrification made "unit drive" possible, where factories had much more freedom to choose their location and set up their machines, but it took time and learning to figure out the best ways of doing this. More broadly, electricity changed everything from lighting to fire safety in factories, along with changes in the ability to develop new chemical and heating processes, and much more. For US households, it took time--really up into the 1920s--until they had both a source of electricity and a supply of new household appliances like vacuum cleaners, radios, washing machines, and dishwashers, and all the changes of lifestyle that came with reliable indoor electric light.

In the mid-1990s, several years after Paul David's essay was published, a US productivity resurgence rooted in making and using information and communications technology did occur. It didn't happen on the time and schedule that many had been expecting. But as David wrote, many people "lose a proper sense of the complexity and historical contingency of the processes involved in technological change and the entanglement of the latter with economic, social, political, and legal transformations. There is no automaticity in the implementation of a new technological paradigm, such as that which we presently discern is emerging from the confluence of advances in computer and communications technologies."

In my own mind, examples like the slow spread of the tractor and electrification suggest the possibility that we may be only a moderate portion of the way through the social gains from the information and communications technology revolution. One of the reasons that tractors spread slowly was that the capabilities of tractors were steadily rising, which made them more attractive over time. In a much more extreme way, the power of information and computing technology continues to rise, which keeps opening new horizons of potential uses and applications. One of the reasons that electrification spread slowly is that it took time for producers to rethink and revise their processes in a fundamental way, time for the spread and power of electricity to increase, and time for the invention and spread of household appliances related to electricity. In a similar way, my sense is that many firms are still very much in the process of rethinking and revising their processes in response to the developments in information and communications technology, the capabilities of that technology (like faster wireless speeds and computational power) continue to evolve, and the range of new household products using that technology (in areas from automated homes to entertainment to driverless cars and robotics) continues to expand.

Ultimately, of course, many of us are a little schizophrenic about the future of technological change. Some days we worry that technological change will be too slow, and that as a result the U.S. economy is headed for a future of slow growth and a stagnant standard of living. Other days we worry that technological change will be so rapid as to lead to massive disruption of jobs and workplaces across the economy. It is unlikely that both of these fears will come true! On my optimistic days, I hope that a flexible society and economy can find ways to adapt to an ongoing pattern of robust technological change and economic growth.

Thursday, April 17, 2014

What Happened to the Great Moderation?

In the 1990s and into the early years of the 2000s, it was common to hear economists speak of a "Great Moderation" in the U.S. economy. After the economic convulsions of the 1970s and early 1980s, in particular, the path of the U.S. economy seemed to have smoothed. To be sure, there was an 8-month recession in 1990-91, and another 8-month recession in 2001. But both recessions were fairly mild: unemployment topped out at 7.8% in the aftermath of the 1990-91 recession, and reached only 6.3% in the aftermath of the 2001 recession. And recessions seemed scarcer: the average length of an economic upswing since World War II has been 58 months, but the upswing before the 1990-91 recession lasted 92 months, and the upswing before the 2001 recession lasted 120 months.

Of course, after 2007 when the Great Recession had crashed the party, talk of a Great Moderation seemed disconnected from reality. Jason Furman, chair of President Obama's Council of Economic Advisers, has taken on the question of "Whatever Happened to the Great Moderation?" in an April 10 speech.

Furman makes the interesting point that even now, including the Great Recession and its aftereffects in the data, the level of short-term volatility in economic statistics like quarterly GDP or monthly job growth seems to be lower than it was from the 1950s to the 1970s, not only in the United States but also in other high-income countries. (Of course, "less volatile" doesn't mean "healthy growth rate.")

Peering into the inner workings of the US economy, Steven J. Davis and James A. Kahn provided an overview of the evidence in the Fall 2008 issue of the Journal of Economic Perspectives in "Interpreting the Great Moderation: Changes in the Volatility of Economic Activity at the Macro and Micro Levels." (The article, like all articles in JEP, is freely available on-line courtesy of the American Economic Association. Full disclosure: I've been Managing Editor of the journal since its inception in 1987.) They find that the drop in short-term volatility of GDP can largely be traced to a drop in the volatility of production of durable goods. The volatility of production of nondurable goods falls only a little, and production of services was never that volatile to begin with. The volatility of inventories declined substantially, too.

Furman points out an intriguing pattern here: "From 1960 to 1984, inventories were quite volatile, and were also procyclical, meaning that when sales increased, inventories also increased, further contributing to the volatility of production. During the post-1984 Great Moderation period, inventory investment itself became much less volatile, and the previous relationship between inventories and sales reversed, so that the two became negatively correlated. Focusing specifically on durable goods, the change in the covariance between inventories and sales accounts for nearly half of the decline in the variance in durable goods output. However, including the Great Recession, it appears that the relationship between output, sales and inventories partially reverted to the pre-Great Moderation pattern. The covariance of inventories and sales turned positive again, suggesting that improved inventory management was not enough to cushion the massive blow of the Great Recession, and in fact exacerbated it." Furman is careful to note that the argument that inventories have become procyclical is based on only a few years of data.  But if the pattern continues, it will need exploring and explaining.
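
The accounting behind Furman's covariance claim is straightforward: production equals sales plus inventory investment, so var(output) = var(sales) + var(inventories) + 2*cov(sales, inventories), and the sign of the covariance term either amplifies or dampens output volatility. A quick simulated check (illustrative numbers only, not Furman's data):

```python
import numpy as np

rng = np.random.default_rng(0)
sales = rng.normal(0.0, 1.0, 100_000)  # sales shocks
noise = rng.normal(0.0, 0.5, 100_000)  # inventory shocks unrelated to sales

# Production = sales + inventory investment, so
# var(output) = var(sales) + var(inventories) + 2*cov(sales, inventories).
for label, beta in [("procyclical (pre-1984)", 0.5), ("countercyclical (post-1984)", -0.5)]:
    inventories = beta * sales + noise
    output = sales + inventories
    print(f"{label}: var(output) = {np.var(output):.2f}")
# Roughly 2.5 when inventories move with sales, 0.5 when they buffer sales.
```

The same volatility of sales produces output that is five times more variable when inventories move with sales than when they lean against them, which is why a flip in the sign of that covariance matters so much.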

Another pattern here is that consumption patterns have continued to show less short-term volatility, even through the Great Recession. Furman writes: "Disaggregating the GDP data, the reduced volatility of consumption is one of the major sources of the Great Moderation—and this reduced volatility has continued to hold up during and after the Great Recession, especially in consumer durables. The continued stability in consumption stands in contrast to other components of GDP like business fixed investment, which became less volatile during the initial Great Moderation but has since at least partially reverted to its earlier volatility."

Improvements in macroeconomic policy offer another potential explanation for the Great Moderation: that is, monetary policy was less disruptive after the mid-1980s than it had been in, say, the 1970s. The use of fiscal policy to stimulate the economy during downturns arguably became more purposeful and effective. Indeed, as Furman points out, one can make a case that monetary and fiscal policies helped to prevent the Great Recession from being even greater (citations omitted here and throughout):

"Improvements in monetary and fiscal policy have likely contributed to the patterns in the high-frequency data originally identified as the Great Moderation, although one could debate the share of the credit they deserve. I believe policy steps have also played a critical role at lower frequencies as well, with the best example being the Great Recession itself, which in many ways started off looking like it could be as bad or worse than the Great Depression. To appreciate this point, consider that the plunge in stock prices in late 2008 proved similar to what occurred in late 1929, but was compounded by sharper home price declines, ultimately leading to a drop in overall household wealth that was substantially greater than the loss in wealth at the outset of the Great Recession. . . .Moreover, Alan Greenspan (2013) has argued that short-term credit markets froze more severely in 2008 than in 1929, and to find a comparable episode in this regard one has to go back to the panic of 1907. However, in large part because of an aggressive policy response, the unemployment rate increased 5 percentage points, compared to a more than 20 percentage point increase in the Great Depression from 1929 to 1934. And real GDP per working age population returned to its pre-recession peak more quickly in the United States than in other countries that also experienced systemic crises in 2007-08."

The pattern that emerges from Furman's discussion is that the Great Moderation was quite real as measured by smaller short-term fluctuations in GDP, employment, consumption, production of durable goods, and inventories. Even more surprisingly, many of these factors (although not inventories) have continued to show lower short-term volatility in the aftermath of the Great Recession. But of course, this lower level of quarter-to-quarter or month-to-month economic fluctuation did not protect the economy from the enormous blow of the Great Recession, which lasted 18 months, spiked the unemployment rate from under 5% in mid-2007 to 10% in October 2009, and was followed by years of frustratingly sluggish recovery (without a lot of short-term volatility).

One possible interpretation here is that the Great Moderation is real, and the Great Recession was a sort of perfect storm, best understood as a one-off divergence from the long-run trend. Another possible interpretation is that when short-term volatility is lower and recessions become milder and less common, firms and households become less wary of risk and more willing to take chances--which in turn can create the underlying conditions for a deeper recession. And yet another interpretation is that while the old vulnerabilities that led to the economic volatility of smokestack industries back in the 1950s and 1960s have declined, the U.S. and world economies now face new vulnerabilities due to changes in technology, globalization, and the financial sector. In this view, the Great Recession was only a first foretaste of the kinds of disruptive interactions that can occur in this new economic configuration.


Wednesday, April 16, 2014

Demand for Sand

These are boom times for the sand industry--a mixed blessing that brings high prices and some real environmental risks. The Global Environmental Alert Service of the United Nations Environment Programme tells some of the story in a March 2014 report, "Sand, rarer than one thinks." As the report notes (citations omitted for readability): "Globally, between 47 and 59 billion tonnes of material is mined every year, of which sand and gravel, hereafter known as aggregates, account for both the largest share (from 68% to 85%) and the fastest extraction increase ..."

To get a sense of the volume here, consider this comparison: "A conservative estimate for the world consumption of aggregates exceeds 40 billion tonnes a year. This is twice the yearly amount of sediment carried by all of the rivers of the world, making humankind the largest of the planet’s transforming agent with respect to aggregates ..." Or to look at it another way, one major use of aggregates like sand and gravel is for concrete. "Thus, the world’s use of aggregates for concrete can be estimated at 25.9 billion to 29.6 billion tonnes a year for 2012 alone. This represents enough concrete to build a wall 27 metres high by 27 metres wide around the equator." Sand and gravel are also used in land reclamation, shoreline development, road embankments, and asphalt, and by industries including glass, electronics, and aeronautics.

Dredging sand and gravel from oceans and rivers causes environmental disruption, which can in some cases become severe, leading to problems with erosion, greater vulnerability to storm surges, and destruction of habitat for plants and animals. "Lake Poyang, the largest freshwater lake in China, is a distinctive site for biodiversity of international importance, including a Ramsar Wetland. It is also the largest source of sand in China and, with a conservative estimate of 236 million cubic metres a year of sand extraction, may be the largest sand extraction site in the world. ... Sand mining has led to deepening and widening of the Lake Poyang channel and an increase in water discharge into the Yangtze River. This may have influenced the lowering of the lake’s water levels, which reached a historically low level in 2008 ..." (The Ramsar Convention is the nickname for the Convention on Wetlands of International Importance, an intergovernmental treaty for the protection of key wetlands.) In general, economic growth in China has been one of the major reasons for the expansion of sand and gravel mining in the last decade.

The changes can go even further: "In some extreme cases, the mining of marine aggregates has changed international boundaries, such as through the disappearance of sand islands in Indonesia."

The qualities of sand and gravel matter for their eventual use. For example, "If the sodium is not removed from marine aggregate, a structure built with it might collapse after few decades due to corrosion of its metal structures. Most sand from deserts cannot be used for concrete and land reclaiming, as the wind erosion process forms round grains that do not bind well."

With a combination of research and development into alternative materials, along with different methods of landfill and construction, the use of sand and gravel could be reduced. Possible alternatives for various uses include quarry dust, incinerator ash, recycled concrete and glass, and perhaps desert sand, if ways can be found to use it.

According to data from the U.S. Geological Survey, the U.S. economy used about 46 million tons of sand and gravel for industrial purposes in 2012, nearly double the amount in 2003. Over the same period, the price of sand and gravel for industrial use rose from $18.30/ton to $52.80/ton. This kind of sand has a high silicon dioxide content, and a large portion of the run-up in demand comes from hydraulic fracturing, which now consumes about 62% of U.S. industrial sand.

Use of sand and gravel for construction purposes was much greater in the U.S. economy: about 842 million tons in 2012. However, this was down from about 1,200 million tons per year during the housing and construction boom in the years leading up to the Great Recession. The USGS reports: "It is estimated that about 44% of construction sand and gravel was used as concrete aggregates; 25% for road base and coverings and road stabilization; 13% as asphaltic concrete aggregates and other bituminous mixtures; 12% as construction fill; 1% each for concrete products, such as blocks, bricks, and pipes; plaster and gunite sands; and snow and ice control; and the remaining 3% for filtration, golf courses, railroad ballast, roofing granules, and other miscellaneous uses."

With all due apologies to the good people and productive firms working in this industry, it's a little difficult for me to imagine a more boring product than sand and gravel. As a first step toward getting out of my ivory tower and getting over this prejudice, I close here with some comments from a 1999 report by the U.S. Geological Survey, "Natural Aggregates—Foundation of America’s Future."

"Natural aggregates, which consist of crushed stone and sand and gravel, are among the most abundant natural resources and a major basic raw material used by construction, agriculture, and industries employing complex chemical and metallurgical processes. Despite the low value of the basic products, natural aggregates are a major contributor to and an indicator of the economic well-being of the Nation. Aggregates have an amazing variety of uses. Imagine our lives without roads, bridges, streets, bricks, concrete, wallboard, and roofing tiles or without paint, glass, plastics, and medicine. Every small town or big city and every road connecting them were built and are maintained with aggregates. More than 90 percent of asphalt pavements and 80 percent of concrete are aggregates. Paint, paper, plastics, and glass also require sand, gravel, or crushed stone as a constituent. When ground into powder, limestone is used as an important mineral supplement in agriculture, medicine, and household products. ... On the basis of either weight or volume, aggregates accounted for more than two-thirds of about 3.3 billion metric tons of nonfuel minerals produced in the United States in 1996."

Tuesday, April 15, 2014

When Government Pre-Fills Income Tax Returns

As Americans hit that annual April 15 deadline for filing income tax returns, they may wish to contemplate how it's done in Denmark. Since 2008, in Denmark the government sends you a tax assessment notice: that is, either the refund you can receive or the amount you owe. It includes an on-line link to a website where you can look to see how the government calculated your taxes. If the underlying information about your financial situation is incorrect, you remain responsible for correcting it. But if you are OK with the calculation, as about 80% of Danish taxpayers are, you send a confirmation note, and either send off a check or wait to receive one.

This is called a "pre-filled" tax return. As discussed in the OECD report Tax Administration 2013: Comparative Information on OECD and Other Advanced and Emerging Economies: "One of the more significant developments in tax return process design and the use of technology by revenue bodies over the last decade or so concerns the emergence of systems of pre-filled tax returns for the PIT [personal income tax]." After all, the governments of most high-income countries already have data from employers on wages paid and taxes withheld, as well as data from financial institutions on interest paid. For a considerable number of taxpayers, that's pretty much all the third-party information that's needed to calculate their taxes. The OECD reports:

"Seven revenue bodies (i.e. Chile, Denmark, Finland, Malta, New Zealand, Norway, and Sweden) provide a capability that is able to generate at year-end a fully completed tax return (or its equivalent) in electronic and/or paper form for the majority of taxpayers required to file tax returns while three bodies (i.e. Singapore, South Africa, Spain, and Turkey) achieved this outcome in 2011 for between 30-50% of their personal taxpayers. [And yes, I count four countries  in this category, not three, but so it goes.] In addition to the countries mentioned, substantial use of pre-filling to partially complete tax returns was reported by seven other revenue bodies -- Australia, Estonia, France, Hong Kong, Iceland, Italy, Lithuania, and Portugal. [And yes, I count eight countries in this category, not seven, but so it goes.] Overall, almost half of surveyed revenue bodies reported some use of  prefilling ..."

For the United States, the OECD report notes that in 2011, zero percent of returns were pre-filled. Could pre-filling work in the U.S.? Austan Goolsbee provided a detailed proposal for how it might in a July 2006 paper, "The Simple Return: Reducing America's Tax Burden Through Return-Free Filing." He wrote:

"Around two-thirds of taxpayers take only the standard deduction and do not itemize. Frequently, all of their income is solely from wages from one employer and interest income from one bank. For almost all of these people, the IRS already receives information about each of their sources of income directly from their employers and banks. The IRS then asks these same people to spend time gathering documents and filling out tax forms, or to spend money paying tax preparers to do it. In essence, these taxpayers are just copying into a tax return information that the IRS already receives independently. The Simple Return would have the IRS take the information about income directly from the employers and banks and, if the person's tax status were simple enough, send that taxpayer a return prefilled with the information. The program would be voluntary. Anyone who preferred to fill out his own tax form, or to pay a tax preparer to do it, would just throw the Simple Return away and file his taxes the way he does now. For the millions of taxpayers who could use the Simple Return, however, filing a tax return would entail nothing more than checking the numbers, signing the return, and then either sending a check or getting a refund. ... The Simple Return might apply to as many as 40 percent of Americans, for whom it could save up to 225 million hours of time and more than $2 billion a year in tax preparation fees. Converting the time savings into a monetary value by multiplying the hours saved by the wage rates of typical taxpayers, the Simple Return system would be the equivalent of reducing the tax burden for this group by about $44 billion over ten years."

Most of this benefit would flow to those with lower income levels. The IRS would save money, too, from not having to deal with as many incomplete, erroneous, or nonexistent forms.
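
The quote leaves slightly ambiguous whether the $44 billion covers the value of time alone or time plus preparer fees, so here is a quick back-solve of the implied hourly wage under both readings (my own arithmetic, not Goolsbee's):

```python
# Back out the hourly wage implied by the numbers in the Goolsbee quote.
hours_per_year = 225e6   # hours saved per year (from the quote)
fees_per_year = 2e9      # $ per year in tax-preparation fees (from the quote)
ten_year_total = 44e9    # Goolsbee's ten-year tax-burden equivalent

# Reading 1: the $44 billion is the value of time alone.
print(ten_year_total / (hours_per_year * 10))                         # ~$19.6/hour
# Reading 2: the $44 billion is time value plus preparer fees.
print((ten_year_total - fees_per_year * 10) / (hours_per_year * 10))  # ~$10.7/hour
```

Either way, the implied wage is in a plausible range for the mostly lower-income filers the Simple Return targets.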

For the U.S., the main practical difficulty that prevents a move to pre-filling is that, with present arrangements, the IRS doesn't get the information about wages and interest payments from the previous year quickly enough to prefill income tax forms, send them out, and get answers back from people by the traditional April 15 deadline. The 2013 report of the National Taxpayer Advocate has some discussion related to these issues in Section 5 of Volume 2. The report does not recommend that the IRS develop pre-filled returns. But it does advocate the expansion of "upfront matching," which means that the IRS should develop a capability to tell taxpayers in advance, before they file their returns, what third parties are reporting to the IRS about wages, interest, and even matters like mortgage interest or state and local taxes paid. If taxpayers could use this information when filling out their taxes in the first place, then at a minimum, the number of errors in tax returns could be substantially reduced. And for those with the simplest kinds of tax returns, the cost and paperwork burden of doing their taxes could be substantially reduced.

Saturday, April 12, 2014

How Milton Friedman Helped Invent Income Tax Withholding

In one of the great ironies of economic history, Milton Friedman--known for his pro-market, limited-government views--helped to invent government withholding of income tax. It happened early in his career, when he was working for the U.S. Treasury during World War II. Of course, the IRS opposed the idea at the time as impractical. Friedman summarized the story in a 1995 interview with Brian Doherty published in Reason magazine. Here it is:

"I was an employee at the Treasury Department. We were in a wartime situation. How do you raise the enormous amount of taxes you need for wartime? We were all in favor of cutting inflation. I wasn't as sophisticated about how to do it then as I would be now, but there's no doubt that one of the ways to avoid inflation was to finance as large a fraction of current spending with tax money as possible.
In World War I, a very small fraction of the total war expenditure was financed by taxes, so we had a doubling of prices during the war and after the war. At the outbreak of World War II, the Treasury was determined not to make the same mistake again.
You could not do that during wartime or peacetime without withholding. And so people at the Treasury tax research department, where I was working, investigated various methods of withholding. I was one of the small technical group that worked on developing it.
One of the major opponents of the idea was the IRS. Because every organization knows that the only way you can do anything is the way they've always been doing it. This was something new, and they kept telling us how impossible it was. It was a very interesting and very challenging intellectual task. I played a significant role, no question about it, in introducing withholding. I think it's a great mistake for peacetime, but in 1941-43, all of us were concentrating on the war.
I have no apologies for it, but I really wish we hadn't found it necessary and I wish there were some way of abolishing withholding now."