Wednesday, April 30, 2014

100 Years of Consumer Price Index Data

The Consumer Price Index was officially first published in 1921, but at that time, price data was also published going back to 1913. Thus, there's a sense in which a very small group of geeks and wonks are now celebrating the first 100 years of price data. Those who wish to join our very small party might start with the Monthly Labor Review, published by the U.S. Bureau of Labor Statistics, which has two articles in the April 2014 issue looking back in time: "One hundred years of price change: the Consumer Price Index and the American inflation experience," and "The first hundred years of the Consumer Price Index: a methodological and political history."

Here, I'll just mention some points that caught my eye from these two articles.  But as a starting point, here's a figure showing the path of annual rates of inflation as measured by the Consumer Price Index during the last century, taken from the third edition of my Principles of Economics textbook that was published earlier this year. You can see the three big inflations: those intertwined with World War I or World War II, and the high peacetime inflation of the 1970s. You can also see the episodes of deflation after World Wars I and II, during the Great Depression, and the recent blip of deflation in the aftermath of the Great Recession.


1) Reading the history is an ongoing reminder that price changes have been politically sensitive for a long time, and indeed, that price controls have been fairly common in U.S. experience. As BLS writes: "[A]ctivist policies aimed at directly controlling prices were a regular feature of the nation’s economy until the last few decades." By the time President Richard Nixon instituted wage and price controls in 1971, there had already been price controls "during World War I, the 1930s, World War II, and the Korean war." There had also been episodes, like the New Deal legislation during the Great Depression, where the government had attempted to institute price floors to block deflation. Add in the price controls that affected certain sectors for much of the 20th century--airfares, phone service, interest rates paid by banks, public utilities, and others--and the notion that the U.S. economy used to be a wide-open free market is open to some qualification.

2) There have been various efforts over time to hold down inflation by what became known as "jaw-boning"--that is, telling firms publicly that they shouldn't raise prices, and making a fuss when they did. For example, there were "fair-price committees" established after World War I to monitor whether sellers exceeded price guidelines, and there were also attempts to restrain inflation voluntarily after the Korean war and under President Carter in the late 1970s. But perhaps the best-remembered episode today is President Ford's Whip Inflation Now (WIN) campaign in 1974:


President Ford inherited the difficult inflation situation. In late 1974, he declared inflation to be “public enemy number one.” He solicited inflation-fighting ideas from the public, and his signature “Whip Inflation Now” (WIN) campaign was started. Citizens could receive their WIN button by signing this pledge:
I enlist as an Inflation Fighter and Energy Saver for the duration. I will do the very best I can for America.
An October 1974 newspaper reprints the form containing the pledge. Tellingly, the story next to the form asserts that relief from food prices was unlikely before 1976, while another account details the administration’s efforts to advance price-fixing legislation. Buttons were hardly the only WIN product: there were WIN duffel bags (as shown below), WIN earrings, and even a WIN football. 

3) Whatever number for price increases was produced by the Bureau of Labor Statistics was often used in labor disputes. Thus, such estimates were always controversial. As BLS writes about labor disputes back in the 1940s, "[M]any were confusing the additional expense of attaining a higher standard of living for an increase in the cost of a fixed standard of living." I'd say that confusion persists today.

4) There has long been a controversy over whether the CPI just looked at prices, or whether it was seeking to capture the broader idea of a "cost of living." In 1940, the Cost of Living Division in the Bureau of Labor Statistics called its index the "Cost of Living of Wage Earners and Lower-Salaried Workers in Large Cities." But there were numerous complaints that this measure only covered price changes, and didn't capture, for example, the fact that during World War II many households had higher expenses from the need to move between homes, pay higher income taxes, and other factors. In 1945, the BLS banished the "cost of living" term, and instead renamed the index the "Consumer’s Price Index for Moderate Income Families in Large Cities."

5) However, soon after the "price index" terminology was adopted, economists began to argue--in congressional testimony in the 1950s and in the 1961 "Stigler Commission" report--that the conventional approach of pricing a fixed basket of goods purchased by consumers, and tracking how the total cost of buying that basket changed over time, was, perhaps counterintuitively, not accurately measuring how consumers were affected by price changes. The BLS quotes the 1961 Stigler report:
"It is often stated that the Consumer Price Index measures the price changes of a fixed standard of living based on a fixed market basket of goods and services. In a society where there are no new products, no changes in the quality of existing products, no changes in consumer tastes, and no changes in relative prices of goods and services, it is indeed true that the price of a fixed market basket of goods and services will reflect the cost of maintaining (for an individual household or an average family) a constant level of utility. But in the presence of the introduction of new products, and changes in product quality, consumer tastes, and relative prices, it is no longer true that the rigidly fixed market basket approach yields a realistic measure of how consumers are affected by prices. If consumers rearrange their budgets to avoid the purchase of those products whose prices have risen and simultaneously obtain access to equally desirable new, low-priced products, it is quite possible that the cost of maintaining a fixed standard of living has fallen despite the fact that the price of a fixed market basket has risen.
As the BLS notes: "In response to the review’s findings, BLS would abandon the constant-goods conception of a price index and adopt the constant-utility framework as the guiding theoretical perspective in revising future indexes." In other words, the BLS in effect returned to a broader cost-of-living concept, but one that looked only at how the cost of living was affected by changes in prices and in the types of goods available.
For those who would like a primer on the issues surrounding the measurement of inflation in an economy where consumers substitute as prices change, and where products are new and evolving, the Journal of Economic Perspectives has run a couple of symposia on the subject in the last couple of decades. The Winter 1998 issue has an article from the members of the Boskin Commission, which had published a report on ways to reform the Consumer Price Index, along with six comments. In the Winter 2003 issue, the JEP included another symposium on the CPI, this time leading off with a paper by one author of a National Academy of Sciences report on the topic, with a couple of comments. (Full disclosure: I've worked as Managing Editor of JEP since 1987.)

6) I had not known that what was then called the Bureau of Labor did a fairly detailed study of family expenditures and retail prices from 1888 to 1890.


"At the time, the federal government was accumulating large budget surpluses, and both Democrats and Republicans identified tariff policy as the preferred tool for reducing the surpluses. ... Congressional Republicans eventually won the debate and passed the Tariff Act of 1890, commonly called the McKinley Tariff, which increased the average tariff level from 38 percent to 49.5 percent. Concerned with the effect that this new tariff law would have on the cost of production in key industrial sectors, Congress requested the Bureau of Labor to conduct studies on wages, prices, and hours of work in the iron and steel, coal, textile, and glass industries. ...From 1888 to 1890, expenditure data were collected from 8,544 families associated with the aforesaid industries. Retail prices were collected on “215 commodities, including 67 food items, in 70 localities,” from May 1889 to September 1891."

7) I had also not known that there was a detailed study on expenditures and prices around 1903.

"After successfully fulfilling special requests for smaller, industry-specific statistical studies for the Congress and the President, the Bureau endeavored to conduct a comprehensive study of the condition of working families throughout the country. A survey of family expenditures from 1901 to 1903 was the first step in constructing a comprehensive index of retail prices. Bureau agents surveyed 25,440 families that were headed by a wage earner or salaried worker earning no more than $1,200 annually in major industrial centers in 33 states; with inclusivity in mind, the Bureau included African American and foreign-born families in its survey. Agents collected 1 year’s data on food, rent, insurance, taxes, books and newspapers, and other personal expenditures. Using data on the income and expenditures of 2,500 families, the Bureau derived expenditure weights, particularly for principal food items, from this study. The second step in the construction of the retail price index was the collection of retail prices on the goods reported in the expenditure survey. The Bureau collected these data from 800 retail merchants in localities which were representative of the data collected in that survey. The prices collected spanned the years from 1890 to 1903. The 3-year study culminated in the publication of a price index called the Relative Retail Price of Food, Weighted According to the Average Family Consumption, 1890 to 1902 (base of 1890–1899), the first weighted retail price index calculated and published by the Bureau of Labor. The index included monthly quotations for the average prices and relative prices (averages weighted by consumption) of 30 principal food items. The Bureau expanded this survey to include 1,000 retail outlets in 40 states, and the index ran through 1907 before ceasing publication."

8) No single price index is ever going to be perfect for all uses. So before using a price index, think about how it was constructed and whether it's the right tool for your purposes. I was delighted to see that one of the BLS articles ended by quoting a 1998 article from the Journal of Economic Perspectives on this point, by Katharine G. Abraham, John S. Greenlees, and Brent R. Moulton. They wrote: 

"It is, in fact, commonplace to observe that there is no single best measure of inflation. It is evident that the expanding number of users of the CPI have objectives and priorities that sometimes can come into conflict. The BLS response to this situation has been to develop a “family of indexes” approach, including experimental measures designed to provide information that furthers assessment of CPI measurement problems, or to focus on certain population subgroups, or to answer different questions from those answered by the CPI. All of these measures are carefully developed but have their own limitations. Those who use the data we produce should recognize these limitations and exercise judgment accordingly concerning whether and how the data ought to be used." 








Monday, April 28, 2014

Airplane Boarding Time: Managing a Scarce Resource

For economists, any effort to face the tradeoffs involved in allocating a scarce resource is part of the broader subject--including allocating the time of passengers when boarding airplanes.

It's intuitively clear, at least if you mull it over a bit, that boarding a plane from front to rear will probably be the most time-consuming and thus inefficient method, because everyone in the back of the plane will be waiting in line while those in the front of the plane are getting seated. But at least for me, it is much less intuitive that boarding a plane from rear to front may be the second-most time-consuming way. R. John Milne and Alexander R. Kelly explain why, and offer their own proposal, in "A new method for boarding passengers onto an airplane," recently published in the Journal of Air Transport Management (34, 93-100).

Milne and Kelly offer an improvement on the method of airline boarding discussed by Jason H. Steffen in his 2008 article, "Optimal boarding method for airline passengers," also in the Journal of Air Transport Management (v. 14, pp. 146-150). Steffen refers to earlier simulations of passenger boarding which found that allowing passengers to board in random order was actually faster than the then-traditional back-to-front method. The key insight here is that what slows down airline boarding is people sliding their luggage into the overhead compartment and scooting in and out of seats (as when the person sitting by the window is last to arrive, so the middle and aisle occupants need to scoot out to accommodate that person). When an airplane is loaded from back to front, people end up standing in the aisles while these delays are happening in the back, and in the meantime, no one further forward is getting settled. With random boarding, at least some people are getting settled throughout the plane at any given time, which is why it turns out to be faster.

Steffen's insight was that if airlines were to seat people in the window seats first, and also offer people enough room to hoist up their luggage to the overhead rack without getting in anyone else's way, then boarding could be much more rapid. Here's a figure from Milne and Kelly showing how the Steffen method would work. The diagram shows seats in a hypothetical plane with 20 rows, six seats per row, and the front of the plane at the top. The numbers show the order in which people would board.

In this method, the first 10 passengers take every other window seat on the right side of the plane. These folks walk in, load their luggage, and sit. They find it easy to load their luggage, because no one in the row immediately behind them is boarding at the same time. Notice that with this method, 10 bags are being loaded simultaneously--which is 10 mini-delays being averted--and 10 window-seat passengers slide in without hindrance--which is 10 other mini-delays being averted. The next 10 passengers take every other row of the window seats on the left-hand side of the plane, back to front. Then it's back to the right-hand side for the remaining window seats on that side; back to the left-hand side for the remaining window seats; then on to the same method for middle seats and then aisle seats. In experimental studies that Steffen later conducted, this method is about 25% faster than random boarding (which, remember, beats back-to-front boarding).
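
To make that ordering concrete, here is a small sketch (my own illustration in Python, not code from either paper) that generates a Steffen-style boarding sequence for the hypothetical 20-row, six-seat plane, with rows numbered 1 at the front and 20 at the rear:

def steffen_order(rows=20):
    """Generate a Steffen-style boarding sequence for a single-aisle plane
    with `rows` rows and six seats per row. Returns (row, side, seat_type)
    tuples in boarding order, working from the rear of the plane forward
    within each wave."""
    order = []
    for seat_type in ["window", "middle", "aisle"]:
        # Four waves per seat type: alternate rows, one side at a time,
        # always starting at the back of the plane.
        for side, start_row in [("right", rows), ("left", rows),
                                ("right", rows - 1), ("left", rows - 1)]:
            for row in range(start_row, 0, -2):
                order.append((row, side, seat_type))
    return order

seq = steffen_order()
print(seq[:10])  # the first wave: every other right-hand window seat, rear to front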

Milne and Kelly offer an additional twist: assign seats to passengers in a way that spreads carry-on baggage throughout the plane. Sort passengers into three categories: those with two bags, one bag, or no bags. In the Steffen method, it's possible that some of those who get on, say, with the first 10 passengers will have two carry-on bags, one bag, or no bags. It will take these people different amounts of time to get their luggage stowed and to get seated. But if all those with two bags got on at the same time, and took about the same amount of time to store their bags, then boarding could proceed more quickly. Based on mathematical simulations (with assumptions about how long it takes to walk up the aisle, store a bag, and so on), they find that their method boards a plane about 10 seconds faster than the Steffen method.

So does any of this really matter? After all, because these results are based on models and experiments of how people boarding planes behave, it's easy to come up with real-world issues that would need to be addressed. This plan is based on single travellers, but what about people travelling together or families travelling with children? How would airlines gather information about the number of carry-on bags people have: would you have to tell them your number of carry-on bags, or would the airline try to infer the number from other information about you (for example, a frequent flyer who will only be staying in another city for a day might be more likely to be a business traveler with a roll-along bag)? How would the airline communicate the boarding pattern to those at the gate: for example, would people's boarding pass have a number for boarding the plane, or would different lines be set up, or what?

Of course, people will never board planes with the sort of mechanical efficiency shown in these computer models. But changes in how airlines board their passengers have already started arriving, and more are probably on their way.

The potential cost savings for the airlines are real. Milne and Kelly cite estimates that it costs an airline about $30/minute when a plane is sitting on the ground. There are about 8.3 million U.S. domestic flights annually. If it's possible to save 2 minutes/flight with faster boarding times, then that's worth almost $500 million per year (that is, 8.3 million x 2 minutes x $30/minute).
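
As a quick check on that arithmetic, using the figures just quoted:

flights_per_year = 8.3e6   # U.S. domestic flights in 2013
minutes_saved = 2          # assumed boarding time saved per flight
cost_per_minute = 30       # rough cost of a plane sitting at the gate, in dollars

print(flights_per_year * minutes_saved * cost_per_minute)  # about 498 million dollars per year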

The potential savings already have airlines tweaking their methods. Eric Chemi, writing at Bloomberg Businessweek, reviews some of these issues and notes: "American Airlines spent two years studying its boarding process and landed on a randomized, zone-based system. Last year it introduced a tweak that gives a slightly higher priority to passengers who have no carry-ons for the overhead bin. United uses an “outside-in” boarding process by which people with window seats board ahead of those on the aisle."

Moreover, it's worth remembering that the results of economic analysis have affected airline behavior before. Of course, the deregulation of US airlines back in 1978 was encouraged by economic studies that showed how greater competition would lead to lower fares and more consumer choice. Last week, I posted an example of an economist arguing for improving competition in the U.S. airline market by letting foreign-owned airlines carry passengers in the domestic U.S.

But my personal favorite example of how work by an academic economist affected the day-to-day practices of the airlines is a 1968 paper by Julian Simon called "An Almost Practical Solution to Airline Overbooking." Back in 1968, if a flight was overbooked and you arrived after the seats were filled, you were just bumped to a later flight: no recourse, no compensation. Simon suggested that as passengers arrived for their flight, they would all be handed an envelope and a bid form, on which they would write down the amount of money in exchange for which they would be willing to be bumped to a later flight. If the flight was overbooked, the airline would just open the envelopes and choose the lowest bidders. Of course, Simon's method didn't literally come to pass (and remember, he called it an "almost practical" solution), but the now-common practice of airlines offering a free ticket on a later flight to those willing to be bumped was a way of implementing the concept.
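
For concreteness, here is a toy version of Simon's sealed-bid idea (a sketch of my own, with hypothetical names and numbers, not anything from his paper): collect each passenger's minimum acceptable compensation for taking a later flight, and bump the lowest bidders.

def choose_bumped(bids, seats_over):
    """`bids` maps each passenger to the smallest payment he or she would
    accept to take a later flight. Bump the `seats_over` lowest bidders;
    a simple version pays each of them his or her own bid."""
    ranked = sorted(bids.items(), key=lambda item: item[1])
    return ranked[:seats_over]

bids = {"Ann": 150, "Bo": 400, "Cal": 90, "Dee": 250}
print(choose_bumped(bids, 2))  # [('Cal', 90), ('Ann', 150)] -- the two cheapest volunteers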

Similarly, I expect that in the next few years, airlines will try to find practical ways to implement the suggestions of researchers like Steffen, Milne, and Kelly. From the outside, I expect these new methods of boarding will sometimes appear even more arbitrary than the current methods. But if they get everyone from the gate to being seated on the plane a little faster, I'm all for it.



Friday, April 25, 2014

Grocery Shopping in France, the UK, and the US

Food purchases are different in the United States, France, and the United Kingdom. "For example, US households purchase more calories per person. A greater percentage of those calories comes in the form of carbohydrates, and a lower share in the form of proteins. A higher share of expenditure is on drinks and prepared foods, and a lower share is on fruits and vegetables." Pierre Dubois, Rachel Griffith, and Aviv Nevo dig into these questions in their paper, "Do Prices and Attributes Explain International Differences in Food Purchases?" which appears in the March 2014 issue of the American Economic Review (104:3, pp. 832-867). The AER isn't freely available on-line, but many readers will have access through library subscriptions. For me, the paper is a useful exercise in showing how statistics about averages can still paint a picture of how real people behave.

The authors have information from household surveys in which thousands of households in each country recorded all their purchases of food for consumption at home. (One limitation of this data is that it doesn't look at food consumed in restaurants.) As they write: "We have information on quantities, prices, and characteristics of the products purchased at the level of the individual food product, as defined by the barcode or what is called the Universal Product Code (UPC) in the United States. The characteristics include nutritional characteristics such as calories, proteins, fats, and carbohydrates, as shown on nutritional labels." Here are some overall patterns, expressed in terms of averages per adult per day:




For example, note that Americans spend less per person on food each day, and also consume more calories. This pattern is of course no surprise to economists, because food prices in the U.S. are on average lower, so a basic economic model would expect quantity demanded to be higher. Notice also that compared with the French, Americans on average consume less fat and less protein, but more carbohydrates.

With their detailed data set, Dubois, Griffith, and Nevo  can also break down these differences further, into the expenditure categories shown below. They write: "The UK and US expenditure patterns are more similar, while the French numbers are different. The average French household spends less on processed food, such as drinks and prepared foods, and more on basic ingredients such as meats, dairy, fruits, and vegetables, both in dollar terms and as a fraction of overall expenditure. The average UK and US household spends less than French households on meats, and the United Kingdom spend more on grains, while the average US household spends less on dairy and more on drinks and prepared foods."



They can also look not just at what is spent on categories of food, but also at the quantities of food purchased for home consumption. This table shows quantities consumed, and share of calories, calculated per adult for a three-month period. Americans get a higher share of their (higher level of) calories from prepared foods and from drinks. Apparently, the French are much more likely to purchase water in the drinks category. The typical American consumes a smaller quantity of vegetables than the French or the British, and a smaller quantity of dairy. The Americans and French consume about the same quantity of meat, both more than the British. The authors write: "Generally, the French tend to purchase less processed food, such as drinks and prepared foods, and more basic ingredients such as meats, dairy, and vegetables. This is especially true compared to the US purchasing patterns. The UK and US purchasing patterns are more similar, but even here there are differences, with the average UK household consuming more vegetables, grains, and dairy, and the average US household consuming more meat and drinks."




One final way to slice the data is to look at the nutritional content of these various categories. As noted a moment ago, "drinks" means something different depending on whether people are more likely to buy water or full-calorie sweetened beverages. In the chart below, for example, the typical 100 grams of drink purchased by the French has 27 calories from carbohydrates, while the typical 100 grams of drink purchased by an American has 69 calories from carbohydrates. They explain: "For example, the meat products that US households buy have on average much more fat and carbohydrate than the meat products that French households purchase, which are more protein intensive. Another example: we saw above that the higher fraction of calories from prepared foods in the United States is consistent with prepared foods in the United States being more calorie dense relative to UK prepared foods. The difference in calories from prepared foods seems to come from the differences in carbohydrates and fats. Drinks are also much more carbohydrate intense in the United States than in the United Kingdom, and even more than in France."



The main focus of the paper is not just to report these patterns, but to estimate an economic model of demand for different products, which lets the authors tackle the question of what causes these kinds of differences across countries. Differences in food prices across countries? Differences in the characteristics of the available food? Differences in preferences and eating habits? Here are a few of their findings:

"Price differences mostly explain the large difference in caloric intake between the average French and US household. However, nutrient characteristics are important when comparing to the United Kingdom, and differences in preferences and eating habits are generally quite important, and in some cases can offset the influences of the economic environment. For example, we find that UK households have healthier purchasing patterns than US households despite the prices and product offering they face, not because of them. ... 
The French have the highest relative preference for fats and proteins in dairy and meat. And the Americans have the highest preference for proteins in prepared food, and the lowest for fats in prepared foods. The ratio of the fats coefficients to the carbohydrates coefficient is the highest in France and the lowest in the United States, while the ratio of proteins to carbohydrates tends to be higher in the United States compared to France and the United Kingdom (this is mostly driven by the coefficient for the prepared category)."
Of course, given the public health issues posed by obesity, an obvious question is the extent to which public policy might seek to shape eating habits. But for now, I would just emphasize that the common food choices even among high-income countries vary by more, and in some different ways, than I would have guessed.







Thursday, April 24, 2014

Comparing Electricity Production Costs: Fossil Fuels, Wind, Solar

To compare the costs of producing electricity in various ways, the U.S. Energy Information Administration uses what is called "levelized cost." The idea is to consider the cost of building a new electricity-generating facility, thus using the most recent technology, and then using that plant to produce electricity for 30 years. Of course, some methods of producing electricity like solar and wind will have a high up-front cost, but then no additional cost for fuel. Other methods of producing electricity like coal or natural gas might have lower costs up-front, but then need to pay for fuel in the future. Looking at the levelized cost over 30 years is a framework that takes such differences into account.
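
As a rough illustration of what "levelized" means (a deliberately simplified sketch of my own with made-up numbers, not the EIA's actual methodology), the idea is to divide the discounted stream of capital, operating, and fuel costs by the discounted stream of megawatt-hours the plant is expected to produce:

def levelized_cost(capital_cost, annual_fixed_om, fuel_and_var_cost_per_mwh,
                   capacity_mw, capacity_factor, years=30, discount_rate=0.07):
    """Very simplified levelized cost of electricity, in dollars per MWh.
    capital_cost: up-front construction cost; annual_fixed_om: fixed operating
    and maintenance cost per year; fuel_and_var_cost_per_mwh: fuel plus other
    variable costs; capacity_factor: share of hours the plant actually runs."""
    mwh_per_year = capacity_mw * capacity_factor * 8760  # hours in a year
    pv_costs = capital_cost
    pv_mwh = 0.0
    for t in range(1, years + 1):
        discount = (1 + discount_rate) ** t
        pv_costs += (annual_fixed_om + fuel_and_var_cost_per_mwh * mwh_per_year) / discount
        pv_mwh += mwh_per_year / discount
    return pv_costs / pv_mwh

# Illustrative, made-up numbers: a capital-heavy plant with no fuel bill (wind-like)
# versus a cheaper-to-build plant that must keep buying fuel (gas-like).
print(levelized_cost(2.0e9, 2.0e7, 0, capacity_mw=600, capacity_factor=0.35))
print(levelized_cost(0.9e9, 1.5e7, 30, capacity_mw=600, capacity_factor=0.87))

The comparison then turns on which matters more over 30 years: the heavy up-front capital cost of a wind or solar plant, or the ongoing fuel bill of a coal or natural gas plant.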

Here are two sets of levelized estimates for producing electricity that I've put together from an April 2014 EIA report. The first column shows levelized costs, expressed in 2012 dollars per megawatt-hour, of producing electricity for a plant where construction is started now and the plant is ready to produce at full scale in 2019. The second column, again expressed in 2012 dollars per megawatt-hour, is the estimated cost for a plant that would be started in about 20 years and would begin producing in 2040. Thus, the second column incorporates estimates of how the costs of various fossil fuels will evolve and how technological progress in electricity production will play out over the next couple of decades.



Here are some thoughts about these numbers:

1) The table is divided into "dispatchable" and "non-dispatchable" technologies. Basically, dispatchable technologies produce electricity when you want it. Non-dispatchable technologies produce electricity when nature is willing: that is, when the wind is blowing, the sun is shining, and there's water in the dam to flow through the turbines. For understandable reasons, those responsible for running electrical grids have some preference for dispatchable energy, because they know it can be there when they want it. However, this advantage is not taken into account in the levelized cost estimates.

2) These estimates also take into account the "capacity factor," which is the proportion of the time that the facility is actually producing electricity. Coal, natural gas, and biomass have capacity factors in the range of 83-87%. (The exception here is the "turbine" approaches to natural gas. These are smaller-scale plants meant to be run only at times of peak demand when electricity is most needed, so their "capacity factor" is 30%--which is why their costs of generating electricity are comparatively high.) Nuclear has a capacity factor of 90%. Wind has a capacity factor of 35-37%; solar has a capacity factor of 20-25%; and hydroelectric has a capacity factor of 53%.

3) The cost estimates refer to building the electricity production capacity at an appropriate location. Thus, while geothermal is the cheapest way of producing electricity of the options here, the locations where geothermal electricity can be produced at this low cost are somewhat limited. It probably makes sense to keep looking for new places to produce geothermal electricity, but the reason it costs more in the 2040 projections than in the 2019 projections is based on the belief that future locations for geothermal will be more costly than current ones.

4) An obvious question about these comparisons is the extent to which they take environmental differences into account: in particular, what about the carbon emissions from burning fossil fuel? The EIA writes: "3 percentage points are added to the cost of capital when evaluating investments in greenhouse gas (GHG) intensive technologies like coal-fired power and coal-to-liquids (CTL) plants without carbon control and sequestration (CCS). In LCOE terms, the impact of the cost of capital adder is similar to that of an emissions fee of $15 per metric ton of carbon dioxide (CO2) when investing in a new coal plant without CCS, which is representative of the costs used by utilities and regulators in their resource planning. The adjustment should not be seen as an increase in the actual cost of financing, but rather as representing the implicit hurdle being added to GHG-intensive projects to account for the possibility that they may eventually have to purchase allowances or invest in other GHG-emission-reducing projects to offset their emissions. As a result, the LCOE values for coal-fired plants without CCS are higher than would otherwise be expected." Of course, what sort of cost adjustment is appropriate for carbon-emitting sources of electricity can be disputed, but there is some adjustment built into these numbers.

5) For the 2019 estimates, natural gas is the cheapest of the fossil fuel approaches. Of course, this is in part because natural gas prices in the U.S. have fallen; further, because natural gas cannot easily be shipped around the world, the US price can remain lower than in other countries.  Other research suggests that when taking the sum of private costs of production and the environmental costs into account, natural gas is the low-cost choice.

6) Wind and solar photovoltaics are expected to become cheaper ways of generating electricity over time, as you can see from comparing the 2019 and 2040 columns. But the locations for cost-effective use of wind resources are limited. And at least according to the U.S. Energy Information Administration, solar electricity will still be more costly than electricity from fossil fuels in 2040.

7) Meanwhile, the cost of generating electricity from coal is projected to keep falling, too.


Wednesday, April 23, 2014

U.S. Airline Deregulation: The Next Step

The philosophy of the U.S. domestic airlines over the last decade or so is simple: fewer flights, packed with more passengers. Consider the data from the Bureau of Transportation Statistics. The total number of domestic U.S. flights was 8.3 million in 2013, down from 10 million in 2005. The number of available seat-miles was 693 billion in 2013, down from 740 billion in 2005. However, the number of passengers dropped by much less: there were 645 million domestic passengers on U.S. flights in 2013, down only a bit from 657 million in 2007. And revenue passenger-miles--that is, the number of passengers times the distance flown--actually rose slightly, from 571 billion in 2005 to 578 billion in 2013.

It has been possible to have fewer flights but more revenue passenger-miles because the average flight is fuller. The "load factor," which is calculated as actual passenger-miles traveled as a proportion of available seat-miles, was 83.5% in 2013, up from 77% in 2005 and 70% in 2002. In other words, US airlines have been competing to jam more people into fewer flights.
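
As a quick check on that arithmetic, using the 2013 totals above:

revenue_passenger_miles = 578e9  # 2013
available_seat_miles = 693e9     # 2013
print(f"{revenue_passenger_miles / available_seat_miles:.1%}")  # 83.4%, matching the reported load factor up to rounding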

This background is part of Kenneth Button's case for "Really Opening Up the American Skies," in the Spring 2014 issue of Regulation magazine. Button points out that while airline deregulation back in 1978 led to lower prices and additional service (through hub-and-spoke route systems), those patterns have been changing in recent years: the number of U.S. domestic routes has been contracting and airfares have stopped declining. Button writes:

"The deregulation of the 1970s, by removing entry quantitative controls, led to a considerable increase in services. It also increased the capability of individuals to access a wider range of destinations from their homes via the hub-and-spoke system of routings that emerged. This pattern has been reversed since 2007. The largest 29 airports in the United States lost 8.8 percent of their scheduled flights between 2007 and 2012, but medium-sized airports lost 26 percent and small airports lost 21.3 percent. ...
The advent of jet and wide-bodied aircraft lowered costs in the 1960s and 1970s, and the 1978 Airline Deregulation Act caused the trend to continue in the 1980s and 1990s. Since then, real airline fares within the United States have largely plateaued; they fluctuate as fuel prices and economic growth oscillate and the temporary effects of mergers are felt. The challenge is to get the fare curve moving down again. The issue is not simply a matter of fares, but also the number and nature of services that are provided. People who no longer have ready access to air services are confronted with an infinite airfare—a fact not reflected in the airline airfare statistics."
What's the answer? Button argues that it's time for the next step in U.S. airline deregulation: that is, letting airlines from other countries enter the U.S.  market and deliver U.S. passengers between U.S. cities. He writes:
"In sum, the 1978 Airline Deregulation Act only partially liberalized the U.S. domestic airline market. One important restriction that remains is the lack of domestic competition from foreign carriers. The U.S. air traveler benefited from the country being the first mover in deregulation, and this provided lower fares and consumer-driven service attributes some 15–20 years before they were enjoyed in other markets; the analogous reforms in Europe only fully materialized after 1997. But the world has
changed, and so have the demands of consumers and the business models adopted by the airlines. ...  But remaining regulations still limit the amount of competition in the market and, with this, the ability of travelers to enjoy even lower fares and a wider range of services."
Back when U.S. airline deregulation was being considered in the 1970s, one of the most powerful examples for advocates of deregulation was that if you looked at airfares between cities within a certain state--say, within Texas or within California--they were much lower than airfares between similar cities in different states. The reason was that airfares and routes on within-state flights weren't federally regulated, and so it could be seen that competition offered a better deal for customers. In a similar spirit, Button offers some examples of fares for European carriers like Ryanair or easyJet compared with U.S. carriers like Southwest or JetBlue. For flights of similar distance, the European airlines are often charging a lot less.



The objections to allowing foreign airlines into the U.S. domestic market tend to fall into two broad categories. One argument is that the foreign airlines will provide inferior service and don't have a sense of what U.S. customers want, so they won't attract much business. A cynic might answer that inferior service and ignorance about what customers want are not exactly unknown characteristics among the current U.S. airlines. Also, while this concern over how foreign airlines might suffer financial losses must needs touch a tender chord of throbbing emotion in every American breast, frankly, the foreign airlines can look after themselves. The other argument is that the foreign airlines will be so successful that the workers of U.S. airlines will suffer. But most of the people working at airports, like baggage handlers and ground crew, will continue to be Americans. And the aftermath of the 1978 airline deregulation teaches that if more efficient practices and lower fares bring a new surge of airline customers, then the industry as a whole--and the American workers in the airline industry broadly defined--will expand.

My wife and I have three children. I favor more competition and lower airfares.


Tuesday, April 22, 2014

Earth Day: A Baptists and Bootleggers Story

Earth Day was first celebrated on April 22, 1970. It is now observed in 192 countries, and is coordinated by the Earth Day Network. Bruce Yandle offers a hard-eyed look at how the original Earth Day affected U.S. environmental legislation in "How Earth Day Triggered Environmental Rent Seeking," which appeared in the Summer 2013 issue of the Independent Review.

One of Yandle's signature insights is the idea of a "Baptists-and-bootleggers" coalition. Who favored prohibition of alcohol sales? Baptists, on moral grounds, and bootleggers, because government prohibition would limit competition and boost their profits. He makes a strong argument that Earth Day led to a similar environmentalists-and-industrialists coalition, in which environmentalists pushed for laws to reduce pollution, and industrialists pushed for anti-pollution laws that would hinder their competition.

Before the passage of the Clean Air Act and Clean Water Act in 1970, pollution was often restricted by common law cases brought through the courts. From the point of view of incumbent business, these court cases were an unpleasant way to deal with environmental problems. Court decisions could be inconsistent, and sometimes harshly punitive. But in addition, common law court decisions offered no way to inhibit competition by raising the costs of new entrants and rival producers. Thus, many large companies saw opportunities to limit competition in the idea of federal environmental laws.

In some ways, the use of anti-pollution laws to limit competition was pretty obvious. For example, the new environmental laws commonly grandfathered in existing plants, but required new plants to meet much stricter standards.

In other ways, the methods of restricting competition were less obvious. Consider that there are essentially three ways to set environmental standards. One is to use economic incentives like pollution taxes and tradeable pollution permits. A second is to set performance standards for how much pollution can be emitted, but to leave firms the flexibility to decide how to meet the standards in the most cost-effective way. The third way is a technological standard which requires that every firm use the same method for reducing pollution. When a technological standard is required, then firms which could have reduced pollution more cheaply are not allowed to gain a competitive advantage from doing so--because all must follow the prescribed standard.
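
A toy two-firm example (my own numbers, purely illustrative) shows what is at stake in that choice: when every firm must meet the same requirement in the same prescribed way, the economy loses the savings that come from letting the cheapest abater do more of the work, and the low-cost firm loses any competitive edge from its lower abatement cost.

# Two firms must together eliminate 100 tons of pollution.
# Hypothetical abatement costs: Firm A at $10 per ton, Firm B at $40 per ton.
cost_a, cost_b = 10, 40

# Prescribed, uniform approach: each firm cuts 50 tons regardless of cost.
uniform_total = 50 * cost_a + 50 * cost_b   # $2,500

# Flexible approach (tax or tradeable permits): the low-cost firm does the cutting.
flexible_total = 100 * cost_a               # $1,000

print(uniform_total, flexible_total)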

For several decades after 1970, one could at least argue that most environmental indicators were moving in the right direction. But after a review of the more limited progress against air and water pollution in the last couple of decades, Yandle argues, "These data strongly suggest we have hit the cleanup limits of a top-down, command-and-control, technology-based pollution-control system. We know we can do better, and so do EPA managers."

Thus, the environmental authorities have been pushing away from technology-based standards, and toward offering flexibility in meeting environmental goals. In the case of water pollution, Yandle reports: "In 1991, the EPA began to push hard to develop watershed-based nutrient trading communities where publicly owned treatment works and other dischargers are allowed to exchange discharge offsets. In some cases, farmers and land developers are included in the larger trading communities. When trades take place, the incremental cost of reducing pollution falls dramatically."

In the case of air pollution, flexible pollution permit trading arrangements were used to reduce lead emissions in the 1980s, and sulfur dioxide emissions since the 1990s. Yandle writes: "For the nation, as of 2011 there are 242 nonattainment counties for ozone, 121 for PM2.5. But get this, there are just 9 nonattainment counties, which are those that have not achieved EPA National Ambient Air Quality Standards, for sulfur dioxide, the only criteria pollutant managed by markets. Indeed, since 1990, sulfur dioxide emissions have been reduced 65 percent at an EPA estimated cost of from $1.17 to $2 billion. If command-and-control had been used instead of markets, the estimated cost would have ranged from $7.5 to $11.5 billion ..."

For those interested in learning more about these flexible systems for reducing pollution with tradeable permits, the Winter 2013 issue of the Journal of Economic Perspectives had a symposium on the subject. It starts with an overview paper by Lawrence H. Goulder, "Markets for Pollution Allowances: What Are the (New) Lessons?" There are then three papers on specific applications. Richard Schmalensee and Robert N. Stavins discuss "The SO2 Allowance Trading System: The Ironic History of a Grand Policy Experiment"; Richard G. Newell, William A. Pizer and Daniel Raimi tackle "Carbon Markets 15 Years after Kyoto: Lessons Learned, New Challenges"; and Karen Fisher-Vanden and Sheila Olmstead explore "Moving Pollution Trading from Air to Water: Potential, Problems, and Prognosis."
As always, all papers in the JEP back to the first issue in 1987 are freely available, courtesy of the American Economic Association. (Full disclosure: I've been Managing Editor of JEP since 1987, too.)

Is there some reason that the environmentalists and the industrialists will be willing to move away from technology-based and performance-based environmental standards and embrace a more flexible incentive-based approach? Yandle offers the following argument: "At some point, the environmental Baptists will see that they are losing ground. The system they have supported no longer delivers the goods they desire. As we have seen, major elements of environmental progress are dead in the water. And the bootleggers? At some point, global competition becomes so severe that regulatory rent seeking no longer pays. For durable regulation to survive, bootleggers and Baptists must be singing off the same page. For now, the music has stopped."





Monday, April 21, 2014

Behind the Long-Term Rise in U.S. Health Care Costs

There is ongoing controversy over where U.S. health care costs are headed next. Has the rate of growth slowed, and if so, when and why? Did it just slow briefly during the aftermath of the Great Recession and is it now speeding up again? Health care expenditures in the U.S. economy were 5% of GDP in 1960, and have risen steadily to 17% of GDP. Of course, if health care is getting a bigger slice of the GDP pie, then other desirable areas of spending, both for households and for government, must be getting a smaller slice. Indeed, the projections for rising health care costs are by far the largest factor driving the projections of expanding federal budget deficits in the long run.

Louise Sheiner offers some useful "Perspectives on Health Care Spending Growth"  in a paper recently written for the Engelberg Center on Health Care Reform at the Brookings Institution. She makes the point that even as health care costs have been rising, public and private health care insurance has been expanding so that Americans have been paying a lower share of those costs out of pocket.

Indeed, given the rise in health care costs as a share of GDP, and the fact that Americans are paying a lower share of those expenses out of pocket, the overall balance is that out-of-pocket health care costs as a share of GDP haven't risen for several decades. To put it another way: Back in 1960, health care spending was 5% of GDP, and Americans paid about half of that--2.5% of GDP--in the form of out-of-pocket costs. Now health care spending is 17% of GDP, but only 2% of GDP is being paid in out-of-pocket health care costs--with public and private insurance paying for the rest.
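
In round numbers, the comparison works out like this:

# 1960: health spending was 5% of GDP, and roughly half of it was paid out of pocket.
oop_share_of_gdp_1960 = 0.05 * 0.5        # 2.5% of GDP
# Today: spending is about 17% of GDP, but only about 2% of GDP is paid out of pocket,
# so the out-of-pocket share of health spending has fallen to roughly 12 percent.
oop_share_of_spending_now = 0.02 / 0.17   # about 0.12
print(oop_share_of_gdp_1960, round(oop_share_of_spending_now, 2))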



As Sheiner writes: "As [health care] spending rise as a share of income, two things happen: insurance contracts change to insulate people from the risk of large expenses if they become ill, and public programs expand to help maintain access to health services for lower income. Both of these changes fuel increased adoption of health technology. ... It is clear that it is the combination of technological innovation and a continued willingness-to-pay for that technology that has allowed health spending to rise faster than income for so long. For example, without the dramatic decline in the share of health expenditures paid out-of-pocket, many Americans would simply not have been able to afford the new technologies when they became ill. It is inevitable that this willingness-to-pay will diminish at some point, but we have very little ability to predict when that will be."

What does this mean for the future path of health care spending? Sheiner analyzes patterns of GDP growth over time compared with health care costs. Like other analysts, she observes that the rise in health care costs started slowing down about a decade ago--that is, well before the Patient Protection and Affordable Care Act was enacted in 2010. She cautions against reading too much into this slower rise of health care costs: "[T]he slowdown in health spending growth observed since 2002 is largely the result of the two recessions that occurred in the last decade ... [I]t would be hard to argue that a few years of slower growth should be viewed as a turning point, particularly given that the recent slowdown occurred during unusual times: a decade of very slow economic growth and very low inflation (which made it harder for firms to pass on health insurance costs to their employees and may have required larger adjustments than usual), a major health reform that was accompanied by much confusion and fear, and a huge runup in budget deficits that intensified attention on the need for future spending cuts."

Friday, April 18, 2014

When Technology Spreads Slowly

One of the most important issues in thinking about the economic growth potential for the U.S. economy is this question: Has the U.S. economy already seen most of the economic growth that will result from the innovations in information and communication technology, including the web, the cloud, robotics, and so on? Or is the U.S. economy perhaps only a fraction of the way--perhaps even less than halfway--through its adaptation to the potential for productivity gains from these technologies, and thus has stronger prospects for future growth?

When confronted by these kinds of questions, hindsight is clearer than foresight. And among economic historians, it is actually a standard insight that major new technologies can take decades to diffuse through the economy. Rodolfo E. Manuelli and Ananth Seshadri offer an example in "Frictionless Technology Diffusion: The Case of Tractors," which appears in the April 2014 issue of the American Economic Review. (The article is not freely available on-line, but many readers will have access through library subscriptions. Full disclosure: the AER is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor.) They point out that in simple economic models, a firm just chooses a technology--and can choose a new technology at any time it wants. But in the real world, new technologies often take time to diffuse. They note that surveys of dozens of new technologies often find that it takes 15-30 years for a new technology to go from 10% to 90% of the potential market. But some major inventions take longer.

Here's how the tractor slowly displaced horses and mules in the U.S. agricultural sector from 1910 to 1960. Horses and mules, shown by the black dashed line and measured on the right-hand axis, declined from about 26 million in 1920 to about 3 million by 1960. Conversely, the number of tractors, shown  by the blue solid line, rose from essentially zero in 1910 to 4.5 million by 1960.



What factors might explain why it would take a half-century for tractors to spread? Lots of answers have been proposed: farmers needed time and experience to learn about the new technology; older farmers preferred not to learn, but gradually died off; some farmers didn't have large enough farms to make tractors economically viable; some farmers didn't have the financial ability to invest in a tractor; there was a lack of information about the benefits of tractors; established interests like the horse and mule industry pushed back against tractors where possible. Manuelli and Seshadri offer another explanation: During much of this time, the quality of tractors was continually improving, and also during the earlier part of this time period (like the Great Depression) wages for farm workers were not rising by much. Thus, it made some sense for a number of farmers to avoid buying the early generations of tractors. Let someone else work out the kinks! But as the quality of tractors improved and wages of farmworkers rose, investing in a tractor began to look like a better and better deal.
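
Their story can be caricatured with a simple adoption rule (a sketch of my own with made-up numbers, not the model in their paper): buy the tractor only when the yearly labor cost it saves exceeds a rough annualized cost of owning it.

def switch_to_tractor(annual_wages_saved, tractor_price, useful_life=10, interest_rate=0.06):
    """Rough adoption rule: buy if the labor cost saved each year exceeds a
    crude annualized ownership cost (straight-line depreciation plus interest
    on the purchase price). Purely illustrative."""
    annualized_cost = tractor_price / useful_life + interest_rate * tractor_price
    return annual_wages_saved > annualized_cost

# Flat Depression-era wages and early, pricier tractors: the rule says wait.
print(switch_to_tractor(annual_wages_saved=150, tractor_price=1200))  # False
# Rising wages and better, cheaper tractors later on: the rule says buy.
print(switch_to_tractor(annual_wages_saved=300, tractor_price=1000))  # True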

My own personal favorite example of the slow diffusion of technology was laid out by Paul David in "Computer and Dynamo: The Modern Productivity Paradox in a Not-too-Distant Mirror," which appeared in a 1991 OECD book called Technology and Productivity: The Challenge for Economic Policy. At the time David was writing, the U.S. was still mired in a productivity slowdown that had started in the 1970s. However, there had clearly been a lot of computerization during that time, leading to a much-repeated comment from Robert Solow: "You can see the computer age everywhere but in the productivity statistics." David harked back to the historical example of using dynamos to produce electricity, explaining that this innovation was around for decades, sometimes at what seemed to be very large scale, before it showed up in productivity gains.

Dynamos had been producing electricity that was used for illumination since the 1870s. This technology was well-known enough that the Paris Exhibition of 1900 included many examples of electrical machinery, run from the power generated by dynamos that were 40 feet tall. But the Paris Exhibition also used electric light on a more widespread basis in public spaces than ever before. David wrote: "Although Europeans already knew of electric lighting for decades, never before Paris 1900 had it been used to illuminate a whole city--in such a way that outdoor festivals could continue into the night."

However, despite the demonstrated technological capabilities of generating and using electricity, and what seemed like a strong array of technological and scientific breakthroughs, productivity growth in the U.S. and the UK economies was actually relatively slow for about two decades after 1890. It's not until the 1920s that productivity growth based on electrification really took off. In retrospect, the reasons why are clear enough. Although the technology was already well-known, it took time for electrification to become widespread. Here's one figure showing diffusion of electrification in the household sector, and another showing the industrial sector. You could illuminate Paris with electrical light in 1900, but most places in the US didn't have access to electricity then.


But it wasn't just the spread of electricity. It was also the changes that industry and households needed to make to take advantage of it. Factories had long run on a "group drive" principle, where a single source of power (like water power or steam engines) powered everything through a series of gears. A "group drive" arrangement set constraints on the location of the factory and the organization of the machines. Electrification made "unit drive" possible, where factories had much more freedom to choose their location and set up their machines, but it took time and learning to figure out the best ways of doing this. More broadly, electricity changed everything from the lighting in factories to fire safety, along with changes in the ability to develop new chemical and heating processes, and much more. For US households, it took time--really up into the 1920s--until they had both a source of electricity and also a supply of new household appliances like vacuum cleaners, radios, washing machines, and dishwashers, along with all the changes of lifestyle that came with reliable indoor electric light.

In the mid-1990s, several years after Paul David's essay was published, a US productivity resurgence rooted in making and using information and communications technology did occur. It didn't happen on the timetable that many had been expecting. But as David wrote, many people "lose a proper sense of the complexity and historical contingency of the processes involved in technological change and the entanglement of the latter with economic, social, political, and legal transformations. There is no automaticity in the implementation of a new technological paradigm, such as that which we presently discern is emerging from the confluence of advances in computer and communications technologies."

In my own mind, examples like the slow spread of the tractor and electrification suggest the possibility that we may be only a moderate portion of the way through the social gains from the information and communications technology revolution. One of the reasons that tractors spread slowly was that the capabilities of tractors were steadily rising, which made them more attractive over time. In a much more extreme way, the power of information and computing technology continues to rise, which keeps opening new horizons of potential uses and applications. One of the reasons that electrification spread slowly is that it took time for producers to rethink and revise their processes in a fundamental way, time for the spread and power of electricity to increase, and time for the invention and spread of household appliances related to electricity. In a similar way, my sense is that many firms are still very much in the process of rethinking and revising their processes in response to the developments in information and communications technology, the capabilities of that technology (like faster wireless speeds and computational power) continue to evolve, and the range of new household products using that technology (in areas from automated homes to entertainment to driverless cars and robotics) continues to expand.

Ultimately, of course, many of us are a little schizophrenic about the future of technological change. Some days we worry that technological change will be too slow, and that as a result the U.S. economy is headed for a future of slow growth and a stagnant standard of living. Other days we worry that technological change will be so rapid as to lead to massive disruption of jobs and workplaces across the economy. It is unlikely that both of these fears will come true! On my optimistic days, I hope that a flexible society and economy can find ways to adapt to an ongoing pattern of robust technological change and economic growth.

Thursday, April 17, 2014

What Happened to the Great Moderation?

In the 1990s and into the early years of the 2000s, it was common to hear economists speak of a "Great Moderation" in the U.S. economy. After the economic convulsions of the 1970s and early 1980s, in particular, the path of the U.S. economy seemed to have smoothed. To be sure, there was an 8-month recession in 1990-91, and another 8-month recession in 2001. But both recessions were fairly mild: unemployment topped out at 7.8% in the aftermath of the 1990-91 recession, and reached only 6.3% in the aftermath of the 2001 recession. And recessions seemed scarcer: the average length of an economic upswing since World War II has been 58 months, but the upswing before the 1990-91 recession lasted 92 months, and the upswing before the 2001 recession lasted 120 months.

Of course, after 2007 when the Great Recession had crashed the party, talk of a Great Moderation seemed disconnected from reality. Jason Furman, chair of President Obama's Council of Economic Advisers, has taken on the question of "Whatever Happened to the Great Moderation?" in an April 10 speech.

Furman makes the interesting point that even now, including the Great Recession and its aftereffects in the data, the level of short-term volatility in economic statistics like quarterly GDP or monthly job growth seems to be lower than it was from the 1950s to the 1970s, not only in the United States but also in other high-income countries. (Of course, "less volatile" doesn't mean "healthy growth rate.")

Peering into the inner workings of the US economy, Steven J. Davis and James A. Kahn provided an overview of the evidence in the Fall 2008 issue of the Journal of Economic Perspectives in "Interpreting the Great Moderation: Changes in the Volatility of Economic Activity at the Macro and Micro Levels." (The article, like all articles in JEP, is freely available on-line courtesy of the American Economic Association. Full disclosure: I've been Managing Editor of the journal since its inception in 1987.) They find that the drop in short-term volatility of GDP can largely be traced to a drop in the volatility of production of durable goods. The volatility of production of nondurable goods fell only a little, and production of services was never that volatile to begin with. Volatility of inventories declined substantially, too.

Furman points out an intriguing pattern here: "From 1960 to 1984, inventories were quite volatile, and were also procyclical, meaning that when sales increased, inventories also increased, further contributing to the volatility of production. During the post-1984 Great Moderation period, inventory investment itself became much less volatile, and the previous relationship between inventories and sales reversed, so that the two became negatively correlated. Focusing specifically on durable goods, the change in the covariance between inventories and sales accounts for nearly half of the decline in the variance in durable goods output. However, including the Great Recession, it appears that the relationship between output, sales and inventories partially reverted to the pre-Great Moderation pattern. The covariance of inventories and sales turned positive again, suggesting that improved inventory management was not enough to cushion the massive blow of the Great Recession, and in fact exacerbated it." Furman is careful to note that the argument that inventories have become procyclical is based on only a few years of data.  But if the pattern continues, it will need exploring and explaining.

Another pattern here is that consumption patterns have continued to show less short-term volatility, even through the Great Recession. Furman writes: "Disaggregating the GDP data, the reduced volatility of consumption is one of the major sources of the Great Moderation—and this reduced volatility has continued to hold up during and after the Great Recession, especially in consumer durables. The continued stability in consumption stands in contrast to other components of GDP like business fixed investment, which became less volatile during the initial Great Moderation but has since at least partially reverted to its earlier volatility."

Improvements in macroeconomic policy offer another potential explanation for the Great Moderation: that is, monetary policy was less disruptive after the mid-1980s than it had been in, say, the 1970s, and the use of fiscal policy to stimulate the economy during downturns arguably became more purposeful and effective. Indeed, as Furman points out, one can make a case that monetary and fiscal policies helped to prevent the Great Recession from being even greater (citations omitted here, and throughout):

"Improvements in monetary and fiscal policy have likely contributed to the patterns in the high-frequency data originally identified as the Great Moderation, although one could debate the share of the credit they deserve. I believe policy steps have also played a critical role at lower frequencies as well, with the best example being the Great Recession itself, which in many ways started off looking like it could be as bad or worse than the Great Depression. To appreciate this point, consider that the plunge in stock prices in late 2008 proved similar to what occurred in late 1929, but was compounded by sharper home price declines, ultimately leading to a drop in overall household wealth that was substantially greater than the loss in wealth at the outset of the Great Recession. . . .Moreover, Alan Greenspan (2013) has argued that short-term credit markets froze more severely in 2008 than in 1929, and to find a comparable episode in this regard one has to go back to the panic of 1907. However, in large part because of an aggressive policy response, the unemployment rate increased 5 percentage points, compared to a more than 20 percentage point increase in the Great Depression from 1929 to 1934. And real GDP per working age population returned to its pre-recession peak more quickly in the United States than in other countries that also experienced systemic crises in 2007-08."

The pattern that emerges from Furman's discussion is that the Great Moderation was quite real as measured by smaller short-term fluctuations in GDP, employment, consumption, production of durable goods, and inventories. Even more surprisingly, many of these factors (although not inventories) have continued to show lower short-term volatility in the aftermath of the Great Recession. But of course, this lower level of short-term quarter-to-quarter or month-to-month economic fluctuations did not protect the economy from the enormous economic blow of the Great Recession, which lasted 18 months, spiked the unemployment rate from under 5% in mid-2007 to 10% in October 2009, and has since been followed by years of frustratingly sluggish recovery (without a lot of short-term volatility).

One possible interpretation here is that the Great Moderation is real, and the Great Recession was a sort of perfect storm, best understood as a one-off divergence from the long-run trend. Another possible interpretation is that when short-term volatility is lower and recessions become milder and less common, firms and households become less wary of risk and more willing to take chances--which in turn creates the underlying conditions for a deeper recession. And yet another interpretation is that while the old vulnerabilities that led to the economic volatility of smokestack industries back in the 1950s and 1960s have declined, the U.S. and world economy now face some new vulnerabilities due to changes in technology, globalization, and the financial sector. In this view, the Great Recession was only a first foretaste of the kinds of disruptive interactions that can occur in this new economic configuration.


Wednesday, April 16, 2014

Demand for Sand

These are boom times for the sand industry, which is something of a mixed blessing, bringing high prices and even environmental risks. The Global Environmental Alert Service of the United Nations Environment Programme tells some of the story in a March 2014 report, "Sand, rarer than one thinks." As the report notes (citations omitted for readability): "Globally, between 47 and 59 billion tonnes of material is mined every year, of which sand and gravel, hereafter known as aggregates, account for both the largest share (from 68% to 85%) and the fastest extraction increase ..."

To get a sense of the volume here, consider this comparison: "A conservative estimate for the world consumption of aggregates exceeds 40 billion tonnes a year. This is twice the yearly amount of sediment carried by all of the rivers of the world, making humankind the largest of the planet’s transforming agent with respect to aggregates ..." Or to look at it another way, one major use of aggregates like sand and gravel is for concrete. "Thus, the world’s use of aggregates for concrete can be estimated at 25.9 billion to 29.6 billion tonnes a year for 2012 alone. This represents enough concrete to build a wall 27 metres high by 27 metres wide around the equator." Sand and gravel are also used in land reclamation, shoreline developments, road embankments, and asphalt, and by industries including glass, electronics, and aeronautics.
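
For readers who like to check such comparisons, here is a minimal back-of-the-envelope sketch in Python. It assumes roughly one tonne of aggregate per cubic metre of finished concrete and an equatorial circumference of about 40,075 km; both are my own illustrative assumptions, not figures taken from the UNEP report.

# Back-of-envelope check of the "wall around the equator" comparison.
# Assumptions (mine, not the UNEP report's): about 1 tonne of aggregate
# per cubic metre of finished concrete, and an equatorial circumference
# of roughly 40,075 km.

AGGREGATE_TONNES = (25.9e9, 29.6e9)   # estimated aggregate use for concrete, 2012
TONNES_PER_CUBIC_METRE = 1.0          # assumed conversion from tonnes to concrete volume
EQUATOR_METRES = 40_075_000           # Earth's equatorial circumference

for tonnes in AGGREGATE_TONNES:
    volume_m3 = tonnes / TONNES_PER_CUBIC_METRE
    cross_section_m2 = volume_m3 / EQUATOR_METRES
    side_m = cross_section_m2 ** 0.5
    print(f"{tonnes / 1e9:.1f} billion tonnes -> square wall about {side_m:.0f} m on a side")

# Prints sides of roughly 25 and 27 metres, in the same ballpark as the
# report's wall of 27 metres high by 27 metres wide.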

Dredging sand and gravel from oceans and rivers causes environmental disruption, which can in some cases become severe, leading to problems with erosion, greater vulnerability to storm surges, and destruction of habitat for plants and animals. "Lake Poyang, the largest freshwater lake in China, is a distinctive site for biodiversity of international importance, including a Ramsar Wetland. It is also the largest source of sand in China and, with a conservative estimate of 236 million cubic metres a year of sand extraction, may be the largest sand extraction site in the world. ... Sand mining has led to deepening and widening of the Lake Poyang channel and an increase in water discharge into the Yangtze River. This may have influenced the lowering of the lake’s water levels, which reached a historically low level in 2008 ..." (The Ramsar Convention is the nickname for the Convention on Wetlands of International Importance, which is an intergovernmental treaty for protection of key wetlands.) In general, economic growth in China has been one of the major reasons for the expansion of sand and gravel mining in the last decade.

Or to choose a more extreme case: "In some extreme cases, the mining of marine aggregates has changed international boundaries, such as through the disappearance of sand islands in Indonesia."

The qualities of sand and gravel matter for their eventual use. For example, "If the sodium is not removed from marine aggregate, a structure built with it might collapse after few decades due to corrosion of its metal structures. Most sand from deserts cannot be used for concrete and land reclaiming, as the wind erosion process forms round grains that do not bind well."

With a combination of research and development into alternative materials, along with different methods of landfill and construction, the use of sand and gravel could be reduced. Some possible alternative materials for various uses include quarry dust, incinerator ash, and recycled concrete and glass, and there may also be ways to make greater use of desert sand.

According to data from the U.S. Geological Survey, the U.S. economy used about 46 million tons of sand and gravel for industrial purposes in 2012, nearly double the amount in 2003. Over the same period, the price of sand and gravel for industrial use rose from $18.30/ton in 2003 to $52.80/ton in 2012. This industrial-grade sand has a high silicon dioxide content, and a large portion of the run-up in demand comes from its use in hydraulic fracturing, which now accounts for about 62% of the industrial sand consumed in the U.S.
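
That price increase works out to a compound annual growth rate of roughly 12-13% per year. Here is the arithmetic as a minimal Python sketch, using only the USGS figures quoted above.

# Implied annual growth in the price of industrial sand and gravel,
# from the USGS figures cited above.

PRICE_2003 = 18.30   # dollars per ton
PRICE_2012 = 52.80   # dollars per ton
YEARS = 2012 - 2003

annual_growth = (PRICE_2012 / PRICE_2003) ** (1 / YEARS) - 1
print(f"Average annual price growth: {annual_growth:.1%}")   # about 12.5% per year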

Use of sand and gravel for construction purposes was much greater in the U.S economy, about 842 million tons in 2012. However, this was down from about 1,200 million tons per year during the housing and construction boom of the years leading up to the Great Recession. The USGS reports: "It is estimated that about 44% of construction sand and gravel was used as concrete aggregates; 25% for road base and coverings and road  stabilization; 13% as asphaltic concrete aggregates and other bituminous mixtures; 12% as construction fill; 1% each for concrete products, such as blocks, bricks, and pipes; plaster and gunite sands; and snow and ice control; and the remaining 3% for filtration, golf courses, railroad ballast, roofing granules, and other miscellaneous uses."

With all due apologies to the good people and productive firms working in this industry, it's a little difficult for me to imagine a more boring product than sand and gravel. As a first step toward getting out of my ivory tower and getting over this prejudice, I close here with some comments from a 1999 report by the U.S. Geological Survey, "Natural Aggregates—Foundation of America’s Future."

"Natural aggregates, which consist of crushed stone and sand and gravel, are among the most abundant natural resources and a major basic raw material used by construction, agriculture, and industries employing complex chemical and metallurgical processes. Despite the low value of the basic products, natural aggregates are a major contributor to and an indicator of the economic well-being of the Nation. Aggregates have an amazing variety of uses. Imagine our lives without roads, bridges, streets, bricks, concrete, wallboard, and roofing tiles or without paint, glass, plastics, and medicine. Every small town or big city and every road connecting them were built and are maintained with aggregates. More than 90 percent of asphalt pavements and 80 percent of concrete are aggregates. Paint, paper, plastics, and glass also require sand, gravel, or crushed stone as a constituent. When ground into powder, limestone is used as an important mineral supplement in agriculture, medicine, and household products. ... On the basis of either weight or volume, aggregates accounted for more than two-thirds of about 3.3 billion metric tons of nonfuel minerals produced in the United States in 1996."




Tuesday, April 15, 2014

When Government Pre-Fills Income Tax Returns

As Americans hit that annual April 15 deadline for filing income tax returns, they may wish to contemplate how it's done in Denmark. Since 2008, in Denmark the government sends you a tax assessment notice: that is, either the refund you can receive or the amount you owe. It includes an on-line link to a website where you can look to see how the government calculated your taxes. If the underlying information about your financial situation is incorrect, you remain responsible for correcting it. But if you are OK with the calculation, as about 80% of Danish taxpayers are, you send a confirmation note, and either send off a check or wait to receive one.

This is called a "pre-filled" tax return. As discussed in OECD report Tax Administration 2013: Comparative Information on OECD and Other Advanced and Emerging Economies: "One of the more significant developments in tax return process design and the use of technology by revenue bodies over the last decade or so concerns the emergence of systems of pre-filled tax returns for the PIT [personal income tax]."  After all, most high-income governments already have data from employers on wages paid and taxes withheld, as well as data from financial institutions on interest paid. For a considerable number of taxpayers, that's pretty much all the third-party information that's needed to calculate their taxes. The OECD reports:

"Seven revenue bodies (i.e. Chile, Denmark, Finland, Malta, New Zealand, Norway, and Sweden) provide a capability that is able to generate at year-end a fully completed tax return (or its equivalent) in electronic and/or paper form for the majority of taxpayers required to file tax returns while three bodies (i.e. Singapore, South Africa, Spain, and Turkey) achieved this outcome in 2011 for between 30-50% of their personal taxpayers. [And yes, I count four countries  in this category, not three, but so it goes.] In addition to the countries mentioned, substantial use of pre-filling to partially complete tax returns was reported by seven other revenue bodies -- Australia, Estonia, France, Hong Kong, Iceland, Italy, Lithuania, and Portugal. [And yes, I count eight countries in this category, not seven, but so it goes.] Overall, almost half of surveyed revenue bodies reported some use of  prefilling ..."

For the United States, the OECD report notes that in 2011, zero percent of returns were pre-filled. Could pre-filling work in the U.S.? Austan Goolsbee provided a detailed proposal for how prefilling might work for the United States in a July 2006 paper, "The Simple Return: Reducing America's Tax Burden Through Return-Free Filing." He wrote:

"Around two-thirds of taxpayers take only the standard deduction and do not itemize. Frequently, all of their income is solely from wages from one employer and interest income from one bank. For almost all of these people, the IRS already receives information about each of their sources of income directly from their employers and banks. The IRS then asks these same people to spend time gathering documents and filling out tax forms, or to spend money paying tax preparers to do it. In essence, these taxpayers are just copying into a tax return information that the IRS already receives independently. The Simple Return would have the IRS take the information about income directly from the employers and banks and, if the person's tax status were simple enough, send that taxpayer a return prefilled with the information. The program would be voluntary. Anyone who preferred to fill out his own tax form, or to pay a tax preparer to do it, would just throw the Simple Return away and file his taxes the way he does now. For the millions of taxpayers who could use the Simple Return, however, filing a tax return would entail nothing more than checking the numbers, signing the return, and then either sending a check or getting a refund. ... The Simple Return might apply to as many as 40 percent of Americans, for whom it could save up to 225 million hours of time and more than $2 billion a year in tax preparation fees. Converting the time savings into a monetary value by multiplying the hours saved by the wage rates of typical taxpayers, the Simple Return system would be the equivalent of reducing the tax burden for this group by about $44 billion over ten years."

Most of this benefit would flow to those with lower income levels. The IRS would save money, too, from not having to deal with as many incomplete, erroneous, or nonexistent forms.
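
As a rough reconstruction of that arithmetic, here is a minimal Python sketch. The hours and fees come from the figures Goolsbee cites; the hourly wage is my own illustrative assumption, since the paper values time using the wage rates of the taxpayers who would actually be eligible.

# Rough reconstruction of the Simple Return savings arithmetic.

HOURS_SAVED_PER_YEAR = 225e6     # filing hours saved per year (Goolsbee's estimate)
PREP_FEES_PER_YEAR = 2e9         # tax preparation fees saved per year (Goolsbee's estimate)
ASSUMED_WAGE = 10.0              # dollars per hour; my illustrative assumption
YEARS = 10

time_value = HOURS_SAVED_PER_YEAR * ASSUMED_WAGE
annual_savings = time_value + PREP_FEES_PER_YEAR
print(f"Annual savings: about ${annual_savings / 1e9:.1f} billion")
print(f"Ten-year savings: about ${annual_savings * YEARS / 1e9:.0f} billion")

# With a wage near $10 an hour, the ten-year total lands in the
# neighborhood of the paper's figure of roughly $44 billion.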

For the U.S., the main practical difficulty that prevents a move to pre-filling is that, with present arrangements, the IRS doesn't get the information about wages and interest payments from the previous year quickly enough to prefill income tax forms, send them out, and get answers back from people by the traditional April 15 timeline. The 2013 report of the National Taxpayer Advocate has some discussion related to these issues in Section 5 of Volume 2. The report does not recommend that the IRS develop pre-filled returns. But it does advocate the expansion of "upfront matching," which means that the IRS should develop a capability to tell taxpayers in advance, before they file their return, what third parties are reporting to the IRS about their wages, interest, and even matters like mortgage interest or state and local taxes paid. If taxpayers could use this information when filling out their taxes in the first place, then at a minimum, the number of errors in tax returns could be substantially reduced. And for those with the simplest kinds of tax returns, the cost and paperwork burden of doing their taxes could shrink considerably.






Saturday, April 12, 2014

How Milton Friedman Helped Invent Income Tax Withholding

In one of the great ironies, the economist Milton Friedman--known for his pro-market, limited-government views--helped to invent government withholding of the income tax. It happened early in his career, when he was working at the U.S. Treasury during World War II. Of course, the IRS opposed the idea at the time as impractical. Friedman summarized the story in a 1995 interview with Brian Doherty published in Reason magazine. Here it is:

"I was an employee at the Treasury Department. We were in a wartime situation. How do you raise the enormous amount of taxes you need for wartime? We were all in favor of cutting inflation. I wasn't as sophisticated about how to do it then as I would be now, but there's no doubt that one of the ways to avoid inflation was to finance as large a fraction of current spending with tax money as possible.
In World War I, a very small fraction of the total war expenditure was financed by taxes, so we had a doubling of prices during the war and after the war. At the outbreak of World War II, the Treasury was determined not to make the same mistake again.
You could not do that during wartime or peacetime without withholding. And so people at the Treasury tax research department, where I was working, investigated various methods of withholding. I was one of the small technical group that worked on developing it.
One of the major opponents of the idea was the IRS. Because every organization knows that the only way you can do anything is the way they've always been doing it. This was something new, and they kept telling us how impossible it was. It was a very interesting and very challenging intellectual task. I played a significant role, no question about it, in introducing withholding. I think it's a great mistake for peacetime, but in 1941-43, all of us were concentrating on the war.
I have no apologies for it, but I really wish we hadn't found it necessary and I wish there were some way of abolishing withholding now."

Friday, April 11, 2014

Is the IRS Unravelling?

The central function of the Internal Revenue Service is intrinsically difficult. In 2013, the IRS processed 146 million individual income tax returns; 2.2 million corporate income tax forms and, overall, about 10.5 million business tax returns (including C corporations, S corporations, and partnerships); nearly 30 million employment tax forms (on which employers, including the self-employed, report the income paid to employees and taxes withheld); 1.2 million excise tax forms from the businesses required to collect federal excise taxes on cigarettes, alcohol, and gasoline; and about 275,000 gift or estate tax forms. All of this needs to happen in a mobile and evolving U.S. economy, against the backdrop of an ever-more globalized world economy, and with a tax code that approaches 74,000 pages in length and changes every year.

Just in case this isn't enough, we have been loading up the IRS with additional major tasks, too. For example, a number of major income transfer programs like the Earned Income Tax Credit and the Child Tax Credit are now administered through refundable tax credits--that is, payments from the IRS to eligible families. U.S. policymakers count on the progressivity of the U.S. income tax--higher marginal tax rates on those with higher incomes--as a way to limit the rise in income inequality. The U.S. Department of Education often uses the IRS to withhold tax refunds as a way of collecting on overdue student loans. The Patient Protection and Affordable Care Act was written to require that those who did not have health insurance would make a payment to the IRS, along with many other provisions affecting various business and investment taxes. The IRS is at the center of campaign finance issues because of the decisions it makes (and how it makes those decisions) on which groups are eligible for certain kinds of tax-free status.

And while this is happening, funding and personnel levels at the IRS have been cut in the last few years. This combination of higher responsibilities and lower resources was a main focus of Nina E. Olson, who holds the position of National Taxpayer Advocate, in the 2013 annual report of her office to Congress. The report goes into detail on many concerns, but for me, the heart of the report is that, on one hand, the IRS should follow a "Taxpayer Bill of Rights" which would enshrine the assumptions under which the tax collectors operate, and on the other hand, the IRS needs to be funded and supported if it is going to operate to the desired standard.

Here's a list of what Olson would propose including in a Taxpayer Bill of Rights. She writes: "At their core, taxpayer rights are human rights. They are about our inherent humanity. Particularly when an organization is large, as is the IRS, and has power, as does the IRS, these rights serve as a bulwark against the organization’s tendency to arrange things in ways that are convenient for itself, but actually dehumanize us. Taxpayer rights, then, help ensure that taxpayers are treated in a humane manner."

Olson writes of difficulties the IRS faced in 2013, including the problems arising from the 16-day government shutdown as well as when the IRS found "itself mired in a scandal relating to tax-exempt organizations, resulting in the resignation or retirement of the acting Commissioner and other members of the IRS senior leadership." But she argues: "[A]ll of these short-term crises mask the major problem facing the IRS today — unstable and chronic underfunding that puts at risk the IRS’s ability to meet its current responsibilities, much less articulate and achieve the necessary transformation to an effective, modern tax agency. ... [W]ithout adequate funding, the IRS will fail at its mission."

Here are some statistics from inside the IRS to back up Olson's judgment. "Since fiscal year (FY) 2010, the IRS budget has been cut by nearly eight percent. Over the same period, inflation has risen by about six percent, further eroding the IRS’s resources. ... The IRS workforce has been reduced from nearly 95,000 full-time equivalent employees in FY 2010 to about 87,000 in FY 2013, a decrease of eight percent."

Between the increase in tasks and the drop in resources, standards of service from the IRS are slipping. Here's a figure showing the record on phone service. The blue line shows what share of calls reach a Customer Service Representative: it was 87% back in 2004, but is now 61%. For those who get through, the average waiting time is up from about 2 minutes in 2004 to 17 minutes.

What about answering the mail? "When the IRS sends a taxpayer a notice proposing to increase his or her tax liability, it typically gives the taxpayer an opportunity to present an explanation or documentation supporting the position taken on the return. Each year, the IRS typically receives around ten million taxpayer responses, known collectively as the “adjustments inventory.” The IRS has established timeframes for processing taxpayer correspondence, generally 45 days. During the final week of FY 2004, the IRS failed to process 12 percent of its adjustments correspondence within its timeframes. During the final week of FY 2013, the IRS was unable to process 53 percent of its adjustments correspondence within the timeframes. As a corollary, the number of pending pieces of adjustments correspondence in open inventory increased as well. At the end of FY 2004, open inventory stood at about 348,000 letters. At the end of FY 2013, it consisted of about 1.1 million letters."

What about ongoing training for IRS employees so that they can understand all the changes in tax law? As part of the budget cuts, the training budget has been cut, too. "Per-employee spending dropped from nearly $1,450 per full time equivalent employee in 2009 to less than $250 in 2013. Most of the IRS operating divisions that interact directly with taxpayers fared worse than the agency as a whole."


Other issues abound. "[T]he IRS has abandoned return preparation in its walk-in sites, which was already limited to the most vulnerable populations of taxpayers — the elderly, the disabled, and the low income. It also has shut down tax law assistance on the phones after April 15, and has significantly limited the scope of questions it is willing to answer during the filing season. Thus, in the United States today, tax preparation and filing assistance is now, for the most part, privatized. That is, for a taxpayer to comply with his or her requirement to file a tax return, the taxpayer generally must pay for assistance, pay for software, and pay for advice. This is an unprecedented change in tax administration and it is not a good one. It is particularly devastating when one considers that over 50 percent of prepared individual returns are completed by unenrolled return preparers — the very preparers the IRS is now hamstrung over regulating because of pending litigation in the federal courts. So while we hash out this issue in the courts, millions of taxpayers are exposed to the risk of incompetent and even fraudulent return preparers."

Of course, no one weeps over budget cuts at the IRS. But such cuts are foolish. Those at the IRS point out that if they have more money, they can do more enforcement, and will collect greater tax revenue. While this point is probably true, it requires an extremely stunted sense of public relations to think that the citizenry will make a clarion call for more tax enforcement.

To me, the more central problem is that 98% of the revenue collected by the IRS comes from voluntary efforts by citizens, and only 2% from enforcement actions. The IRS collected $2.86 trillion in revenue last year, so the 98% that comes from voluntary compliance would be about $2.8 trillion. If poor and declining service from the IRS leads to a drop in voluntary compliance of even 1%, that would be a revenue loss of $28 billion--and the total IRS budget is about $11 billion. If the voluntary compliance system were to break down more thoroughly, the costs and difficulties of rebuilding it would be enormous. Nobody needs to love the IRS, but it can only function if most people who comply voluntarily believe that it is focused on its job in an even-handed way. Both by becoming enmeshed in politics and through declining service levels, the IRS is in danger of losing that minimally necessary level of public goodwill.
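
For the record, here is that closing back-of-the-envelope comparison as a minimal Python sketch, using the figures just mentioned; the $11 billion budget is an approximation.

# The voluntary-compliance arithmetic from the paragraph above.

TOTAL_REVENUE = 2.86e12     # IRS collections last year, in dollars
VOLUNTARY_SHARE = 0.98      # share collected through voluntary compliance
IRS_BUDGET = 11e9           # approximate annual IRS budget, in dollars

voluntary_revenue = TOTAL_REVENUE * VOLUNTARY_SHARE
loss_if_compliance_slips_1pct = 0.01 * voluntary_revenue

print(f"Voluntary compliance revenue: ${voluntary_revenue / 1e12:.2f} trillion")
print(f"Loss from a 1% slip in compliance: ${loss_if_compliance_slips_1pct / 1e9:.0f} billion")
print(f"That loss relative to the IRS budget: {loss_if_compliance_slips_1pct / IRS_BUDGET:.1f}x")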