Friday, March 23, 2018

Contingent Valuation and the Deepwater Horizon Spill

Economists are often queasy about the idea that preferences can be measured by surveys. It's easy for someone to say that they value organic fruits and vegetables, for example, but when they go to the grocery, how do they actually spend their money?

However, in some contexts, prices are not readily available. A common example is an oil spill, like the BP Deepwater Horizon oil spill in the Gulf of Mexico in 2010, or the Exxon Valdez oil spill in Alaska back in 1989. We know that such spills impose economic costs on those who use the waters directly, like the tourism and fishing industries. But is there some additional cost for "non-use" value? Can I put a personal value on protecting the environment in a place I have never visited and am not likely to visit? There are various ways to measure these kinds of environmental damages. For example, one can include the costs of clean-up and remediation. But another method is to try to design a survey instrument that would get people to reveal the value that they place on this environmental damage, which is called a "contingent valuation" survey.

Such a survey has been completed for the BP Deepwater Horizon oil spill. Richard C. Bishop and 19 co-authors provide a quick overview in "Putting a value on injuries to natural assets: The BP oil spill" (Science, April 21, 2017, pp. 253-254). For all the details, like the actual surveys used and how they were developed, you can go to the US Department of the Interior website (go to this link, and then type "total value" into the search box).

The challenge for a contingent valuation study is that it would obviously be foolish just to walk up to people and ask: "What's your estimate of the dollar value of the damage from the BP oil spill?" If the answers are going to be plausible, they need some factual background and some context. Also, they need to suggest, albeit hypothetically, that the person answering the survey would need to pay something directly toward the cost. As Bishop et al. write:
"The study interviewed a large random sample of American adults who were told about (i) the state of the Gulf before the 2010 accident; (ii) what caused the accident; (iii) injuries to Gulf natural resources due to the spill; (iv) a proposed program for preventing a similar accident in the future; and (v) how much their household would pay in extra taxes if the program were implemented. The program can be seen as insurance, at a specified cost, that is completely effective against a specific set of future, spill-related injuries, with respondents told that another spill will take place in the next 15 years. They were then asked to vote for or against the program, which would impose a one-time tax on their household. Each respondent was randomly assigned to one of five different tax amounts: $15, $65, $135, $265, and $435 ..." 
Developing and testing the survey instrument took several years. The survey was administered to a nationally representative random sample of households by 150 trained interviewers. There were 3,646 respondents. They write: "Our results confirm that the survey findings are consistent with economic decisions and would support investing at least $17.2 billion to prevent such injuries in the future to the Gulf of Mexico’s natural resources."

One interesting feature of the survey is that it was produced in two forms: a "smaller set of injuries" version and a "larger set of injuries" version.
"To test for sensitivity to the scope of the injury, respondents were randomly assigned to different versions of the questionnaire, describing different sets of injuries and different tax amounts for the prevention program. The smaller set of injuries described the number of miles of oiled marshes, of dead birds, and of lost recreation trips that were known to have occurred early in the assessment process. The larger set included the injuries in the smaller set plus injuries to bottlenose dolphins, deep-water corals, snails, young fish, and young sea turtles that became known as later injury studies were completed  ..." 

Here's a sample of the survey results. The top panel looks at those who had the survey with the smaller set of injuries. It shows a range of how much taking steps to avoid the damage would personally (hypothetically) cost the person taking the survey. You can see that a majority were willing to pay $15, but the willingness to pay to prevent the oil spill declined as the cost went up. The willingness to pay was higher for the larger set of injuries, but at least to my eye, not a whole lot larger.
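The mechanics of this referendum-style approach can be sketched with a toy calculation: if the share voting "yes" falls as the randomly assigned tax rises, the tax level at which support crosses 50% gives a rough estimate of median willingness to pay. The vote shares below are invented for illustration; they are not the study's actual results.

```python
# Tax amounts from the survey design; yes-vote shares are hypothetical.
tax_amounts = [15, 65, 135, 265, 435]
yes_shares = [0.60, 0.52, 0.45, 0.38, 0.33]

def median_wtp(taxes, shares):
    """Linearly interpolate the tax level at which the yes-share crosses 50%,
    a rough estimate of median willingness to pay in a referendum-style survey."""
    points = list(zip(taxes, shares))
    for (t0, s0), (t1, s1) in zip(points, points[1:]):
        if s0 >= 0.5 >= s1:
            # Fraction of the interval needed to fall from s0 to 0.50.
            return t0 + (s0 - 0.5) * (t1 - t0) / (s0 - s1)
    raise ValueError("yes-share never crosses 50% within the observed range")

print(median_wtp(tax_amounts, yes_shares))  # crosses 50% between $65 and $135
```

Actual contingent valuation studies fit a full statistical model to the individual votes rather than interpolating group shares, but the logic is the same: randomized prices plus yes/no votes trace out a demand curve for the environmental good.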

It should be self-evident why the contingent valuation approach is controversial. Does the careful and extensive process of constructing and carrying out the survey lead to more accurate results? Or does it in some ways shape or predetermine the results? The authors seem to take some comfort in the fact that their estimate of $17.2 billion is roughly the same as the value of the Consent Decree signed in April 2016, which called for $20.8 billion in total payments. But is it possible that the survey design was tilted toward getting an answer similar to what was likely to emerge from the legal process? And if the legal process is getting about the same result, then the contingent valuation survey method is perhaps a useful exercise--but not really necessary, either.

I'll leave it to the reader to consider these questions more deeply. For those interested in digging into the contingent valuation debates, some useful starting points might be:

The Fall 2012 issue of the Journal of Economic Perspectives had a three-paper symposium on contingent valuation with a range of views.

H. Spencer Banzhaf has just published "Constructing Markets: Environmental Economics and the Contingent Valuation Controversy," which appears in the Annual Supplement 2017 issue of the History of Political Economy (pp. 213-239). He provides a thoughtful overview of the origins and use of contingent valuation methods from the early 1960s ("estimated the economic value of outdoor recreation in the Maine woods") up to the Exxon Valdez spill in 1989.

Harro Maas and Andrej Svorenčík tell the story of how Exxon organized a group of researchers in opposition to contingent valuation methods in the aftermath of the 1989 oil spill in "`Fraught with Controversy': Organizing Expertise against Contingent Valuation," appearing in the History of Political Economy earlier in 2017 (49:2, pp. 315-345). 

Also, Daniel McFadden and Kenneth Train edited a 2017 book called Contingent Valuation of Environmental Goods, with 11 chapters on various aspects of how to do and think about contingent valuation studies. Thanks to Edward Elgar Publishing, individual chapters can be freely downloaded.

Thursday, March 22, 2018

Opioids: Brought to You by the Medical Care Industry

There's a lot of talk about the opioid crisis, but I'm not confident that most people have internalized just how awful it is. To set the stage, here are a couple of figures from the 2018 Economic Report of the President.

The dramatic rise in overdose deaths, from about 7,000-8,000 per year in the late 1990s to more than 40,000 in 2016, is of course just one reflection of a social problem that includes much more than deaths.

However, the nature of the opioid crisis is shifting. The rise in overdose deaths from 2000 up to about 2010 was mainly due to prescription drugs. The more recent rise in overdose deaths is due to heroin and synthetic opioids like fentanyl.

It seems clear that the roots of the current opioid crisis are in prescribing behavior: to be blunt about it, US health care professionals made the decisions that created this situation. The Centers for Disease Control and Prevention notes on its website: "Sales of prescription opioids in the U.S. nearly quadrupled from 1999 to 2014, but there has not been an overall change in the amount of pain Americans report. During this time period, prescription opioid overdose deaths increased similarly."

The CDC also offers a striking chart showing differences in opioid prescriptions across states. Again from the website: "In 2012, health care providers in the highest-prescribing state wrote almost 3 times as many opioid prescriptions per person as those in the lowest prescribing state. Health issues that cause people pain do not vary much from place to place, and do not explain this variability in prescribing."
[Map: Opioid prescriptions per 100 people in each of the fifty states plus the District of Columbia, 2012. Quartiles (prescriptions per 100 people): 52-71: HI, CA, NY, MN, NJ, AK, SD, VT, IL, WY, MA, CO; 72-82.1: NH, CT, FL, IA, NM, TX, MD, ND, WI, WA, VA, NE, MT; 82.2-95: AZ, ME, ID, DC, UT, PA, OR, RI, GA, DE, KS, NV, MO; 96-143: NC, OH, SC, MI, IN, AR, LA, MS, OK, KY, WV, TN, AL. Data from IMS, National Prescription Audit (NPA), 2012.]
But although the roots of the opioid crisis come from this rise in prescriptions, the problem of opioid abuse itself is more complex. What seems to have happened in many cases is that opioids were prescribed so freely that there was an ample supply for friends, family, and resale. Here's one more chart from the CDC, this one showing where those who abuse opioids get their drugs. Three of the categories are: given by a friend or relative for free; stolen from a friend or relative; and bought from a friend or relative.
[Chart: Source of Opioid Pain Reliever Most Recently Used, by Frequency of Past-Year Nonmedical Use]
For example, a study published in JAMA Surgery in November 2017 found that among patients who were prescribed opioids for pain relief after surgery, 67-92% ended up not using their full prescription.

This narrative of how the medical profession fueled the opioid crisis has gotten some pushback from doctors. For example, Nabarun Dasgupta, Leo Beletsky, and Daniel Ciccarone wrote "The Opioid Crisis: No Easy Fix to Its Social and Economic Determinants" in the February 2018 issue of the American Journal of Public Health (pp. 182-186). After briskly acknowledging the evidence, the paper veers into the importance of "the urgency of integrating clinical care with efforts to improve patients’ structural environment. Training health care providers in “structural competency” is promising, as we scale up partnerships that begin to address upstream structural factors such as economic opportunity, social cohesion, racial disadvantage, and life satisfaction. These do not typically figure into the mandate of health care but are fundamental to public health. As with previous drug crises and the HIV epidemic, root causes are social and structural and are intertwined with genetic, behavioral, and individual factors. It is our duty to lend credence to these root causes and to advocate social change."

Frankly, that kind of essay seems to me an attempt to sidestep the fact that the health care profession made extraordinarily poor decisions. We had root causes back in 1999. We have root causes now. It isn't the root causes that brought the opioid crisis down on us.

As another example, Sally Satel contributed an essay on "The Myth of What’s Driving the Opioid Crisis: Doctor-prescribed painkillers are not the biggest threat," to Politico (February 21, 2018). She makes a number of reasonable points. The current rise in opioid deaths is being driven by heroin and fentanyl, not prescription opioids. Only a very small percentage of those who are prescribed opioids become addicts, and many of those had previous addiction problems.

As Satel readily acknowledges:
In turn, millions of unused pills end up being scavenged from medicine chests, sold or given away by patients themselves, accumulated by dealers and then sold to new users for about $1 per milligram. As more prescribed pills are diverted, opportunities arise for nonpatients to obtain them, abuse them, get addicted to them and die. According to SAMHSA, among people who misused prescription pain relievers in 2013 and 2014, about half said that they obtained those pain relievers from a friend or relative, while only 22 percent said they received the drugs from their doctor. The rest either stole or bought pills from someone they knew, bought from a dealer or “doctor-shopped” (i.e., obtained multiple prescriptions from multiple doctors). So diversion is a serious problem, and most people who abuse or become addicted to opioid pain relievers are not the unwitting pain patients to whom they were prescribed.
But her argument is that even though it was true 5-10 years ago that three-quarters of the heroin addicts showing up at treatment centers said they had gotten their start using prescription opioids, more recent evidence is that addicts are starting with heroin and fentanyl directly. Ultimately, Satel writes:
What we need is a demand-side policy. Interventions that seek to reduce the desire to use drugs, be they painkillers or illicit opioids, deserve vastly more political will and federal funding than they have received. Two of the most necessary steps, in my view, are making better use of anti-addiction medications and building a better addiction treatment infrastructure.
This specific recommendation makes practical sense, and it sure beats a ritual invocation of "root causes," but I confess it still rubs me the wrong way. We didn't have these demand-side interventions back in 1999, either, but the number of drug overdoses was much lower. Sure, the nature of the opioid crisis has shifted in recent years. But prescription opioids are still being prescribed at triple the level of 1999. And given that the medical profession lit the flame of the current opioid crisis, it seems evasive to seek a reduced level of blame by pointing out that the wildfire has now spread to other opioids.

For a list of possible policy steps, one starting point is the President's Commission on Combating Drug Addiction and the Opioid Crisis, which published its report in November 2017. The 56 recommendations make heavy use of terms like "collaborate," "model statutes," "accountability," "model training program," "best practices," "a data-sharing hub," "community-based stakeholders," "expressly target Drug Trafficking Organizations," "national outreach plan," "incorporate quality measures," "the adoption of process, outcome, and prognostic measures of treatment services," "prioritize addiction treatment knowledge across all health disciplines," "telemedicine," "utilizing comprehensive family centered approaches," "a comprehensive review of existing research programs," "a fast-track review process for any new evidence-based technology," etc. etc. There are probably some good suggestions embedded here, like fossils sunk deeply into a hillside. Hope someone can disinter them.

Tuesday, March 20, 2018

The Distribution and Redistribution of US Income

The Congressional Budget Office has published the latest version of its occasional report on "The Distribution of Household Income, 2014" (March 2018). It's an OK place to start for a fact-based discussion of the subject. Here is one figure in particular that caught my eye.

The vertical axis of the figure is a Gini coefficient, which is a common way of summarizing the extent of inequality in a single number. A coefficient of 1 would mean that one person received all the income. A coefficient of zero would mean complete equality of incomes.
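For a concrete sense of the measure, here is a minimal sketch of computing a Gini coefficient from a list of incomes, using the standard mean-absolute-difference formula (the incomes are made-up numbers for illustration):

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x))."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_abs_diff = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return total_abs_diff / (2 * n * n * mean)

# Complete equality: everyone has the same income.
print(gini([50, 50, 50, 50]))   # 0.0
# Extreme inequality: one person has everything (approaches 1 as n grows).
print(gini([0, 0, 0, 100]))     # 0.75
```

The small-sample example shows why the coefficient only approaches 1 in the extreme case: with four people, "one person has everything" yields 0.75, and the value climbs toward 1 as the population grows.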

In this figure, the top line shows the Gini coefficient based on market income, rising over time.

The green line shows the Gini coefficient when social insurance benefits are included: Social Security, the value of Medicare benefits, unemployment insurance, and workers' compensation. Inequality is lower with such benefits taken into account, but still rising. It's worth remembering that almost all of this change is due to Social Security and Medicare, which is to say that it is a reduction in inequality because of benefits aimed at the elderly.

The dashed line then adds a reduction in inequality due to means-tested transfers. As the report notes, the largest of these programs are "Medicaid and the Children’s Health Insurance Program (measured as the average cost to the government of providing those benefits); the Supplemental Nutrition Assistance Program (formerly known as the Food Stamp program); and Supplemental Security Income." What many people think of as "welfare," which used to be called Aid to Families with Dependent Children (AFDC) but for some years now has been called Temporary Assistance for Needy Families (TANF), is included here, but it's smaller than the programs just named.

Finally, the bottom purple line also includes the reduction in inequality due to federal taxes, which here include not just income taxes, but also payroll taxes, corporate taxes, and excise taxes.

A few thoughts: 

1) As the figure shows, the reduction in inequality from programs aimed at the elderly--Social Security and Medicare--is about as large as the total reduction in inequality from means-tested spending and federal taxes combined.

2) Moreover, a large share of the reduction in inequality shown in this figure is a result of "in-kind" programs that do not put any cash in the pockets of low-income people. This is true of the health care programs, like Medicare, Medicaid, and the Children's Health Insurance Program, as well as of the food stamp program. These programs do benefit people by covering a share of health care costs or helping buy food, but they don't help to pay for other costs like rent, heat, or electricity.

3) Contrary to popular belief, federal taxes do help to reduce the level of inequality. This figure shows the average tax rate paid by those in different income groups. The calculation includes all federal taxes: income, payroll, corporate, and excise. It is the average amount paid out of total income, which includes both market income and Social Security benefits. 

4) Finally, to put some dollar values on the Gini coefficient numbers, here is the average income for each of these groups in 2014. (Remember, this includes both cash and in-kind payments from the government, and all the different federal taxes.)
Figure 8. Average Income After Transfers and Taxes, by Income Group, 2014 (dollars)

Lowest Quintile: 31,100
Second Quintile: 44,500
Middle Quintile: 62,300
Fourth Quintile: 87,700
Highest Quintile: 207,300
81st to 90th Percentiles: 120,400
91st to 95th Percentiles: 159,100
96th to 99th Percentiles: 251,500
Top 1 Percent: 1,178,600

Source: Congressional Budget Office.
(I'm a long-standing fan of CBO reports. But in the shade of this closing parenthesis, I'll add in passing that the format of this report has changed, and I think it's a change for the worse. Previous versions had more tables, where you could run your eye down columns and across rows to see patterns. This version is nearly all figures and bar charts. It's quite possible that I'm more in favor of seeing underlying numbers and tables than the average reader. And it's true that you can go to the CBO website and see the numbers behind each figure. But in this version of the report, it's harder (for me) to see some of the patterns that were compactly summarized in a few tables in previous reports, but are now spread out over figures and bar graphs on different pages.)

Monday, March 19, 2018

What if Country Bonds Were Linked to GDP Growth?

What if countries could have some built-in flexibility in repaying their debts: specifically, what if repayment was linked to whether the domestic economy was growing? The burden of debt payments would then fall in a recession, which is when a government sees tax revenues fall and social expenditures rise. Imagine, for example, how different Greece's government debt situation would have been if the country's lousy economic performance had automatically restructured its debt burden in a way that reduced current payments. Of course, the tradeoff is that when the economy is going well, debt payments are higher--but presumably also easier to bear.
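To make the mechanics concrete, here is a toy sketch of how a GDP-linked coupon might scale with growth. The formula and the numbers are illustrative assumptions of my own, not the actual design discussed in the volume:

```python
def gdp_linked_coupon(principal, base_rate, gdp_growth, base_growth=0.02):
    """Hypothetical GDP-linked coupon: the payment scales up or down with
    actual nominal GDP growth relative to a baseline growth rate.

    This is an illustrative formula, not the London Term Sheet design."""
    adjustment = (1 + gdp_growth) / (1 + base_growth)
    return principal * base_rate * adjustment

# On $1,000 of principal at a 5% base rate:
print(gdp_linked_coupon(1000, 0.05, 0.04))   # boom: coupon rises above $50
print(gdp_linked_coupon(1000, 0.05, -0.02))  # recession: coupon falls below $50
```

The point of the structure is the automatic stabilizer: payments shrink exactly when tax revenues shrink, without any renegotiation or default, at the cost of higher payments in good times.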

There have been some experiments along these lines in recent decades, and the idea is now gaining substantial interest. James Benford, Jonathan D. Ostry, and Robert Shiller have edited a 14-paper collection on Sovereign GDP-Linked Bonds: Rationale and Design (March 2018, Centre for Economic Policy Research, available with free registration here).

For a taste of the arguments, here are a few thoughts from the opening essay: "Overcoming the obstacles to adoption of GDP-linked debt," by Eduardo Borensztein, Maurice Obstfeld, and Jonathan D. Ostry.  They provide an overview of issues like: Would borrowers have to pay higher interest rates for GDP-linked borrowing? Or would the reduced risk of default counterbalance other risks? What measure of GDP would be used as part of such a debt contract? They write:
"Elevated sovereign debt levels have become a cause for concern for countries across the world. From 2007 to 2016, gross debt levels shot up in advanced economies – from 24 to 89% of GDP in Ireland, from 35 to 99% of GDP in Spain, and from 68 to 128% of GDP in Portugal, for example. The increase was generally more moderate in emerging economies, from 36 to 47% of GDP on average, but the upward trend continues. ...

"GDP-linked bonds tie the value of debt service to the evolution of GDP and thus keep it better aligned with the overall health of the economy. As public sector revenues are closely related to economic performance, linking debt service to economic growth acts as an automatic stabiliser for debt sustainability. ..  While most efforts to reform the international financial architecture over the past 15 years have aimed at facilitating defaults, for example through a sovereign debt restructuring framework (SDRM), the design of a sovereign debt structure that is less prone in the first place to defaults and their associated costs  would be a more straightforward policy initiative. GDP-linked debt is an attractive instrument for this purpose because it can ensure that debt stays in step with the growth of the economy in the long run and can create fiscal space for countercyclical policies during recessions. ...
"The first lesson is to ensure that the payout structure of the instrument reflects the state of the economy and is free from complexities or delays that can make payments stray from their link to the economic situation. To date, GDP-linked debt has been issued primarily in the context of debt restructuring operations, from the Brady bond exchanges that began in 1989 to the more recent cases of Greece and Ukraine. ...  This feature, however, gave rise to structures that were not ideal from the point of view of debt risk management. For example, some specifications provided for large payments if GDP crossed certain arbitrary thresholds or were a function of the distance to GDP from those thresholds. In addition, some payout formulas were sensitive to the exchange rate, failed to take inflation into account, or were affected by revisions of population or national account statistics. All these mechanisms resulted in payments that were  disconnected from the business cycle and the state of public finances, detracting from the value of these GDP-linked instruments for risk management (see Borensztein 2016).
"The second lesson is that the specification of the payout formula can strengthen the integrity of the instruments. GDP statistics are supplied by the sovereign, and there is no realistic alternative to this arrangement. This fact is often held up as an obstacle to wide market acceptance of the instruments. However, the misgivings seem to have been exaggerated, as under-reporting of GDP growth is not a politically attractive idea for a policymaker whose success will be judged on the strength of economic performance. ... 
"[T]he main source of reluctance regarding the use of GDP-linked debt, or insurance instruments more generally, may not stem from markets but from policymakers. Politicians tend to have relatively short horizons, and would not find debt instruments attractive that offer insurance benefits in the medium to long run but are costlier in the short run, as they include an insurance premium driven by the domestic economy’s correlation with the global business cycle. In addition, if the instruments are not well understood, they may be perceived as a bad choice if the economy does well for some time. The value of insurance may come to be appreciated only years later, when the country hits a slowdown or a recession, but by then the politician may be out of office. While this problem is not ever likely to go away completely, multilateral institutions might be able to help by providing studies on the desirability of instruments for managing country risk, and how to support their market development, in analogy to work done earlier in the millennium promoting emerging markets’ domestic-currency sovereign debt markets."
Back in 2015, the Ad Hoc London Term Sheet Working Group decided to produce a hypothetical model example of how a specific contract for a GDP-linked government bond might work, with the idea that the framework could then be adapted and applied more broadly. This volume has a short and readable overview of the results by two members of the working group, in "A Term Sheet for GDP-linked bonds," by Yannis Manuelides and Peter Crossan. I'll just add that in the introduction to the book, Robert Shiller characterizes the London Term Sheet approach in this way:
"The kind of index-linked bond described in the London Term Sheet in this volume is close to a conventional bond, in that it has a fixed maturity date and a balloon payment at the end. The complexities described in the Term-Sheet are all about inevitable details and questions, such as how the coupon payments should be calculated for a GDP-linked bond that is issued on a specific date within the quarter, when the GDP data are issued only quarterly. The term sheet is focused on a conceptually simple concept for a GDP-linked  bond, as it should be. It includes, as a special case, the even simpler concept – advocated recently by me and my Canadian colleague Mark Kamstra – of a perpetual GDP-linked bond, if one sets the time to maturity to infinity. Perpetual GDP-linked bonds are an analogue of shares in corporations, but with GDP replacing corporate earnings as a source of dividends. However, it seems there are obstacles to perpetual bonds and these obstacles might slow the acceptance of GDP-linkage. The term-sheet here gets the job done with finite maturity, shows how a GDP-linkage can be done in a direct and simple way, and should readily be seen as appealing.
"The London Term Sheet highlighted in this volume describes a bond which is simple and attractive, and the chapters in this volume that spell out other considerations and details of implementation, have the potential to reduce the human impact of risks of economic crisis, both real crises caused by changes in technology and environment, and events better described as financial crises. The time has come for sovereign GDP-linked bonds. With this volume they are ready to go."

Friday, March 16, 2018

An NCAA Financial Digression During March Madness

I'm an occasional part of the audience for college sports, both the big-time televised events like basketball's March Madness and college football bowl games, as well as sometimes going to baseball and women's volleyball and softball games here at the local University of Minnesota. I enjoy the athletes and the competition, but I try not to kid myself about the financial side.

Big-time colleges and universities do receive substantial sports-related revenues. But the typical school has sports-related expenses that eat up all of that revenue and more besides. For data, a useful starting point is the annual NCAA Research report called "Revenues and Expenses, 2004-2016," prepared by Daniel Fulks. This issue was released in 2017; the 2018 version will presumably be out in a few months.

For the uninitiated, some terminology may be useful here. The focus here is on Division I athletics, which is made up of about 350 schools that tend to have large student attendance, large participation in intercollegiate athletics, and lots of scholarships. Division I is then divided into three groups. The Football Bowl Subdivision includes the most prominent schools, whose football teams participate in bowl games at the end of the season. In the FBS group, Alabama beat Georgia 26-23 for the championship in January. The Football Championship Subdivision includes medium-level football programs. Last season, North Dakota State beat James Madison 17-13 in the championship game at this level. And the Division I schools without football programs include many well-known universities that have scholarship athletes and prominent programs in other sports: Gonzaga and Marquette are two examples.

Since 2014, the Football Bowl Subdivision has been further divided into two groups, the Autonomy Group and the Non-Autonomy Group. The Autonomy Group is the 65 schools most identified with big-time athletics. They are in the "Power Five" conferences: the Atlantic Coast Conference, Big Ten, Big 12, Pac-12, and Southeastern Conference. Under the 2014 agreement, they have autonomy to alter some rules for the group as a whole: for example, these schools offer scholarships that cover the "full cost" of attending the university, which pays the athletes a little more, and coaches are no longer (officially) allowed to take a scholarship away because a player isn't performing as hoped. The Non-Autonomy schools are allowed to follow these rule changes, but are not required to do so.

With this in mind, here are some facts from the NCAA report about the big-time Football Bowl Subdivision schools.
Net Generated Revenues. The median negative net generated revenue for the AG is $3,600,000 (i.e., the median loss for a program in the AG), which must be supplemented by the institution; for the NA is $19,900,000; and for all FBS is $14,400,000. ...
Financial Haves and Have-nots. A total of 24 programs in the AG showed positive net generated revenues (profits), with a median of $10,000,000, while the remaining 41 of the AG lost a median of $10,000,000; the 64 NA programs lost a median of $20,000,000; the total FBS loss is a median of $18,000,000. Net losses for women's programs were $14,000,000 for AG, $6,500,000 for NA, and $9,000,000 for FBS.
For the Football Championship Subdivision schools, the magnitude of the losses is smaller, but the pattern remains the same:
Net Generated Revenues. The result is a median net loss for the subdivision of $12,550,000; men's programs = $5,022,000 and women's programs = $4,089,000. These medians are up only slightly from 2015. ...
Losses per Sport: Highest losses incurred were in gymnastics and basketball for women's programs and football and basketball for the men.
And for the non-football Division I schools, where the big-time revenue sport is usually basketball, the pattern of losses continues:
Median Losses. The median net loss for the 95 schools in this subdivision was $12,595,000 for the 2016 reporting year, compared with $11,764,000 in 2015, and $5,367,000 in the 2004 base year. ... 
Programmatic Results. Five men's basketball programs reported positive net generated revenues, with a median of $1,742,000, while the remaining 90 schools reported a median negative net generated revenue of $1,573,000. The median loss for women's basketball was $1,415,000. These losses are up slightly from 2015 and more than double from 2004.

There's an ongoing dispute about whether big-time colleges and universities should pay their players. When I listen to sports-talk radio, a usual comment is along these lines: "These college athletes are making millions of dollars for their institutions. They deserve to be paid, and more than just a scholarship and some meal money." I'm sympathetic. But the economist in me always rebels against the assumption that there is a Big Rock Candy Mountain made of money just waiting to be handed out.  I want to know where the money is going to come from, and how the wages will be determined.

The median school is losing money on athletics. I know of no evidence that donations from alumni are sufficient to counterbalance these losses. So if the payment for athletes is going to come from schools, there will be a tradeoff. Should costs be cut by eliminating sports that don't generate revenue (and the scholarships for those athletes)? The NCAA Report notes that salaries are about one-third of total expenses for college sports programs, and maybe some of that money could be redistributed to student-athletes. It seems implausible that the median school is going to substantially increase its subsidies to the athletics department.

What if the money for paying students came from outside sponsors? Some decades ago, top college athletes sometimes were compensated via make-work or no-show jobs. It would be interesting to observe how a single rich alum or a group of local businesses could collaborate with a coaching staff to raise money for paying athletes--and what the athletes might need to endorse in return.

It's easy to say that student-athletes should get "more," but it's not obvious that they would or should all get the same. For example, would all student-athletes get the same pay, regardless of revenue generated by their sport? Even within a single sport, would the star players get the same pay as the backups? Would the amount of pay be the same between first-years and seniors? Would the pay be adjusted year-to-year, depending on athletic performance? Would players get bonuses for championships or big wins? 

I don't have a clear answer to the economic issues here, and so I will now turn off this portion of my brain and return to watching the games in peace. For those who want more, Allen R. Sanderson and John J. Siegfried wrote a thoughtful article, "The Case for Paying College Athletes," which appeared in the Winter 2015 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

Thursday, March 15, 2018

The Skeptical View in Favor of an Antitrust Push

Is the US economy as a whole experiencing notably less competition? Of course, pointing to a few industries where the level of competition seems to have declined (like airlines or banking) does not prove that competition as a whole has declined. In his essay, "Antitrust in a Time of Populism," Carl Shapiro offers a skeptical view on whether overall US competition has declined in a meaningful way, but combines this critique with an argument for the ways in which antitrust enforcement should be sharpened. The essay is forthcoming in the International Journal of Industrial Organization, which posted a pre-press version in late February. A non-typeset version is available at Shapiro's website.

(Full disclosure: Shapiro was my boss for a time in the late 1980s and into the 1990s as a Co-editor and then Editor of the Journal of Economic Perspectives, where I have labored in the fields as Managing Editor since 1987.)

Shapiro points to a wide array of articles and reports from prominent journalistic outlets and think tanks that claim that the US is experiencing a wave of anti-competitive behavior. He writes:
"Until quite recently, few were claiming that there has been a substantial and widespread decline in competition in the United States since 1980. And even fewer were suggesting that such a decline in competition was a major cause of the increased inequality in the United States in recent decades, or the decline in productivity growth observed over the past 20 years. Yet, somehow, over the past two years, the notion that there has been a substantial and widespread decline in competition throughout the American economy has taken root in the popular press. In some circles, this is now the conventional wisdom, the starting point for policy analysis rather than a bold hypothesis that needs to be tested. ...
"I would like to state clearly and categorically that I am looking here for systematic and widespread evidence of significant increases in concentration in well-defined markets in the United States. Nothing in this section should be taken as questioning or contradicting separate claims regarding changes in concentration in specific markets or sectors, including some markets for airline service, financial services, health care, telecommunications, and information technology. In a number of these sectors, we have far more detailed evidence of increases in concentration and/or declines in competition."
Shapiro makes a number of points about competition in markets. For example, imagine that national restaurant chains are better-positioned to take advantage of information technology and economies of scale than local producers. As a result, national restaurant chains expand and locally-owned eateries decline. A national measure of concentration will show that the big firms have a larger share of the market. But focusing purely on the competition issues, local diners may have essentially the same number of choices that they had before.

A number of the overall measures of the growth of larger firms don't show much of a rise. As one example, Shapiro points to an article in the Economist magazine that divided the US economy into 893 industries and found that the share of the four largest firms in each industry had on average risen from 26% to 32%. Set aside for a moment the issues of whether this is national or local, or whether it takes international competition into account. Most of those who study competition would say that a market where the four largest firms combine for either 26% or 32% of sales is still pretty competitive. For example, say the top four firms each have 8% of the market, for a combined 32%. Then no remaining firm can have more than 8%, so the other 68% of the market must be split among at least nine more firms, which means the market has at least a dozen competitors.
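The back-of-the-envelope bound in the paragraph above can be checked in a few lines of Python. The 32% and 8% figures are the illustrative numbers from the text, not data from the Economist study:

```python
import math

def min_total_firms(cr4_pct, cap_pct):
    """Lower bound on the number of firms in a market where the four
    largest firms hold cr4_pct percent combined and no remaining firm
    holds more than cap_pct percent."""
    remainder = 100 - cr4_pct          # market share left for the smaller firms
    return 4 + math.ceil(remainder / cap_pct)

# Top four firms at 8% each (32% combined): the remaining 68% must be
# split among firms no larger than 8%, so at least nine more firms.
print(min_total_firms(32, 8))  # -> 13
```

Even at the higher 32% concentration figure, the arithmetic implies a dozen-plus competitors, which is the sense in which such markets remain "pretty competitive."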

The most interesting evidence for a fall in competition, in Shapiro's view, involves corporate profits. Here's a figure showing corporate profits over time as a share of GDP.

And here's a figure showing the breakdown of corporate profits by industry.
Thus, there is evidence that profit levels have risen over time. In particular, they seem to have risen in the Finance & Insurance sector and in the Health Care & Social Assistance area. But as Shapiro emphasizes, antitrust law does not operate on a presupposition that "big is bad" or "profits are bad." The linchpin of US antitrust law is whether consumers are benefiting.

Thus, it is a distinct possibility that large national firms in some industries are providing lower-cost services to consumers and taking advantage of economies of scale. They earn high profits, because it's hard for small new firms without these economies of scale to compete. Shapiro writes:
"Simply saying that Amazon has grown like a weed, charges very low prices, and has driven many smaller retailers out of business is not sufficient. Where is the consumer harm? I presume that some large firms are engaging in questionable conduct, but I remain agnostic about the extent of such conduct among the giant firms in the tech sector or elsewhere. ... As an antitrust economist, my first question relating to exclusionary conduct is whether the dominant firm has engaged in conduct that departs from legitimate competition and maintains or enhances its dominance by excluding or weakening actual or potential rivals. In my experience, this type of inquiry is highly fact-intensive and may necessitate balancing procompetitive justifications for the conduct being investigated with possible exclusionary effects. ...
"This evidence leads quite naturally to the hypothesis that economies of scale are more important, in more markets, than they were 20 or 30 years ago. This could well be the result of technological progress in general, and the increasing role of information technology in particular. On this view, today’s large incumbent firms are the survivors who have managed to successfully obtain and exploit newly available economies of scale. And these large incumbent firms can persistently earn supra-normal profits if they are protected by entry barriers, i.e., if smaller firms and new entrants find it difficult and risky to make the investments and build the capabilities necessary to challenge them."
What should be done? Shapiro suggests that tougher merger and cartel enforcement, focused on particular practices and situations, makes a lot of sense. As one example, he writes:

"One promising way to tighten up on merger enforcement would be to apply tougher standards to mergers that may lessen competition in the future, even if they do not lessen competition right away. In the language of antitrust, these cases involve a loss of potential competition. One common fact pattern that can involve a loss of future competition occurs when a large incumbent firm acquires a highly capable firm operating in an adjacent space. This happens frequently in the technology sector. Prominent examples include Google’s acquisition of YouTube in 2006 and DoubleClick in 2007, Facebook’s acquisition of Instagram in 2012 and of the virtual reality firm Oculus VR in 2014, and Microsoft’s acquisition of LinkedIn in 2016.  ... Acquisitions like these can lessen future competition, even if they have no such immediate impact."
Shapiro also makes the point that a certain amount of concern about large companies mixes together a range of public concerns: worries about whether consumers are being harmed by a lack of competition are mixed together with worries about whether citizens are being harmed by big money in politics, about rising inequality of incomes and wealth, or about how locally-owned firms may suffer from an onslaught of national chain competition. He argues that these issues should be considered separately.
"I would like to emphasize that the role of antitrust in promoting competition could well be undermined if antitrust is called upon or expected to address problems not directly relating to competition. Most notably, antitrust is poorly suited to address problems associated with the excessive political power of large corporations. Let me be clear: the corrupting power of money in politics in the United States is perhaps the gravest threat facing democracy in America. But this profound threat to democracy and to equality of opportunity is far better addressed through campaign finance reform and anti-corruption rules than by antitrust. Indeed, introducing issues of political power into antitrust enforcement decisions made by the Department of Justice could dangerously politicize antitrust enforcement. Antitrust also is poorly suited to address issues of income inequality. Many other public policies are far superior for this purpose. Tax policy, government programs such as Medicaid, disability insurance, and Social Security, and a whole range of policies relating to education and training spring immediately to mind. So, while stronger antitrust enforcement will modestly help address income inequality, explicitly bringing income distribution into antitrust analysis would be unwise."

In short, where anticompetitive behavior is a problem, by all means go after it--and go after it more aggressively than the antitrust authorities have done in recent decades. But other concerns over big business need other remedies. 

Tuesday, March 13, 2018

Interview with Jean Tirole: Competition and Regulation

"Interview: Jean Tirole" appears in the most recent issue of Econ Focus from the Federal Reserve Bank of Richmond (Fourth Quarter 2017, pp. 22-27). The interlocutor is David S. Price. Here are a few comments that jumped out at me.

How did Tirole end up in the field of industrial organization?
"It was totally fortuitous. I was once in a corridor with my classmate Drew Fudenberg, who's now a professor at MIT. And one day he said, "Oh, there's this interesting field, industrial organization; you should attend some lectures." So I did. I took an industrial organization class given by Paul Joskow and Dick Schmalensee, but not for credit, and I thought the subject was very interesting indeed.
"I had to do my Ph.D. quickly. I was a civil servant in France. I was given two years to do my Ph.D. (I was granted three at the end.) It was kind of crazy."
Why big internet firms raise competition concerns
"[N]ew platforms have natural monopoly features, in that they exhibit large network externalities. I am on Facebook because you are on Facebook. I use the Google search engine or Waze because there are many people using it, so the algorithms are built on more data and predict better. Network externalities tend to create monopolies or tight oligopolies.
"So we have to take that into account. Maybe not by breaking them up, because it's hard to break up such firms: Unlike for AT&T or power companies in the past, the technology changes very fast; besides, many of the services are built on data that are common to all services. But to keep the market contestable, we must prevent the tech giants from swallowing up their future competitors; easier said than done of course ...
"Bundling practices by the tech giants are also of concern. A startup that may become an efficient competitor to such firms generally enters within a market niche; it's very hard to enter all segments at the same time. Therefore, bundling may prevent efficient entrants from entering market segments and collectively challenging the incumbent on the overall technology.
"Another issue is that most platforms offer you a best price guarantee, also called a "most favored nation" clause or a price parity clause. You as a consumer are guaranteed to get the lowest price on the platform, as required from the merchants. Sounds good, except that if all or most merchants are listed on the platform and the platform is guaranteed the lowest price, there is no incentive for you to look anywhere else; you have become a "unique" customer, and so the platform can set large fees to the merchant to get access to you. Interestingly, due to price uniformity, these fees are paid by both platform and nonplatform users — so each platform succeeds in taxing its rivals! That can sometimes be quite problematic for competition.
"Finally, there is the tricky issue of data ownership, which will be a barrier to entry in AI-driven innovation. There is a current debate between platform ownership (the current state) and the prospect of a user-centric approach. This is an underappreciated subject that economists should take up and try to make progress on."
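Tirole's price-parity point, that a uniform price spreads the platform's fee onto customers who never use the platform, can be illustrated with a toy calculation. The pricing rule and all numbers below are hypothetical assumptions for illustration, not figures from the interview:

```python
def uniform_price(unit_cost, target_margin, fee_rate, platform_share):
    """Hypothetical merchant pricing: one uniform price p is charged to
    all customers, but a fraction `platform_share` of sales pays an ad
    valorem platform fee `fee_rate`. Choose p so the merchant's average
    net receipt per sale stays at unit_cost + target_margin:
        p * (1 - platform_share * fee_rate) = unit_cost + target_margin
    """
    return (unit_cost + target_margin) / (1 - platform_share * fee_rate)

# Without a platform fee, everyone pays cost + margin.
print(round(uniform_price(10.0, 2.0, fee_rate=0.0, platform_share=0.0), 2))   # -> 12.0

# A 20% fee on the half of sales made through the platform pushes the
# single uniform price up for everyone -- off-platform buyers pay the
# higher price too, even though their purchases incur no fee.
print(round(uniform_price(10.0, 2.0, fee_rate=0.20, platform_share=0.5), 2))  # -> 13.33
```

In this toy setup the fee works like a tax on all of the merchant's customers, which is the sense in which Tirole says each platform "succeeds in taxing its rivals."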

The economics of two-sided platforms
"We get a fantastic deal from Google or credit card platforms. Their services are free to consumers. We get cashback bonuses, we get free email, Waze, YouTube, efficient search services, and so on. Of course there is a catch on the other side: the huge markups levied on merchants or advertisers. But we cannot just conclude from this observation that Google or Visa are underserving monopolies on one side and are preying against their rivals on the other side. We need to consider the market as a whole.
"We have learned also that platforms behave very differently from traditional firms. They tend to be much more protective of consumer interests, for example. Not by philanthropy, but simply because they have a relationship with the consumers and can charge more to them (or attract more of them and cash in on advertising) if they enjoy a higher consumer surplus. That's why they allow competition among applications on a platform, that's why they introduce rating systems, that's why they select out nuisance users (a merchant who wants to be on the platform usually has to satisfy various requirements that are protective of consumers). Those mechanisms — for example, asking collateral from participants to an exchange or putting the money in an escrow until the consumer is satisfied — screen the merchants. The good merchants find the cost minimal, and the bad ones are screened out.
"That's very different from what I call the "vertical model" in which, say, a patent owner just sells a license downstream to a firm and then lets the firm exercise its full monopoly power.
"I'm not saying the platform model is always a better model, but it has been growing for good reason as it's more protective of consumer interest. Incidentally, today the seven largest market caps in the world are two-sided platforms."