Risking development? Further thoughts on Cost Benefit Analysis for global development challenges

Introduction

In this post, I look again at the use of Cost Benefit Analysis (CBA) for ranking and prioritizing global development challenges. While it was written in the context of the ongoing debate over Bjorn Lomborg and his Copenhagen Consensus Center (CCC), it is not written as a critique of that specific approach. Rather, I am seeking to engage with the general methodological issues around development priorities, and I do so in this post with a particular focus on the issue of risk. Nonetheless, given the context and the fact that I have been reading through the CCC output, it is my clear and explicit referent for the discussion.

Incorporating Risk into CBA

All investments, including development policy initiatives, have an element of risk to them; the value of your investment, as every UK financial services advertisement states, may go down as well as up. Some investments are clearly riskier than others, but if they have the potential to yield higher returns, we might consider them worthwhile. In calculating Benefit-Cost Ratios, this needs to be taken into account. Typically, this is done by calculating expected benefits as the probability-weighted average of all outcomes.

A generic problem with this is that it assumes that the investor is risk neutral. For instance, a risk neutral investor would not differentiate at all between a $1 investment that returns $100 and a $1 investment that has a 50% chance of $200 returns and a 50% chance of zero returns. Both have an expected return of $100. A risk-averse investor, however, is more likely to plump for the guaranteed $100, while her risk-seeking counterpart will more likely go for the second investment.

Now, there are well-established ways to model risk aversion in economics that can be fed into CBA. Typically, financial returns are transformed through a utility function that is shaped according to risk preferences. We do not need the fine details of this, but a useful concept to introduce is the certainty equivalent: the guaranteed return that would make the investor indifferent between taking that sum and taking the risky investment. When facing our investment choice above, a risk averse investor might have a certainty equivalent of $80 – he would value the bet as equivalent to a certain return of $80 – and would hence refuse the bet in favour of the guaranteed $100. In terms of CBA, the benefit of the risky investment for our risk averse investor (expressed as equivalent utility) is $80, even if the expected financial returns are $100. In contrast, a risk-seeking investor might prefer the risky investment over a guaranteed return of up to, for instance, $120. In this case, the risk-seeking investor is, in effect, getting $20 worth of utility from the sheer fun of the gamble.
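As a rough sketch of how such a calculation works, here is a toy implementation using a CRRA utility function. The utility form and the risk-aversion parameter ρ are my illustrative choices, not anything drawn from the CCC papers; a more risk-averse investor would simply have a higher ρ and a lower certainty equivalent.

```python
import math

def crra_utility(w, rho):
    """CRRA utility: u(w) = w**(1 - rho) / (1 - rho); rho = 0 is risk
    neutral, larger rho means more risk averse (rho = 1 is log utility)."""
    if rho == 1:
        return math.log(w)
    return w ** (1 - rho) / (1 - rho)

def certainty_equivalent(outcomes, probs, rho):
    """The guaranteed sum that yields the same utility as the gamble."""
    eu = sum(p * crra_utility(x, rho) for x, p in zip(outcomes, probs))
    if rho == 1:
        return math.exp(eu)
    return ((1 - rho) * eu) ** (1 / (1 - rho))

# The gamble from the text: a 50% chance of $200, a 50% chance of nothing.
print(round(certainty_equivalent([200, 0], [0.5, 0.5], 0.0), 2))  # risk neutral: 100.0
print(round(certainty_equivalent([200, 0], [0.5, 0.5], 0.5), 2))  # risk averse: 50.0
```

The risk neutral investor (ρ = 0) values the gamble at its $100 expected return; the square-root-utility investor (ρ = 0.5) values it at only $50.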

The same logic, of course, applies to the cost element of the Benefit/Cost Ratio. While the cost side of an investment might be easier to predict than its benefits, it is by no means certain: the Scottish Parliament at Holyrood was originally estimated to cost between £10m and £40m in 1997; its final price tag when it was completed in 2004 was £414m. Decisions over prioritizing investments or policies should likewise factor in the uncertainty over costs, which could be weighted according to risk aversion. Hence, for instance, a project with a BCR of 4 might be preferred over a project with a BCR of 5 if they both have guaranteed benefits but the costs of the second project are considered more liable to escalate than those for the first project.

We should note that escalating costs could have one of a range of consequences, depending on the budgetary context. Escalating costs might be (reluctantly) covered in order to bring the project to completion, reducing the realized BCR compared to the predicted BCR. But in the context of a limited budget (or limited willingness to continue funding the project beyond a certain level), escalating costs may necessitate abandonment of the project altogether. Depending on the nature of the project, this might again result in a decreased realized BCR if partial implementation of the project still yielded some returns (e.g. building half the planned flood defences). But if the project is an all-or-nothing project only yielding any benefits if completed, then abandonment would result in a realized BCR of 0. With a flexible budget, the latter type of project might be more liable to the hazard of additional costs: if I have already spent $900m building an airport and it unexpectedly requires an additional $400m to complete, it may make economic sense for me to cough that up, even if a total price tag of $1,300m would have deterred me from investing in the first instance. Hence, the type of project must be factored into the risk assessment on the cost side.
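The cost-side logic can be sketched as follows. The abandonment rule and all the figures are illustrative assumptions of mine, not drawn from any actual appraisal: I simply assume that if actual costs exceed the budget cap the project is abandoned, with a divisible project then yielding benefits in proportion to the work completed and an all-or-nothing project yielding nothing.

```python
def realized_bcr(planned_benefit, actual_cost, budget_cap, all_or_nothing):
    """Realized benefit-cost ratio when costs may escalate.

    Assumed rule: if actual cost exceeds the budget cap, the project is
    abandoned.  A divisible project then yields benefits in proportion to
    the share of work completed; an all-or-nothing project yields zero.
    """
    if actual_cost <= budget_cap:
        return planned_benefit / actual_cost
    if all_or_nothing:
        return 0.0  # money spent, nothing delivered
    completed = budget_cap / actual_cost  # fraction of the work finished
    return (planned_benefit * completed) / budget_cap

# An all-or-nothing project whose costs escalate from 100 to 140 against a
# budget cap of 120 is abandoned, so its realized BCR collapses to zero:
print(realized_bcr(500, 140, 120, all_or_nothing=True))   # 0.0
# A project that stays on budget realizes its predicted BCR of 4:
print(realized_bcr(400, 100, 120, all_or_nothing=False))  # 4.0
```

So a project with a lower predicted BCR but more certain costs can indeed come out ahead once the escalation risk is realized.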

These are all well-established principles and methods in economic analysis, and could easily be factored into CBA. They are a generic challenge to CBA, but by no means a killer blow. In addition to the possibility of factoring risk-weighted utility into the CBA of any given project or investment, there are portfolio techniques that can be used to balance different investments with different risk levels, as anyone who has had financial advice on their own investments will no doubt be aware. There are, however, I think, two additional challenges concerned with risk when we use CBA for ranking development projects. The first is whether or not even expected benefit (as opposed to expected utility) is a meaningful concept when dealing with global development challenges. The second is that if we do incorporate risk evaluation – and, I will argue, we necessarily do – then who gets to decide on the risk aversion parameters? I will deal with each of these in turn.

Is Expected Benefit Meaningful in Global Development Initiatives?

Let us leave aside for the moment the issue of expected utility versus expected benefit and assume that we are simply going to evaluate and rank projects according to expected financial benefits. As we have seen, there is still a degree of risk involved in all projects, and the formula used to calculate benefits is hence the probability-weighted average of different returns. Now I want to suggest that in certain circumstances this is not a meaningful figure.

Consider the schematic distributions pictured. All three have the same average value. But while it seems intuitively reasonable in the case of the blue (normal) distribution to state that the average value is also the expected value, it seems harder to defend that in the case of the red (bimodal) distribution. Indeed, the Stanford physicist Leonard Susskind refuses to use the standard terminology of ‘expected value’ for such distributions (which are common in quantum mechanics) because while it may be the average value of the distribution, it is precisely the least expected value. A similar logic can be applied to the green distribution, where the expectation in all but a tiny fraction of cases is zero returns.

Now it could be argued that I am confusing the precise economic sense of ‘expected benefits’ with a more natural language equivalent. But this won’t do, precisely because CBA is a real world tool used to provide real world rankings and inform real world policy decisions.

In general, this turns out not to present much of a problem for CBA, because of a fundamental statistical theorem: the Central Limit Theorem. It is easiest to demonstrate with reference to the green distribution, which I will term here the ‘lottery distribution’.

If you buy a ticket in a national lottery system such as that in the UK, the chances are very high that you will win nothing at all, but in a tiny proportion of cases you will win a huge amount. That is, the probability distribution of returns to each individual ticket looks pretty much like the green distribution, except even more exaggerated. In the UK, the total payout pot from the National Lottery is stipulated as a proportion of the value of the tickets bought that week, around 45%, so the lottery has an explicitly defined BCR of 0.45.

My assertion is that this is not meaningful when applied to the individual purchase of an individual ticket. Certainly, it makes sense on an aggregate level. If multiple people buy multiple tickets then (and I really geek out on this) the Central Limit Theorem tells us not only that average returns from multiple purchases for a given individual will tend towards the underlying average (0.45) the more tickets she buys, but that the distribution of total winnings among individuals will be normally distributed (blue, in the figure) around 0.45, whatever the underlying probability distribution of returns to a given single ticket (green-ish).
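For the sceptical, the convergence can be checked with a quick simulation. The prize size and ticket counts here are arbitrary choices of mine, calibrated only so that the average return per $1 ticket is 0.45:

```python
import random
import statistics

random.seed(42)

def ticket_return(payout_rate=0.45, prize=1_000):
    """Return on a $1 ticket: almost always nothing, occasionally a large
    prize, calibrated so that the *average* return equals payout_rate."""
    return prize if random.random() < payout_rate / prize else 0.0

# A single ticket almost always returns 0 -- the 0.45 average is not an
# 'expected' outcome in any natural sense.  But average the returns of many
# buyers, each holding many tickets, and the buyer-level averages cluster
# around 0.45 (and, per the Central Limit Theorem, approximately normally).
buyer_averages = [statistics.fmean(ticket_return() for _ in range(100_000))
                  for _ in range(20)]
print(round(statistics.fmean(buyer_averages), 2))  # close to 0.45
```

The individual ticket's distribution is wildly skewed, yet the aggregate behaves exactly as the theorem says it should.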

But for an individual ticket purchase, I cannot see a meaningful interpretation of the 0.45 BCR as an expected benefit. Certainly, if we factor in individual risk preference and transform this into an expected utility (as discussed in the previous section), then we could see why a risk-seeking person might buy an individual ticket. Indeed, we would have to do this to explain why anyone buys a lottery ticket at all since the raw financial BCR is so clearly unattractive. But without turning to expected utility rather than purely financial returns, the underlying distribution – very high chance of no return and very low chance of massive return – is not meaningfully represented by the average return. For a single ticket purchase, the average benefit is not the expected benefit.

Why does this matter for development investments aimed at combatting global development challenges, particularly climate change? Such projects, it seems to me, can carry a high risk of failure but may promise good returns if they succeed. The carbon tax, for instance, requires the implementation of coordinated policies across all the different countries of the world to be effective, a massive political challenge that has invigorated game theoretical analysis of climate policies. These are lottery-ticket-type distributions, but ones that can be played only once. The clearest example, which I will discuss in more depth here, is climate engineering.

The Copenhagen Consensus and Bjorn Lomborg have consistently championed climate engineering, particularly Solar Radiation Management, as cost-effective investments for dealing with climate change. Certainly, this seems to be a very exciting area of research with potentially huge gains. But I doubt that CBA can be meaningfully applied to it.

Solar Radiation Management (SRM) is the climate engineering technique that the CCC has focused on. The scientific details of the idea do not need to detain us, which is just as well as they lie well beyond my capacity to explain. The point is this: the proposal is not to deploy SRM, because we do not yet have the capacity to deploy it. The proposal is a Research and Development project to investigate the feasibility of SRM. The authors suggest that $5 billion would fund such an R&D programme. The potential returns are staggering. The authors calculate a range of scenarios for the event that the project is successful, depending on the damage that the SRM technology itself might do, which would reduce its net benefit. Their worst case scenario, where SRM does damage equivalent to 3% of gross world product, provides a BCR of around 250; their best case scenario, where SRM does no damage, provides a BCR of 2000. They plump for a figure roughly in the middle, with a BCR of around 1000.

But, and this is a point that the authors acknowledge, these figures are conditional on the research being successful. It might not be. But the potential returns are so large that this seems to outweigh the chance of failure. In their own illustrative example, if there were only a 10% chance of success, the overall BCR for the investment would still be 100 to 1.
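The arithmetic behind that illustration is simple probability weighting. This is a toy rendering of the calculation, not the authors' actual model; the R&D cost is spent whether or not the research pays off, so only the benefit side gets weighted:

```python
def expected_bcr(p_success, bcr_if_success, bcr_if_failure=0.0):
    """Probability-weighted BCR: the R&D cost is sunk either way, so the
    benefits (and hence the ratio) are weighted by the chance of success."""
    return p_success * bcr_if_success + (1 - p_success) * bcr_if_failure

# A 10% chance of a conditional BCR of 1000 averages out to 100 to 1:
print(expected_bcr(0.10, 1000))  # 100.0
```

Whether that averaged figure means anything for a one-shot project is, of course, exactly the question at issue below.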

But would it be? Is it reasonable to average out this distribution to create an ‘expected’ return when the probability of success is low? If this were an investment that we could run multiple times, such averaging to expectation might be defensible; if we were to buy 100 investments each of which had the same distribution of likely returns, it would seem reasonable to talk about an expected return of 100 to 1. But, alas, we do not have 100 climates over which to average our likelihood of success.

Of course, as noted above, there are also portfolio techniques that could be applied to balance a risky proposition such as this with less risky but lower return investments (such as, in the climate case, climate adaptation and CO2 abatement policies). And, indeed, the authors of the climate engineering paper discuss a couple of potential ‘portfolios’ of SRM with other abatement policies. This is, indeed, one of the methodological criticisms that has been leveled against the CCC – that while the Expert Panel agree a ‘portfolio’ of investments, this is an ex post exercise based on the individual CBAs, rather than a fully risk-weighted analysis of the whole range of policy investments.

In addition, and this is a point I will return to later, even allowing for us to ‘average out’ the returns conditional on success with the likelihood of success, what is the rationale for the 10% likelihood selected here? Now, it is quite clear in the full paper that the authors are using the 10% figure illustratively: they want to make the more general point that even with a very low likelihood of success, the average/expected BCR is still very high. But in order to generate a single BCR (or range of BCRs), one would need to pick a figure (or range of figures) to calculate the ‘expected’ return on the investment.

In principle, there are various ways one could approach this without just picking a figure out of the air. One could look at the success rate of R&D projects in cognate areas. One could factor in the extent to which existing technology approaches that which is required. These are all reasonable approaches to estimating the likelihood of success in R&D in general. But, again, we are not concerned with the general, we are concerned with a specific, one-shot game: will SRM work or not?

Where have we got to? The point I have been trying to make is that while SRM seems to me (in my scientific ignorance) to be an exciting area of research as part of a portfolio of responses to climate change, it is impossible to generate a meaningful BCR for SRM on its own, especially in the absence of a risk-weighted expected utility function. The distribution of potential returns is discontinuous, with a chance of zero returns and a corresponding chance of massive returns. This in itself would be problematic for generating a single ‘expected’ return, but we don’t even have a realistic way of estimating where that discontinuity occurs.

With such a discontinuous distribution in the likely returns to SRM, the only meaningful way to calculate a BCR for comparison with other possibilities is to explicitly factor in the degree of risk aversion (or risk seeking). To do so requires the selection of a risk aversion parameter. Who is to select it, and how?

Who Decides on Risk Tolerance?

So far, I have asserted that tools exist for incorporating risk aversion into CBA and that it may be necessary to do so for global challenges where a binary outcome (success/failure) renders the average benefit less meaningful if interpreted as an expected benefit. Indeed, I have argued that the failure to explicitly incorporate risk aversion into such calculations implicitly assumes risk neutrality – that a very risky proposition with an average BCR of, say, 4 is not differentiable from a guaranteed proposition with the same BCR.

But if we are to include explicit risk functions in CBA for global development challenges, who should decide on the risk function, and how?

This issue dovetails with the issue of intergenerational equity that I discussed in my previous post because, when we are dealing with global challenges, our attitude towards risk now affects the benefits and options accruing to future generations.

Indeed, any discount rate that includes a dimension of pure time preference (over and above the expected returns of capital) can be interpreted as intergenerational risk-seeking – the further we look into the future, the more discounting reduces the variation in costs and benefits. For instance, many models of the impact of climate change on the global economy suggest that the baseline scenario (business as usual) will result in a small increase in global output for several decades (although even this is unevenly distributed globally) followed by a long decline into ever-increasing economic damage. From a risk averse perspective, we would probably take steps now to mitigate that long-term decline, even if it means sacrificing the small increase that we might enjoy over the next few decades. But factoring pure time preference into our calculation would proportionately reduce the baseline costs of climate damage and, hence, reduce the incentive for immediate mitigation.
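A toy calculation illustrates the mechanism. The damage path and discount rates here are invented purely for illustration; the point is only how the weight on the long decline shrinks as the discount rate rises:

```python
def present_value(flows, rate):
    """Present value of a stream of annual net benefits, discounted at `rate`."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# An invented 'business as usual' path: modest gains of +1 a year for 30
# years, then damages of -10 a year for 170 years.  Undiscounted, the
# damages dominate overwhelmingly.
baseline = [1.0] * 30 + [-10.0] * 170

for rate in (0.001, 0.03, 0.10):
    print(rate, round(present_value(baseline, rate), 1))
# The higher the pure-time-preference component of the discount rate, the
# smaller the weight on the long decline -- at a high enough rate the path
# can even look net-positive, weakening the case for mitigating now.
```

With near-zero discounting the path is catastrophically negative; at a high discount rate the same path turns positive, and with it the apparent urgency of mitigation evaporates.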

In a second parallel with my previous post, I would also argue that when we are faced with deciding over risk functions, the question of who decides and who is affected becomes paramount. To recapitulate that argument, I suggested that when CBA is applied to decisions that affect only the deciders themselves, the distributional consequences are less important, but when they apply to other people – and other generations – it is ethically imperative to consider distributional impacts. The same applies to risk. If a single person without any dependents decides to blow their entire income every month on lottery tickets, we may consider it insanely risk-seeking behaviour, but the consequences are borne entirely by that person, so it could be argued that there is no ethical reason to intervene. But if that person had power of attorney over a vulnerable relative and chooses – on the basis of a genuine, though risk-seeking, cost-benefit calculation – to blow that relative’s entire income on lottery tickets, we would see a much stronger ethical reason to intervene, particularly if we knew that the relative in question was, themselves, risk averse.

These are, of course, extreme examples, but they make the point clear: when there are significant risks involved in policy decisions that affect the world’s poor, both now and in future generations, we should not arrogate to ourselves the ability to decide on their risk preferences. This is important because most studies suggest that poor people are risk averse. And to reiterate the point made earlier, the refusal to incorporate risk into our analysis does not sidestep the problem, because it is an implicit acceptance of a risk neutral stance.

In this post and my previous post, I have suggested that simplistic CBAs that do not include parameters for risk aversion and inequality aversion are not an appropriate method for ranking and deciding on global development priorities. Put simply, they collapse too much important information about risk and distribution into a single figure (or relatively narrow range). Moreover, and I will blog about this later, CBAs also incorporate a whole range of other value judgments (including, for instance, the value of human life) which are lost in the process of collapsing down to a single ratio.

It might be argued that CBAs, including those undertaken for the Copenhagen Consensus, often produce a range of BC ratios, and that this caters for the kind of uncertainties and value judgments I have drawn attention to. But I do not think this is sufficient, for two reasons. Firstly, while a BCR range is more realistic than a spot estimate, I am unconvinced that the kinds of ranges produced approach anything like the actual variation that one would see if one looked at a range of plausible risk- and inequality-aversion parameters.

To give one example, the social cost of carbon is typically estimated as lying in the range of tens to hundreds of dollars per tonne. A recent sensitivity analysis, however, looked systematically at how the cost varies with different parameters for pure time preference and risk aversion (but not geographical distribution), and found a range between $0 and $120,000 per tonne. When matched with estimates for observed behaviour, their estimates converged towards $60 per tonne. When adjusted for geographical income distribution (inequality aversion), it increased again to $200 per tonne. While this study is an impressive technical achievement, however, it should be noted both that the ‘observed behaviour’ it uses is derived from macroeconomic indicators, not from the risk- and inequality-preferences of those at risk, and that it is contestable whether ‘observed behaviour’ is the best ethical estimator for risk aversion. Nonetheless, this clearly demonstrates the extent to which value judgments matter. And, of course, the social cost of carbon is only one input into Cost-Benefit calculations for climate policies; other inputs may be subject to similar sensitivity to value judgment parameters.
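The mechanism behind this kind of sensitivity can be sketched with the standard Ramsey discounting formula, in which the discount rate is pure time preference δ plus the aversion parameter η times consumption growth (η doing double duty for risk and inequality aversion in these models). All parameter values below are illustrative, not taken from the study discussed:

```python
def ramsey_rate(delta, eta, growth=0.02):
    """Ramsey discount rate: pure time preference (delta) plus the
    risk/inequality-aversion parameter (eta) times consumption growth."""
    return delta + eta * growth

def pv_of_damage(damage, years, delta, eta):
    """Present value of a future climate damage under given value judgments."""
    return damage / (1 + ramsey_rate(delta, eta)) ** years

# $1,000 of climate damage 100 years out, under three different sets of
# (purely illustrative) value judgments: the present value -- and hence the
# implied social cost of carbon -- swings by two orders of magnitude.
for delta, eta in [(0.001, 1.0), (0.015, 1.0), (0.03, 2.0)]:
    print(delta, eta, round(pv_of_damage(1000, 100, delta, eta), 1))
```

Nothing in the science changes between those three rows; only the ethical parameters do.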

So far, I have argued that simplistic CBAs are inappropriate for prioritizing development initiatives and that if they are applied in more sophisticated ways that incorporate parameters for risk- and inequality-aversion, they need to do so by engaging with those people most affected by the decisions taken. Even here, however, I remain concerned that many important value judgments get hidden in the process of collapsing CBAs into Benefit Cost Ratios. In the remainder of this post I discuss two defences that might be put forward (and I’m sure there are more): that CBAs help inform public discussion by communicating development priorities in a clear way; and that CBAs may not be perfect, but that they are better than the current modalities.

Communicating Development Priorities

If CBAs don’t work to provide a rigorous ranking of development priorities for actual assistance, is there a case to be made that they can at least inform and invigorate public understanding and debate over development and climate change?

I certainly think this is a strong defence and, indeed, is in many ways what the CCC was designed to achieve. There was never any pretense that it was anything other than a hypothetical exercise, based on an arbitrary budget. And Bjorn Lomborg certainly does a very good job of feeding its findings into the public sphere through talks, movies, newspaper columns and interviews, and TV appearances.

My concern over this aspect, however, is precisely what is lost in this communication – the value judgments, the ethical decisions (implicit or explicit) over risk and distribution – that feed into the process. To take the example I have used above, the CCC papers on climate engineering are quite technical and probably inaccessible to a lay audience, but within a more specialist realm they are clear and thoughtful reflections on the potential of SRM to combat climate change. They include BC calculations for different scenarios in the event that SRM has negative externalities in addition to its positive impact on climate change. They discuss the risk of failure. In both cases, they are unable to quantify these risks precisely because the technology is not yet developed. Their middle-case scenario, however, where SRM does some, but not extensive, damage yields a BC ratio, conditional on technological success, of 1,000. As noted above, they point out indicatively (without being prescriptive) that if there is only a ten percent chance of technological success, this would still yield an expected BCR of 100 (although I have disputed whether this is a meaningful expectation).

But when this information gets communicated by Lomborg and the CCC, all of this subtlety is lost. The popular précis of the 2012 CCC consists of an extended discussion authored by Lomborg, the presentation of the Expert Panel’s agreed rankings along with their individual rankings, and short extracts from each of the working papers. In his own discussion, Lomborg’s treatment of climate engineering is primarily an introduction to the scientific ideas behind it, followed by the assertion that ‘if climate change should suddenly get much worse… geoengineering appears to be the only technology that could quickly cool the Earth’. The individual Expert discussions are generally highly enthusiastic about it, although typically noting that it should be thought of as a ‘short-term’ or ‘portfolio’ investment to complement more risk averse measures. And the extract of the paper itself starts by presenting the BC estimate ranges as described above and then provides a detailed rebuttal of concerns over the technology.

Moreover, as the communication of the CCC spreads wider, even this nuance is lost. In 2009, on the back of a different CCC exercise that specifically focused on the climate but that included a similar paper on the same topic by the same authors, Lomborg did a round of interviews and op-eds. In the New York Times, he confidently declared that ‘the most cost-effective and technically feasible approach [to climate change] is through geoengineering’. In Foreign Policy magazine he stated that with geoengineering ‘we could actually pretty much offset all of global warming in the 21st century. The total cost of that would be about $6 billion to $7 billion in total’. And in Slate magazine after the 2012 CCC exercise, he baldly states of geoengineering that ‘each dollar spent could create roughly $1,000 of benefits in economic terms’. More broadly, the rhetoric of ‘smart policies’ and ‘smart targets’ that the CCC and Lomborg employ feeds into a sense of objective, established returns to investment and downplays the assumptions and value judgments that generate these returns.

This, then, is the concern with CBA as a means for invigorating public debate over development and climate change. Because it focuses attention on purportedly objective ratios and rankings, it can mask the value judgments that go in to generating such rankings. And, I would suggest, these are precisely the issues we should be debating in the public sphere. How much inequality and poverty are we willing to tolerate around the world? How much are we willing to sacrifice now in order to minimize an uncertain risk from climate change?

To be clear, this is not an argument against climate engineering per se. As I have said, the CCC papers on climate engineering are both fascinating and compelling. And, indeed, to some extent the proposal loses out in the CCC process because while the authors ‘bid’ for $6 billion for Research and Development, the Expert Panel chose to allocate it only $1 billion because their hypothetical pot was limited to $75 billion. Military expenditure in the US was $685 billion in 2012; I would be quite happy to dictatorially reassign six days of US military spending to fund the project in its entirety.

Is there an alternative?

The second defence of CBA for ranking global challenges is that while it is not perfect, it is an improvement on the current situation. Government policy processes are often seen as subject to capture by corporate interests, such as President Bush’s notorious Energy Bill, which critics saw as ‘the sum of all lobbies’, virtually gifting big energy companies everything they sought. Governments are also subject to citizen pressure, particularly in democracies, which is not always progressive and is often misinformed on development issues. And in the end, governments do rank and prioritize development initiatives through budgetary allocation. CBA, however imperfect, provides a clear ranking that cuts through such distortions in the policy process.

Again, I think this is a strong argument, but I am again not convinced. Certainly, the kind of scientific and economic evidence provided in the CCC papers should inform government policy, but of course it already does. Governments in developed countries have offices full of scientists and economists providing briefings and analysis of different policy options. The 700-page Stern Review on the economic impact of climate change was precisely such a report, commissioned by the UK government.

But the job of government, it seems to me, at least in democratic states, is precisely to take such inputs and to make the difficult decisions about risk, inequality, and other value judgments that are necessary for budgetary allocation. Sometimes, ethical imperatives can be seen to override benefit cost calculations, whether in response to particular emergencies or in prioritizing particular issues that are seen as ethically prior irrespective of cost, such as, it could be argued, gender equality. And, indeed, Lomborg is clear in the formal CCC publications, if not in his broader media engagement, that he sees cost-benefit analysis as only one input into government decision-making. But this, again, simply raises the question of what is lost or hidden in collapsing the various value judgments into a BCR simply to open them back up again.

Increasing public awareness and understanding of development issues is, of course, of vital importance to the policy process. As is reducing the influence of corporate lobbies. But CBA is not a substitute for these tasks.

Conclusion

I have argued here that CBA is not an appropriate method for evaluating and ranking global development challenges. Although I have focused on the treatment of risk, the overall arguments are more general, and parallel my previous post on distribution.

This is not, however, an argument against CBA entirely. CBA, it seems to me, is a very powerful tool for evaluating and ranking different investments, including policy. But the domain over which it is useful is not unlimited, and its usefulness diminishes the larger and more diverse the population affected, the longer the time horizon over which it is looking, the more critical the risks involved, and the more value judgments that need to be made to compute benefits and returns. Indeed, in such circumstances I have argued that not only does its usefulness diminish but it can become counterproductive, hiding or underplaying the vital ethical, social and political decisions that are needed in order to rank development priorities.

One thought on “Risking development? Further thoughts on Cost Benefit Analysis for global development challenges”

  1. Hi Graham

    Interesting posts on BCA. A few quick comments.

    The issue of risk attitudes in public decision making has been discussed extensively in the economics literature. Probably the seminal paper is K. J. Arrow and R. C. Lind, (1970). “Uncertainty and the Evaluation of Public Investment Decisions,” American Economic Review, 60, 364-378. They concluded that, under certain assumptions, governments should base their policy decisions on a risk-neutral decision rule, even if the affected people are risk-averse. This paper is so famous and influential it even has its own Wikipedia page. The reasoning, loosely speaking, is that governments invest in such a huge variety of different things with outcomes that are essentially uncorrelated, so that the contribution to overall risk of any one project is approximately zero due to the portfolio of investments being so diversified. Subsequently, others have argued that risk attitudes should be included to some extent, but in practice they usually aren’t. If one were to do so, I think it would be hard to get it right. It would be wrong to just apply a risk aversion parameter to calculate the certainty equivalent value of an individual project, because of the diversification of risk across many projects. (The same thing applies to individual decision makers – in principle they should evaluate risk based on a project’s impact on the riskiness of their whole wealth, not the separate riskiness of individual projects.) So even if one holds that risk-aversion should be included, I think it would be very challenging to do it in practice in a way that was theoretically defensible. Because of diversification, its overall impacts on the optimal portfolio of projects would probably not be that large anyway. The situation where Arrow and Lind’s argument really doesn’t work is where a project is “large” relative to the whole portfolio.

    You argue against the use of expected values because of the risk of unusual distributions (e.g. bimodal or highly skewed). You seem to imply that this problem would be fixed by including risk aversion, but I don’t think that’s right. Expected utility is still an expected value. It still condenses a whole distribution down to a single number. Personally I’m not worried about this issue of different distributions. The argument for using expected net benefits as the criterion is not that it gets every decision right, but that it performs well across a large portfolio of investments.
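The point that expected utility still condenses a whole distribution into one number can be shown in a few lines. This is an editor's illustration (not from the comment), again assuming a square-root utility function: a guaranteed payoff and a bimodal bust-or-boom gamble come out exactly equal.

```python
from math import sqrt

def expected_utility(dist):
    """dist: list of (probability, payoff) pairs; utility u(x) = sqrt(x)."""
    return sum(p * sqrt(x) for p, x in dist)

safe = [(1.0, 100)]             # a guaranteed 100
risky = [(0.5, 0), (0.5, 400)]  # bimodal: total bust or big win

print(expected_utility(safe))   # 10.0
print(expected_utility(risky))  # 10.0 -- radically different distributions,
                                # identical expected utility
```

Under this utility function the chooser is indifferent between a sure thing and an all-or-nothing gamble: moving from expected money to expected utility does not, by itself, preserve any information about the shape of the distribution.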

    You might be interested to know that the idea of climate policy as a lottery-style investment has currency in the economics literature. Martin Weitzman (perhaps my favourite economist) has argued that we should not use standard BCA for climate change policy because the outcomes are so unpredictable. Instead we should think of it as being like an insurance policy – a small probability of a really bad outcome that we should insure against. This seems to be essentially the same as your lottery idea.

    You emphasise the fact that climate policy is a one-shot game. However, that’s not particularly a problem for the use of expected net benefits, provided there are enough other investments in the portfolio to provide sufficient diversification of the risks. What would be more relevant would be if the investment in climate policy was large. Maybe it will be one day, but clearly not yet.

    The idea of weighting for inequality has also been around for quite a long time, although again in practice it is not often done. You might be interested in the work of Steven Schilizzi in my School at UWA. He has developed a new approach for factoring in “equity” when potential projects are being evaluated and ranked. He recognised that there are various dimensions to “equity” (including wealth inequality) and that different people put different emphasis on the various dimensions. He’s been working this year on a practical demonstration of his approach.

    You argue that, because of the concerns you raise, simple approaches to BCA should not be used to rank global development projects: their failure to capture risk and inequality means that they could do more harm than good. I don’t have any in-depth knowledge of how investment priorities for global development are set in practice, but I do for environmental projects. My observation in that realm is that the quality of existing decision processes is, in general, unbelievably bad. Most environmental organisations are very happy with their investment prioritisation process, but most of those processes have fundamental weaknesses that make them about as good as pulling projects out of a hat.

    BCA doesn’t provide a simple prescription, by any means, but it does help decision makers focus much more strongly on outcomes than typically seems to happen otherwise. It provides a platform to integrate technical, social and financial information, which otherwise doesn’t happen in any serious way. In my observation, far from BCA crowding out consideration of equity, decision makers sometimes have such a strong focus on equity that arguments for efficiency struggle to get heard. Finally, I’d note that many of the projects that actually get funded are terrible projects, which is in nobody’s interest. BCA can help to reduce the number of clunker projects that waste program resources and everybody’s time. How directly these observations carry over into the development field I don’t know, but I wouldn’t be at all surprised if they were relevant to some degree. Could be a good subject for a chat over coffee.

    Cheers
    Dave
