StreetEYE Blog

On ‘fake news’, market designs, and the fascist/libertarian nexus

Arbitrary power is most easily established on the ruins of liberty abused to licentiousness. – George Washington

To suppose that any form of government will secure liberty or happiness without any virtue in the people, is a chimerical idea. – James Madison

Reportedly, a CNN screenshot?

It seems richly ironic that the utopian, nominally libertarian visionaries of Silicon Valley, the folks of the Whole Earth Catalog and Think Different, in creating a many-to-many Internet platform where everyone can communicate with everyone via social media, have provided the tools for mass manipulation, mass surveillance, and mob rule on a mass scale.

Social media is turning into a cesspool of tar-and-pitchfork mobs, hate, false but truthy viral memes, spam, and lame celebrity and corporate shilling.

Surely we can agree that it would be a bad thing for democracy if a demagogue, through effective control of mass media, were to persuade people he won an election when he didn’t, and that people who say otherwise are unreliable and that only he can solve the country’s problems, etc., etc.?

Is that actually happening? Seems a question of degree. If you post that Clinton won the popular vote, some moron will come back to say she didn’t really because of illegals, or uncounted absentee ballots, or some other crazy argument that boils down to, “we won all the votes if you don’t count the people who voted for her, who shouldn’t count anyway.”

Which sounds like a great excuse for further disenfranchisement, convenient purges of the voter rolls, voter ID laws. “Always accuse your adversary of whatever it is you yourself are attempting.”

Which sounds like an un-American disrespect for our democracy, and an excuse for further cynicism, and decay.

Then, when people want to take common-sense steps to prevent the spread of lies, others cry media manipulation.

If someone wants to be a troll and write that Trump won the popular vote, I certainly don’t advocate censoring them or sending them to jail. If someone wants to pay Facebook to run an ad pointing to a site writing similar nonsense, that’s also fine, as long as it’s obvious it’s an ad and Facebook is not presenting it as journalism.

However, once a lot of people start sharing that nonsense as news and Facebook presents it as news, that’s a real problem. Facebook is presenting false and misleading fiction as if it were news adhering to journalism-school norms.

At that point Facebook is misleading people. Facebook should let people flag it as false. Their tools to do that are shite, protestations notwithstanding. And if something is consistently flagged as false, and someone is sharing it as news, Facebook should show a disclaimer, ‘the accuracy of this item/site has been disputed, do you want to learn more before you share?’ with stats on who and how many people are flagging it, and links to the evidence they cite. A little nudge goes a long way with most decent people. And if people still want to share it, then fine. I’m not advocating Chinese-style memory-holing of Tiananmen Square here. Just don’t present conspiracy theories, wingnut fiction, and blatantly manipulative propaganda in context as real news.

Commingling false propaganda with news diminishes the credibility of all news media, reduces the value of Facebook’s platform, and acts as malware infecting the operating system of democracy.

If you want to believe nonsense, like that Trump won the popular vote or that the moon landing was faked and it’s made of green cheese, it’s your own problem. But if everyone believes nonsense, it’s a sick society. Mark Zuckerberg and Facebook need to decide if the ad money, engagement, and sidestepping criticism are worth poisoning the well.

There is no valid argument against a minimal news filter for fraud and manipulation. Is it OK that people searching for election results get, as the top result, a fake page showing Trump won the popular vote, and that Google should just let it slide? We take it for granted that Google should care about search quality.

It’s impossible to avoid applying a minimal filter for spam and illegal or offensive content. Surely blatantly false and misleading content shouldn’t be given a pass while the iconic ‘napalm girl’ photo gets blocked.

Facebook already optimizes for engagement, they need to tune it a bit for anti-spam.

The objections boil down to this: the only truth that matters is that if Facebook were to stop presenting nonsense under the rubric of news, it would hurt one party more than another. Truth is what serves the party line and the Great Leader.

The argument that it’s a slippery slope to Stalinist media control is a fallacy. The alternative to simple common sense (not calling demonstrably false things news) is a media landscape that looks exactly like Pravda. (And I don’t think that’s a coincidence.) And if you follow the slippery-slope argument, you can never do anything to improve markets, or help a starving person, because it’s a slippery slope to socialism, communism, and dependence.

The good-faith argument that we should err on the side of freedom of speech is a good instinct. But you can’t allow lies to drown out truth, and dark, extreme forces and foreign actors to manipulate narratives so easily.

A problem I have with libertarian arguments in general is that they take free markets as given, when in fact they are extraordinarily complex institutions which depend on norms and laws enforced by government.

Before you can have a free market, you need a market design that works. And this applies to the marketplace for ideas as well as the free market for goods and services. (And it’s a key function of a news aggregator like StreetEYE.)

Otherwise, the libertarian argument boils down to, “I’ve got the money, I’ve got the power, I know how to game the system, eff you and your superior attitude and your ‘fairness’ and your market design, I’m going to impose the market design that benefits me.”

When really the libertarian argument should be: how do we create a market design, with a minimum of rules and arbitrary government intervention, that achieves the objectives of the market, where government can’t abuse its power, and where bad-faith actors and big market players can’t abuse theirs?

Which is extremely hard. And takes a complex understanding of government, markets, and the true meaning of liberty as maximum freedom for responsible actors. And willingness to do the hard work of constantly improving markets and rules, instead of throwing up hands and obstructing the people willing to do the work.

And if taken too far, the attitude that laws are always the problem, not the solution, becomes a disease that makes the republic go down the drain, instead of the cure.

Without responsible action, libertarianism sows the seeds of fascism, and the greatest communication tools ever invented become tools to spread the greatest lies ever invented, and eventually the greatest tools for repression.

Decent people should realize that a society where any lie can be the truth, isn’t a society that can lead the world, or one worth having.

Everyone lives in a bubble, and all models are overfitted

I beseech you, in the bowels of Christ, think it possible you may be mistaken. – Oliver Cromwell

Real knowledge is to know the extent of one’s ignorance. – Confucius

So, a lot of people are saying the media/elite got the election wrong because they live in a ‘bubble’.

And some are saying the pollsters and forecasters were all wrong and ‘data science’ is bullshit.

Well, this guy models, sometimes, and if you say that, you’re all as full of shit as the pundits and forecasters, and in most cases far more so.

Sure…media, analysts, Silicon Valley live in a bubble…unlike rural farmers who take time out of each day to consult a broad cross-section of Americans. (via this guy)

Sure…Nate Silver is full of BS…but an unemployed coal-miner who thinks Obama’s birth certificate is fake and Trump is going to build a wall and bring back his job is keeping it real.

The only way anyone can make sense of reality is by filtering it. We all create our own bubbles.

The only reality anyone knows is socially constructed, and subject to our own heuristics and behavioral biases.

Any sufficiently powerful model will fit past (in-sample) data better than it will predict the future (out-of-sample). The only way to prevent that is to pad the error bars for what you don’t know you don’t know, and build in a bias toward uncertainty.

What forecasters do is fight the curse of dimensionality and try to find the bias/variance sweet spot.

Fighting dimensionality means trying to explain a lot of variables with a few, finding simple patterns in complex data. The problem is, reality is incredibly messy, and the messier it is, the easier it is to find spurious patterns.

You can have an ultra-simple model, like a simple linear regression, and it may not fit the data very well, because the underlying reality is not linear, or because multiple predictors impact the response.

These models are biased in the sense that they start from an opinion about the data, a linear response to a single predictor, that is not true in reality.

So then you add variables and relax the linearity assumption, and if you do a good job your model starts to fit the in-sample data well and predicts the future a little better.

But if you add enough complexity to your model, you start to fit the quirks in your data too well, and your out-of-sample prediction gets worse. Instead of being overly wedded to the bias of your a priori model, you are overly sensitive to random variance in the data you happen to have encountered.
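To make the bias/variance story concrete, here is a toy sketch of my own (not from the original post): fit polynomials of increasing degree to noisy samples of a sine curve and compare in-sample and out-of-sample error. The curve, noise level, and degrees are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    truth = lambda x: np.sin(2 * np.pi * x)

    x_train = np.linspace(0, 1, 30)
    x_test = np.linspace(0.01, 0.99, 30)   # fresh points from the same underlying curve
    y_train = truth(x_train) + rng.normal(0, 0.3, x_train.size)
    y_test = truth(x_test) + rng.normal(0, 0.3, x_test.size)

    for degree in (1, 3, 15):
        coefs = np.polyfit(x_train, y_train, degree)   # fit on training data only
        train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

    # Typical result: degree 1 underfits (too much bias), degree 15 fits the training
    # noise and does worse out-of-sample (too much variance), degree 3 is near the sweet spot.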

The tradeoff looks like this, via Scott Fortmann-Roe:

[Chart: the bias-variance tradeoff, error vs. model complexity, via Scott Fortmann-Roe]

The data scientist is looking for that trough in the black line, the right balance between underfitting and overfitting, and trying to understand reality as well as possible to make the trough as deep as possible.

One thing data scientists do is divide data into training and test sets. You fit a model on the training set and then use the test set to measure the out-of-sample error. Now maybe you go back and try a bunch of models and see which one performs best on the test set. Well, guess what, now you’ve selected a model using the test set, so you are in that sense fitting to the test set. By regression to the mean, you are more likely to select a model that got at least a little lucky, and your future performance will not match the test set.

So now you split into 3 sets, training to fit your model, cross-validation to select and fine-tune the model, and a test set, which you never look at until you are ready to predict your out-of-sample error. That should work in theory. But in practice, after that you will at some point go back and make adjustments based on out-of-sample performance. It’s very hard to stick 100% to that principle, although most scientists do so well enough that most results are correct. (Cough…Not! And yet that’s the nature of good science.)
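In code, the three-way split described above might look something like this. This is a sketch of my own using scikit-learn; the synthetic data and the Ridge models standing in for “a bunch of models” are placeholder assumptions, not anything from the post.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_squared_error

    # Placeholder data; in practice X and y come from your problem.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    y = X @ rng.normal(size=10) + rng.normal(size=1000)

    # 60% training, 20% cross-validation, 20% test (held out until the very end).
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
    X_cv, X_test, y_cv, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

    # Fit candidate models on the training set, pick the one that does best
    # on the cross-validation set...
    best_model, best_cv_mse = None, np.inf
    for alpha in (0.01, 0.1, 1.0, 10.0):
        model = Ridge(alpha=alpha).fit(X_train, y_train)
        cv_mse = mean_squared_error(y_cv, model.predict(X_cv))
        if cv_mse < best_cv_mse:
            best_model, best_cv_mse = model, cv_mse

    # ...and only then look at the test set, once, to estimate out-of-sample error.
    print("estimated out-of-sample MSE:", mean_squared_error(y_test, best_model.predict(X_test)))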

Pollsters ask questions to determine whether respondents are likely voters. Then they predict whether they will actually vote based on past election data, and adjust sample weights accordingly. And there’s error from sampling variation, from your prediction of likely voters, from your sampling of past polls that you use to estimate the error of that prediction. It’s turtles all the way down. And if one particular type of voter is particularly excited to vote this election based on something that never happened in the past, you’re just not going to catch it. You just hope all the errors cancel out.

One of the funny concepts that I think is recent in machine learning is an emphasis on ‘worse is better.’

  • Regularization adds a penalty to large parameter estimates, on the theory that the most extreme-looking estimates got a bit lucky.
  • Dropout trains a neural network using only e.g. 50% of the neurons each iteration, so that the network develops independent paths to the correct result. Sounds nuts. Maybe it’s like shooting a basketball until you’re so tired you can’t see or feel your fingers, and it becomes automatic.
  • Random forests use an algorithm that builds a large number of decision trees which each use a randomly selected subset of predictors, and have them vote on the outcome. Kind of like a bunch of people who each see a different side of a jar of jelly beans vote on how many black beans there are. Again, sounds nuts until you see it work.
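To give a concrete taste of the last item, here is a minimal scikit-learn sketch of my own, on a synthetic dataset, comparing one carefully grown decision tree against a forest of deliberately handicapped trees:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic data with some informative features and some noise features.
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    forest = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                                    random_state=0).fit(X_train, y_train)

    # Each individual forest tree is "worse" (grown on a bootstrap sample, with only a
    # random subset of features considered per split), but the vote of many such trees
    # usually generalizes better than one carefully grown tree.
    print("single tree accuracy  :", accuracy_score(y_test, tree.predict(X_test)))
    print("random forest accuracy:", accuracy_score(y_test, forest.predict(X_test)))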

(Machine learning feels like street-fighting statistics. If it works in a well-designed out-of-sample test, use it. Throw out any opinionated model about what data looks like and where it comes from, and don’t worry about proofs or elegance.)

Nate Silver gave Trump 35% and thoroughly explained the limitations of that analysis. He said Trump was within a normal polling error. Does that make him an idiot? If his probability prediction is always perfect, he’s going to be an idiot 1 election out of 3 and a genius 2 out of 3. Others maybe not so much.

The Nate Silvers, and media, and Silicon Valley, are the guys confronting reality, creating it, with the best tools they have. Maybe they’re in a bubble, but they try to make it the most self-aware, attentive, deliberate bubble they can.

Those in the so-called ‘media/elite bubble’ get a lot of flak for both being too mainstream, and too sensitive to minority views. (i.e. both too much bias, and too much variance.)

Everyone’s models of the world are overfitted to their own experience…unless they make an intense, deliberate effort to appreciate others’ experiences…to back off a little from assuming our experience is the complete reality. If we’d been born where they were born, taught what they were taught, and lived what they live, we would live in their bubble, and believe what they believe.

I believe Trump and his followers should, for example, accept the reality of Obama’s birth certificate, and firmly reject the endorsement of the KKK. If they don’t, it seems like they are the ones who need to get out of their own bubble and tolerate others. My acknowledging and appreciating your reality cannot always extend to conforming mine to yours.

The beautiful thing about science, and markets, and democracies, is that for all their faults, they harness the potential and decision-making of all participants, and when they screw up, they eventually self-correct.

It is a mistake to try to look too far ahead. The chain of destiny can only be grasped one link at a time. – Sir Winston Churchill

(inspiration credit to @firoozye)

A politics bullet-storm / linkfest

  • The Dems didn’t exactly get crushed or demolished in the Presidential. Hillary may have won the popular vote by up to 2%.
  • Trump got the fewest popular votes of any GOP candidate since W in 2000 [edit: this was based on early returns, no longer true]. Low approval. Crushed in home states that know him well. Like Waterloo, the nearest run thing you ever saw in your life.
  • But of course Dems have gotten systematically dismantled in Congress, at the local level.
  • Bill and Hillary Clinton moved Dems to center, curtailed redistributionist rhetoric, aligned more with elites. One would think that should have reduced political polarization and instability, right?
  • Then GOP moved to the right, positioned as anti-elite. Picked up some poor whites.
  • Now you have alt-right racist BS, unstable cynicism-inducing dynamic where left is the party of establishment, right anti-elite.
  • Right positions as anti-elite while pursuing policies that in practice, as a first-order approximation, are not anti-elite at all.
  • Trump comes in and fails, Dems return to anti-elite role, restoring a more traditional left-right dynamic, with both parties now on a far more populist axis.
  • When I say ‘fails’, eventually all political movements fail, sometimes they change the world and eventually peter out, sometimes they are disasters from the get-go.
  • But frankly no one has ever been less prepared, more of a political outsider, than Trump. Reagan was a two-term governor of California, and had a team of some cronies and some heavyweights, like Jim Baker (picked up from George H.W. Bush’s team), Don Regan, and George Shultz.
  • That was the choice, the ultimate insider machine politician (who can’t even fathom why Obama didn’t handcuff Comey) against the ultimate outsider. You have chosen... poorly, IMHO.
  • You never know how a President will govern, but Trump is a real wildcard. Can he really ally with establishment Republicans and movement conservatives and cobble together an agenda that doesn’t betray his populist base? Is he really going to let Ryan and McConnell run the country, kill Obamacare, reform entitlements, privatize Medicare and lay it on him?
  • A tea-party / liberal populist coalition? Between improbable and impossible.
  • Ineffectiveness, gridlock coupled with intensifying populist rhetoric seems a distinct possibility.
  • Finding the lost Apprentice N-word tapes and pushing Trump out seems like an avenue some on both sides of the aisle will vigorously pursue…which would inflame the populists even more.
  • On the Dem side, hard to see how anyone who follows the Clintons would not be more populist, anti-establishment like Warren or Sanders.
  • It’s hard to be party of poor whites and poor blacks at the same time, racial resentment effs up normal right-left dynamics. More so in tough times than prosperity.
  • But Trump seems racist enough that Dems will pick up the decent poor whites, the ‘non-deplorables.’ When you don’t repudiate David Duke, guys who taunt reporters about sending them to the ovens, that’s pretty bad.
  • 2-party politics is kind of like the Hotelling problem, or 2 vendors on a boardwalk.
  • The beach tends to have the vanilla lovers on one side and strawberry lovers on the other.
  • The tendency will be for both vendors to position close to each other in the center.
  • Products will tend to become undifferentiated, but the one with better strawberry will be on the side with the strawberry lovers.
  • If the two vendors perversely switch places, word might spread on one side that the strawberry is better on the other side.
  • They walk the extra few steps, while maybe the vanilla lovers just go to the closer one. One vendor gets crushed.
  • Or if one just has crappier product overall, everyone goes to the other.
  • Then maybe the crappier one has to move more to the left or right, its natural side, to get any customers at all.
  • And with 3 vendors there is no stable solution. I don’t know what happens in practice, presumably 2 pair up or one goes out of business.
  • Now, I don’t identify as a liberal, I don’t like identifying as anything or joining any movements, I tend to be more of an economic realist than many liberals, but I share liberal values. I think one has to balance economic efficiency with fairness and freedom. And I’m not pleased at this election outcome.

    Trump is a promoter, not an operator. He’s all id and ego, no superego. You just can’t take the politics out of politics. Like “The Wire,” “the game is the game.” The things that breed cynicism are to some degree built into the game. It’s a divided country and the things Trump had to do to get his base more excited than Hillary’s base make it impossible to hire decent people or get anything done, or at least anything that doesn’t piss off more people than it makes happy. So Trump is already running away from his campaign at top speed.

    I’m not really a believer in the “Wall Street Trump bro relief rally.” Some of his proposed policies are highly stimulative, some are highly recessionary (rolling back globalization, tariffs etc.). Crisis could come from a variety of places and the system is fragile. If Trump doesn’t bring the yuge growth and jobs and America winning that he promised, what’s next? Will he get even more populist? If he goes down, how extreme will the true populists be who come after Trump?

    I’m hoping for the best but bracing for the worst.

    A few good links, don’t agree 100% with any of them but worth thinking about:

Safe Retirement Spending Using Certainty Equivalent Cash Flow and TensorFlow

This is not investment advice! This is a historical study/mad science experiment. It may not be applicable to you, it is a work in progress, and it may contain errors.

Certainty equivalent value is the concept of applying a discount to a stream of cash flows based on how variable or risky the stream is…like the inverse function of the risk premium.

TensorFlow is a machine learning framework that Google released in November 2015. TensorFlow is a powerful tool for finding optimal solutions to machine learning problems, like the neural networks in Google’s search platform.1

In this post we’ll use the concept of certainty equivalent cash flow to construct an optimized asset allocation and withdrawal plan for retirement using TensorFlow.

It’s an interesting problem; maybe it’s an interesting and/or original solution, and if nothing else it’s a starter code example for how one can use TensorFlow to solve an optimization problem like this.

1. The solution.

To cut to the chase, here is an estimate of the asset allocation and spending plan for a 30-year retirement that would have maximized certainty-equivalent cash flow for a somewhat risk-averse retiree, across the 59 30-year retirement cohorts starting in the years 1928-1986:

Spending paths, 30-year retirements, 1928-1986, γ = 8

The black line is the mean outcome. We also show the best case, worst case, the -1 and +1 standard deviation outcomes that should bracket ~68% of outcomes, and the spending path for each individual 30-year retirement cohort 1928-1986.

Year const_spend var_spend stocks bonds spend_mean spend_min spend_max
1 $1.803374 2.742079% 85.995595% 14.004405% 4.740133 3.604047 5.817874
2 $1.803374 2.895269% 85.919296% 14.080704% 4.954235 3.239628 6.990869
3 $1.803374 2.998764% 85.473425% 14.526575% 5.102155 3.338954 7.131929
4 $1.803374 3.056361% 85.205353% 14.794647% 5.231042 3.328030 8.398137
5 $1.803374 3.146801% 84.754303% 15.245697% 5.401859 3.355214 8.518801
6 $1.803374 3.278922% 84.731613% 15.268387% 5.640160 3.220717 9.473607
7 $1.803374 3.381219% 84.674124% 15.325876% 5.843391 3.262251 10.780489
8 $1.803374 3.519538% 83.207330% 16.792670% 6.106355 3.482782 11.021653
9 $1.803374 3.732725% 81.886883% 18.113117% 6.406250 3.268162 11.260031
10 $1.803374 3.953266% 81.886883% 18.113117% 6.743933 3.372960 13.164527
11 $1.803374 4.171134% 81.787428% 18.212572% 7.143623 3.348702 14.116656
12 $1.803374 4.420522% 81.082294% 18.917706% 7.594771 3.409277 14.611927
13 $1.803374 4.711470% 80.935846% 19.064154% 8.120312 3.246629 17.223627
14 $1.803374 4.937597% 80.915248% 19.084752% 8.577191 3.273653 18.662278
15 $1.803374 5.134573% 80.869082% 19.130918% 8.939804 3.422242 20.469916
16 $1.803374 5.421706% 79.235040% 20.764960% 9.347002 3.205772 21.213132
17 $1.803374 5.717492% 78.697519% 21.302481% 9.727362 3.317734 24.702617
18 $1.803374 6.080383% 78.566021% 21.433979% 10.143674 3.450519 27.386865
19 $1.803374 6.505818% 77.606353% 22.393647% 10.531063 3.401243 25.689757
20 $1.803374 6.976587% 77.226108% 22.773892% 10.881431 3.542791 25.921611
21 $1.803374 7.350485% 76.418702% 23.581298% 11.038810 3.749375 25.284224
22 $1.803374 7.920143% 75.957408% 24.042592% 11.355122 3.645121 23.647565
23 $1.803374 8.721912% 75.066357% 24.933643% 11.747497 3.685455 24.225980
24 $1.803374 9.574536% 74.210995% 25.789005% 11.987134 3.848685 26.258617
25 $1.803374 10.919882% 68.810212% 31.189788% 12.405500 3.660422 29.108868
26 $1.803374 12.781818% 68.212009% 31.787991% 12.913144 3.878824 28.843689
27 $1.803374 15.488311% 66.794616% 33.205384% 13.578350 3.829700 27.946354
28 $1.803374 20.138642% 66.270963% 33.729037% 14.794993 3.828073 30.237365
29 $1.803374 29.915170% 65.979618% 34.020382% 17.323086 3.581394 37.396578
30 $1.803374 100.000000% 61.499307% 38.500693% 35.762553 3.056128 85.260856

In this example, you allocate your portfolio between two assets, stocks and 10-year Treasurys. (We picked these 2, but could generalize to any set of assets.)

  • Column 1: A fixed inflation-adjusted amount you withdraw by year. In this example we start with a portfolio of $100, so each year you withdraw $1.803, or 1.803% of the starting portfolio. This amount stays the same in inflation-adjusted terms for all 30 years of retirement. (All dollar numbers in the model are constant dollars after inflation. In a real-world scenario, you would initially withdraw 1.803% of your starting portfolio and increase nominal withdrawal by the change in CPI to keep purchasing power constant.)
  • Column 2: A variable % of your portfolio you withdraw by year, which increases over time. So in year 25 you would spend $1.803 in constant dollars plus 10.92% of the current value of the portfolio.
  • Column 3: The percentage of your portfolio you allocate to stocks by year, which declines over time.
  • Column 4: The amount allocated to Treasurys, which increases over time (1 – stocks).
  • Column 5: The mean amount you would have been able to spend by year if you had followed this plan, retired in one of the years 1928-1986, and enjoyed a 30-year retirement.
  • Column 6: The worst case spending across all cohorts by year.
  • Column 7: The best case spending by year.

This is a numerical estimate of a plan that would have maximized certainty equivalent cash flow over all 30-year retirement cohorts for a moderately risk-averse retiree, under a model with a few constraints and assumptions.

To view how optimal plan estimates change under various values of γ, go here.

2. How does it work? What is certainty-equivalent cash flow (the value we are maximizing)?

Certainty-equivalent cash flow takes a variable or uncertain cash flow and applies a discount based on how risk-averse you are, and how volatile or uncertain the cash flow is.

Suppose you have a choice between a certain $12.50, and flipping a coin for either $10 or $15. Which do you choose?

People are risk averse (in most situations). So most people choose a certain cash over a risky coin-flip with the same expected value2.

Now suppose the choice is between a certain $12, and flipping the coin. Now which do you choose?

This time, on average, you have a bit more money in the long run by choosing the coin-flip. You might take the coin-flip, which is a slightly better deal, or not, depending on how risk-averse you are.

  • If you’re risk-averse, you may prefer the coin-flip (worth $12.50) at $12 or below. (You get paid on average $0.50 to flip for it.)
  • If you’re even more risk-averse, and you really like certain payoffs, the certain payoff might have to decrease further to $11 before you prefer the coin-flip worth $12.50. (You need to get paid $1.50 to flip for it.)
  • If you’re risk neutral, anything below $12.50 and you’ll take the $12.50 expected-value coin-flip. (You don’t care at $12.50, and flip every time for $0.01.)

We’ll refer to that number, at which you’re indifferent between a certain cash flow on the one hand, and a variable or uncertain cash flow on the other, as the ‘certainty equivalent’ value of the risky stream.

We will use constant relative risk aversion (CRRA). CRRA means that if you choose a certain $12 over a coin-flip for $10/$15, you will also choose a certain $12,000 over a coin-flip for $10,000/$15,000. It says your risk aversion is scale invariant. You just care about the relative values of the choices.

How do we calculate certainty-equivalent cash flow? For a series of cash flows, we calculate the average CRRA utility of the cash flows as:

U=\frac{1}{n}\sum_{i}\frac{C_i^{1-\gamma}-1}{1-\gamma}

Using the formula above, we

  • Convert each cash flow to ‘utility’, based on the retiree’s risk aversion γ (gamma)
  • Sum up the utility of all the cash flows
  • And divide by n to get the average utility per year.

Then we can convert the utility back to certainty equivalent cash flow using the inverse of the above formula:

CE = [U(1-\gamma) + 1] ^ {\frac{1}{1-\gamma}}

This formula tells us that a variable stream of cash flows Ci over n years is worth the same to us as a steady and certain value of CE each year for n years.
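In code, the round trip from a stream of cash flows to average utility and back to a certainty-equivalent cash flow is only a few lines. This is my own illustration of the two formulas above; the streams and γ values are made up:

    import numpy as np

    def crra_utility(c, gamma):
        # CRRA utility of a cash flow c (the formula above, gamma != 1)
        return (c ** (1 - gamma) - 1) / (1 - gamma)

    def certainty_equivalent(cash_flows, gamma):
        # average utility of the stream, then invert back to a cash flow
        u = np.mean(crra_utility(np.asarray(cash_flows, dtype=float), gamma))
        return (u * (1 - gamma) + 1) ** (1 / (1 - gamma))

    print(certainty_equivalent([4.0] * 30, gamma=8))        # steady stream: CE = 4.0
    print(certainty_equivalent([3.0, 5.0] * 15, gamma=8))   # same mean, volatile: ~3.3
    print(certainty_equivalent([3.0, 5.0] * 15, gamma=2))   # less risk-averse: milder discount, ~3.75
    print(certainty_equivalent([6.0, 10.0] * 15, gamma=8))  # scale everything 2x: CE also 2x, ~6.6

A steady stream is worth its face value, a volatile stream with the same mean gets discounted, the discount grows with γ, and scaling every cash flow scales the CE by the same factor.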

No need to sweat the formula too much. Here’s a plot of what CRRA utility looks like for different levels of γ.

CRRA utility vs. cash flow for selected values of γ

You can look at 1 as a reference cash flow with a utility of 0. As you get more cash flow above 1, your utility goes up less and less. As you get less cash flow below 1, your utility goes down more and more. As γ goes up, this convexity effect increases. (But recall that levels don’t change choices with CRRA, and the same can be said for any point on the curve. Trust us, or try it in Excel!)

The key points are:

  • We use a CRRA utility function to convert risky or variable cash flows to a utility, based on γ the risk aversion parameter.
  • After summing utilities, we convert utility back to cash flows using the inverse function.
  • This gives the certainty equivalent value of the cash flows, which discounts the cash flows based on their distribution.
  • γ = 0 means you’re risk neutral. There is no discount, however variable or uncertain the cash flows. The CE value equals the average of the cash flows.
  • γ = 8 means you’re fairly risk averse. There is a large discount.
  • The higher the variability of the cash flows, the greater the discount. And the higher the γ parameter, the greater the discount.
  • The discount is the same if you multiply all the cash flows by 2, or 1000, or 0.01, or x. Your risk aversion is the same at all levels of income. That property accounts for the somewhat complex formula, but it describes a risk aversion that behaves in a relatively simple way.

If we think that, to a reasonable approximation, humans are risk averse, they make consistent choices about risky outcomes, and their risk aversion is scale invariant over the range of outcomes we are studying, CE cash flow using a CRRA utility function seems like a reasonable thing to try to maximize.

In our example, we maximize certainty-equivalent cash flow for a retiree over 30 years of retirement, over the historical distribution of outcomes for the 59 30-year retirement cohorts 1928-1986. The retiree’s risk aversion parameter is 8. This is risk-averse (but not extremely so).

Maximizing CE spending means the retiree plans to spend down the entire portfolio after 30 years. Presumably the retiree knows how long he or she will need retirement income. Perhaps the retiree is 75 and 30 seems like a reasonable maximum to plan for, perhaps the retiree has an alternative to hedge longevity risk, like an insurance plan or tontine.

3. How does this work in TensorFlow?

TensorFlow is like a spreadsheet. You start with a set of constants and variables. You create a calculation that uses operations to build on the constants and variables, just like a spreadsheet. The calculation operations you define are represented by a computation graph which tracks which operations depend on which. You can tell TensorFlow to calculate any value you defined, and it will only recompute the minimum necessary operations to supply the answer. And you can program TensorFlow to optimize a function, i.e. find the variables that result in the best value for an operation.

We want to set the values for these 3 variables, in order to maximize CE cash flow:

1: Constant spending (a single value): A constant inflation-adjusted amount you withdraw each year in retirement. This is like the 4% in Bengen’s 4% rule. The inflation-adjusted value of this annual withdrawal never changes.

2: Variable spending (30 values, one for each year of retirement, i.e. a list or vector): A variable percentage of your portfolio value you withdraw each year. In contrast to the Bengen 4% rule, we’re saying that if the portfolio appreciates, you can safely withdraw an additional amount based on the current value of the portfolio. Your total spending is the sum of 1) constant spending and 2) variable spending.

3: Stock allocation (30 values, one for each year): We are going to study a portfolio with 2 assets: S&P 500 stocks and 10-year Treasurys.3

Our key constants are:

  • γ = 8 (a constant because we are not optimizing its value, unlike the variables above).
  • A portfolio starting value: 100.
  • Inflation-adjusted stock returns 1928-2015 (all numbers we use are inflation-adjusted, and we maximize inflation-adjusted cash flow).
  • Inflation-adjusted bond returns 1928-2015.

Operations:

  • Calculate 59 30-vectors, each one representing the cash flow of one 30-year retirement cohort 1928-1986, using the given constant spending, variable spending, and stock allocation.
  • Calculate the certainty equivalent cash flow of each cohort using γ.
  • Calculate the certainty equivalent cash flow over all cohorts using γ.
  • Tell TensorFlow to find the variables that result in the highest CE spending over all cohorts.

We initialize the variables to some reasonable first approximation.

TensorFlow calculates the gradient of the objective over all variables, and gradually adjusts each variable to find the best value.
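To give a flavor of the mechanics (the actual model is in the GitHub repo linked below), here is a heavily simplified sketch of my own written against the 1.x-era TensorFlow graph API. It uses one made-up cohort of random returns instead of the 59 historical cohorts, and crude clipping instead of the real constraints, so treat it as an outline of the approach rather than the model itself:

    import numpy as np
    import tensorflow as tf   # written against the TF 1.x-style graph API

    gamma, n_years = 8.0, 30
    # Made-up real returns for a single cohort; the actual model uses
    # inflation-adjusted stock and bond returns for the 59 cohorts, 1928-1986.
    rng = np.random.RandomState(0)
    stock_ret = tf.constant(rng.normal(0.07, 0.18, n_years), dtype=tf.float32)
    bond_ret = tf.constant(rng.normal(0.02, 0.07, n_years), dtype=tf.float32)

    const_spend = tf.Variable(2.0)                                         # fixed real withdrawal
    var_spend = tf.Variable(0.03 * np.ones(n_years), dtype=tf.float32)     # % of portfolio by year
    stock_alloc = tf.Variable(0.80 * np.ones(n_years), dtype=tf.float32)   # equity weight by year

    portfolio = tf.constant(100.0)
    spending = []
    for y in range(n_years):
        alloc = tf.clip_by_value(stock_alloc[y], 0.0, 1.0)                  # no shorting, no leverage
        spend = tf.maximum(const_spend + var_spend[y] * portfolio, 0.1)     # keep spending positive
        portfolio = tf.maximum(portfolio - spend, 0.1) * (
            1.0 + alloc * stock_ret[y] + (1.0 - alloc) * bond_ret[y])
        spending.append(spend)

    # Average CRRA utility of the spending path, inverted back to CE spending.
    utility = tf.reduce_mean((tf.stack(spending) ** (1.0 - gamma) - 1.0) / (1.0 - gamma))
    ce = (utility * (1.0 - gamma) + 1.0) ** (1.0 / (1.0 - gamma))

    train_op = tf.train.GradientDescentOptimizer(1e-4).minimize(-ce)        # maximize CE

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(10000):
            sess.run(train_op)
        print(sess.run([ce, const_spend]))

The real code adds the remaining cohorts, the declining-stock-allocation and non-negativity constraints, and a better optimizer, but the shape is the same: define CE spending as a graph of operations on variables, then let the optimizer push the variables uphill.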

See TensorFlow / python code on GitHub.

Below, you can click to set the value of γ and see how the solution and outcome evolve.

Year const_spend var_spend stocks bonds spend_mean spend_min spend_max
0 2.25168 2.203127 81.789336 18.210664 4.604840 3.724279 5.431478
1 2.25168 2.287617 81.584539 18.415461 4.731527 3.418391 6.254532
2 2.25168 2.349896 81.042750 18.957250 4.826188 3.513405 6.347958
3 2.25168 2.400992 80.498949 19.501051 4.932219 3.517121 7.302567
4 2.25168 2.457950 79.886688 20.113312 5.049210 3.505759 7.408146
5 2.25168 2.529678 79.489911 20.510089 5.197089 3.386326 7.941039
6 2.25168 2.606445 79.154073 20.845927 5.351562 3.420979 8.891322
7 2.25168 2.713869 78.390047 21.609953 5.557626 3.522455 9.081053
8 2.25168 2.836364 77.651215 22.348785 5.743047 3.348000 9.203257
9 2.25168 2.981631 77.651215 22.348785 5.980422 3.458124 10.540080
10 2.25168 3.137015 77.086559 22.913441 6.282104 3.425701 11.225243
11 2.25168 3.303466 76.476785 23.523215 6.610572 3.473727 11.643038
12 2.25168 3.478377 76.048041 23.951959 6.969665 3.317807 13.439483
13 2.25168 3.625880 75.627575 24.372425 7.310505 3.343283 15.059415
14 2.25168 3.770352 75.205921 24.794079 7.616833 3.459124 16.401976
15 2.25168 3.936515 74.476987 25.523013 7.908735 3.274292 17.074267
16 2.25168 4.134133 73.971955 26.028045 8.230050 3.366878 20.061546
17 2.25168 4.377164 73.565393 26.434607 8.588685 3.435252 22.118598
18 2.25168 4.646451 72.994245 27.005755 8.918598 3.395044 21.213925
19 2.25168 4.954221 72.523218 27.476782 9.251004 3.521019 21.152092
20 2.25168 5.292311 71.995758 28.004242 9.595167 3.708905 21.061695
21 2.25168 5.753135 71.535585 28.464415 10.053624 3.599248 20.643662
22 2.25168 6.370145 71.022398 28.977602 10.585032 3.637109 22.381867
23 2.25168 7.174056 70.527878 29.472122 11.207199 3.846862 23.833581
24 2.25168 8.313613 69.984383 30.015617 11.968715 3.682980 27.458563
25 2.25168 9.946128 69.518839 30.481161 12.956893 3.900901 29.879319
26 2.25168 12.422349 68.999420 31.000580 14.327986 3.908337 30.120498
27 2.25168 16.581733 68.496478 31.503522 16.463549 3.948743 35.387707
28 2.25168 24.927346 68.006998 31.993002 20.297396 3.750516 46.393183
29 2.25168 100.000000 67.503853 32.496147 54.071078 2.945589 135.483633

 

4. Comments and caveats.

The results above are just an approximation to an optimal solution, after running the optimizer for a few hours. However, I believe it’s close enough to be of interest, and in this day and age of practically unlimited computing resources, we can likely calculate this number to an arbitrary level of precision in a tractable amount of time. (Unless I overlooked some particularly ill-behaved property of this calculation.)

Numerical optimization works by hill climbing. Start at some point; for each variable determine its gradient, i.e. how much changing the input variable changes the objective; update each variable in the direction that improves the objective; repeat until you can’t improve the objective.

It’s a little like climbing Mount Rainier, by just looking at the very local terrain and always moving uphill. It’s worth noting that if you start too far from your objective, you might climb Mt. Adams.

Similarly, in the case of optimizing CE cash flow, we might have found only a local optimum, not the global optimum. If the solution surface isn’t convex, if the gradient goes flat in more than one place, we might have climbed one of those lesser hills. So this solution is not an exact solution, but finding a very good approximation of the best solution seems tractable with sufficiently smart optimization (momentum, a smarter adaptive learning rate, starting from a known pretty-good spot via theory or brute force).
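A one-dimensional toy version of that, my own illustration: gradient ascent on a function with two peaks ends up on whichever peak is nearest its starting point.

    import numpy as np

    # A function with a small peak near x=1 and a taller peak near x=4, and its gradient.
    f = lambda x: np.exp(-(x - 1) ** 2) + 1.5 * np.exp(-(x - 4) ** 2)
    grad = lambda x: (-2 * (x - 1)) * np.exp(-(x - 1) ** 2) + 1.5 * (-2 * (x - 4)) * np.exp(-(x - 4) ** 2)

    for start in (0.0, 3.0):
        x = start
        for _ in range(2000):
            x += 0.05 * grad(x)          # always step uphill
        print(f"start {start}: converged near x={x:.2f}, f(x)={f(x):.2f}")

    # Starting at 0.0 climbs the lower peak near x=1 (a local optimum);
    # starting at 3.0 finds the global optimum near x=4.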

We see that in good years, spending rises rapidly in the last few years. The algorithm naturally tries to keep some margin of error to not run out of money, and then also naturally tries to maximize spending by spending everything in the last couple of years.

As γ increases, constant spending increases, stocks decrease, and bonds increase.

It’s worth noting that we added some soft constraints: keep allocations between 0 and 100%, i.e. you can’t go short. Keep spending parameters above zero, you can’t save more now and spend more later. Also, we constrained the stock allocation to decline over time. The reason is that a worst case of running out of money has a huge impact on CE cash flow. The worst year to retire is 1966, and the most impactful year is 1974, when stocks were down > 40%. So an unconstrained solution reduces stocks in year 9 and then brings them back up. While we laud the optimizer for sidestepping this particular worst case scenario, this is probably not a generalizable way to solve the problem. We expect stock allocation to decline over time, so we added that as a constraint, and avoid whipping the stock allocation up and down.

How the optimization handles this historical artifact highlights the contrast between a historical simulation and Monte Carlo. Using a historical simulation raises the possibility that something that worked with past paths of returns may not work in all cases in the future, even if future return relationships are broadly similar. Monte Carlos let us generate an arbitrary amount of data from a model distribution, eliminating artifacts of a particular sample.

However, a Monte Carlo simulation assumes a set of statistical relationships that don’t change over time. In fact, it seems likely that the relationships over the last 59 cohorts did change over time.

  • Policy regimes, i.e. the fiscal and monetary response to growth and inflation changes under constraints like the gold standard, schools of thought that dominate policy.
  • Expectations regimes, whether investors expect growth and inflation, based on how they may have conditioned by their experience and education.
  • Environment regimes, changes in the world as there are wars, depressions, economies become more open.

Pre-war, dividend yields had to be higher than bond yields because stocks were perceived as risky. Then it flipped. Growth was seen as predictable, companies re-invested earnings, taxes made them less inclined to distribute. Today, once again, dividends are often higher than bond yields.

For 3 decades post-war inflation surprised to the upside, for the last 3 decades it surprised to the downside.

The beauty of a historical simulation is it answers a simple question: what parameters would have worked best in the past? Monte Carlo simulations can give you a more detailed picture, if you can only believe their opinionated assumptions about a well-behaved underlying distribution.

One has to be a bit cautious with both historical simulations, which depend on the idiosyncrasies of the past, and Monte Carlos, which assume known, stable covariances. It would be wise to look at both: run a few Monte Carlos across the range of reasonable covariance matrix estimates and use the worst case, run historical simulations over all cohorts, and include a margin of error (especially in the current ZIRP environment, which might yet produce another 1966-style cohort of the damned).

Another assumption in our simulation is that a certain dollar in year 30, when you may be 90, is worth the same as a dollar in year 1.

A dollar may be worth spending on different things at 60 vs. at 90, and, in later years the retiree is more likely to be dead. With respect to the mortality issue, in the same way we are computing certainty equivalent cash flow over a distribution of market outcomes, we can also compute it over a distribution of longevity outcomes. This feature is in the code, but I will leave discussion for a future blog post. The current post is more than complex enough.

Of course, this simulation doesn’t include taxes or expenses.

Finally, there are reasons to choose a less volatile portfolio that doesn’t maximize CE cash flow, if the volatility is stomach-churning in and of itself, or if it leads the retiree to re-allocate at inopportune times or otherwise change plans in a suboptimal way.

5. Conclusion.

Optimizing CE cash flow over historical data might be flawed, it might be simplistic, or it might be useful. It’s just an itch that I’ve wanted to scratch for a while. It may seem complicated, but that’s because the problem is interesting. The one takeaway should be that if you can decide what your utility/cost function is, you can find a way to maximize it using today’s computing tools and resources.

Ultimately, you have to optimize for something. If you don’t know where you want to go, you’re not going to get there. Since we have tools to optimize complex functions, perhaps the discussion should be over what to optimize for. A CRRA framework is a good possibility to start with, although there are others as well.

This is not investment advice! This is a historical study/mad science experiment. It may not be applicable to you, it is a work in progress, and it may contain errors.

Notes

On 9/25 I updated this post. After running for many additional hours from additional starting points, I found a γ = 8 plan that improved the original by about 1%. The change is small, but it’s important to note that the optimization doesn’t converge on a single solution quickly, and the solution varies a bit depending on the starting point. It appears more work is needed to make this analysis an aid to practical decision-making. I also added the visualization allowing you to click to see how spending plans change as γ changes.

1 TensorFlow lets you define a calculation sort of like a spreadsheet does, and then run it on your Nvidia GPU (Graphical Processing Unit). Modern GPUs have more transistors than CPUs, and are optimized to do many parallel floating point calculations. The way you numerically optimize a function is by calculating a gradient vs. each input, and gradually changing the inputs until you find the ones that produce the best output. 100 inputs = 100 gradients that you calculate each step, and GPUs can calculate all 100 simultaneously, and accelerate these calculations quite dramatically. That being said, this optimization seems to run 4-5x faster on CPU than GPU. ¯\_(ツ)_/¯ Without knowing a lot of TensorFlow internals, a single operation that needs to be done on CPU might mean the overhead of moving data back and forth kills the GPU advantage. Or maybe the Amazon g2 GPU instances have some driver issues with TensorFlow. Them’s the breaks in numerical computing.

2 This may raise the question of lotteries, why people gamble, and whether homo economicus is a realistic assumption. We’re assuming rational people here. In general in financial markets, the more risky an investment is, the higher the expected return it needs to offer to find a buyer. So the assumption that people prefer less risky and variable retirement cash flows seems well established. It would also be possible in theory to do the same optimization for any utility function, although some would be more troublesome than others. If we have a cost function that measures the result of a spending plan, we can measure how it performs and compare spending plans. If we don’t have such a cost function, we can try different ways of constructing plans and compute the results, but we don’t have a systematic way to compare them.

3 Bengen used intermediate corporates as a bond proxy. They have a higher return than Treasurys. I would use the same data, but it would involve a trip to the library or possibly a Bloomberg. I used this easily available data. At some point I can run an update so it is comparable to Bengen’s result.

The Game Theory of Assholes

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man. – George Bernard Shaw

I beseech you, in the bowels of Christ, think it possible you may be mistaken! – Oliver Cromwell

Nassim Nicholas Taleb has a pretty good piece on the tyranny of the stubborn minority.

(more…)

Pokémon economics, secular stagnation, and cognitive dissonance

There are these two young fish swimming along, and they come across an older fish swimming the other way, who nods at them and says, “Morning, boys, how’s the water?”

And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, “What the hell is water?” – David Foster Wallace

A physicist, an engineer, and an economist are stranded on an island with nothing to eat. A can of beans washes ashore. The physicist says, “Let’s build a fire and heat the can, the pressure will make it pop open, and we can eat the beans.” The engineer says, “The can will explode and beans will go everywhere. Let’s smash the can open with a rock.” The economist says, “Let’s assume that we have a can-opener…” – Original author unknown

Do economists really understand the essence of what’s going on in the economy, or are they like fish who don’t know what water is, assuming can openers to solve what ails it?

Vox had an article on what Pokémon Go says about capitalism.

The gist: all the money from the digital economy goes to a few people in large companies like Apple and Nintendo, and the rest of the world is in a brutal race to the bottom.

Now, that’s not 100% true…Pokémon Go creator Niantic is a startup, if an unusually well-heeled and well-financed startup.

But it feels essentially true.

The reason I started writing this long and digressive rant is that I posted the Vox story about Pokémon Go in an economics forum, and it got banned for not contributing to the economic discussion. The notion that there could be secular stagnation, that it could have to do with income distribution, and that there might be policy implications was, to some folks, not even a proper subject for analysis and debate.

(more…)

A fun 3D visualization of the financial Twittersphere

Here’s a fun little update of that visualization of the financial Twittersphere I posted in May. This one is in 3D, you can zoom (with scroll wheel) and drag it around (with mouse, also see controls in top right).

It might take a minute to load up, and may not work too well on older computers/browsers. Just wait out/ignore any popups or warnings about a script on the page running slowly. If the iframe below is wonky, try this full-page version.
(more…)

Negative interest rates are an unnatural abomination

Mayor: What do you mean, “biblical”?
Dr Ray Stantz: What he means is Old Testament, Mr. Mayor, real wrath of God type stuff.
Dr. Peter Venkman: Exactly.
Dr Ray Stantz: Fire and brimstone coming down from the skies! Rivers and seas boiling!
Dr. Egon Spengler: Forty years of darkness! Earthquakes, volcanoes…
Winston Zeddemore: The dead rising from the grave!
Dr. Peter Venkman: Human sacrifice, dogs and cats living together… mass hysteria!
Mayor: All right, all right! I get the point!
Ghostbusters (1984)

Happy 4th of July weekend! Some macro ‘blinding glimpse of the obvious’ blogging.
(more…)

A weekend Brexit reading list

This business will get out of control. It will get out of control and we’ll be lucky to live through it. – Admiral Josh Painter, The Hunt for Red October
(more…)

Hillary’s damn emails

The soldier who loses his rifle faces harsher punishment than the general who loses the war. — Anonymous soldier

So, I was reading this, by Kristy Culpepper. She’s smart, you should follow her. I agree with some of it but ultimately I think it’s off base from a tech / security / policy standpoint, like most of the furor on this issue.
(more…)

