Shakespeare’s forgotten legacy: hyperbolic numbers

There is a theory that Shakespeare was an accountant. How else to explain the detailed use of bookkeeping metaphors in his writing? “We shall not spend a large expense of time/ Before we reckon with your several loves,” declares Malcolm in Macbeth, “And make us even with you.”

The jailer in Cymbeline compares the hangman’s noose to an accountant’s reckoning of the credits and debits of the condemned man’s life. And The Comedy of Errors refers to a debt as a “thousand marks”, a unit only used by bookkeepers in Elizabethan England.

Yet Shakespeare seems to have been rather loose with his economics. Rob Eastaway’s new Shakespearean mathematical miscellany, Much Ado About Numbers, tells us that Shakespeare put Dutch guilders in Anatolia in The Comedy of Errors, situated Italian chequins in Phoenicia in Pericles, described Portuguese crusadoes in Venice in Othello and had Julius Caesar’s will bequeathing Greek drachmas to every Roman. There is something to be learnt from Shakespeare’s attitude to numbers (besides that he’s a poor guide to foreign exchange markets).

As Eastaway explains, Shakespeare’s works are richly adorned with numbers. Hamlet’s “thousand natural shocks/ That flesh is heir to” is just one of more than 300 instances of the word “thousand” in Shakespeare’s work. We are not meant to hear Hamlet’s words as a precise count, of course. By “thousand” he refers to the myriad of misfortunes a person can experience in a lifetime. And by “myriad” I mean “a lot”, rather than its original meaning in classical Greek, “ten thousand”. Large numbers have a way of blurring like that, especially as Shakespeare was writing for an audience who would rarely have any literal use for a thousand. Few people would earn a thousand pounds or travel a thousand miles, although the Globe Theatre might have held three thousand paying customers.

In Timon of Athens, Timon tries to borrow “fifty-five hundred talents” from his friend Lucilius. That’s 120 tonnes of silver, Eastaway tells us. No Elizabethan audience would have grasped what fifty-five hundred talents really meant. Nor, without Eastaway doing our homework for us, do we. (It’s more than $100mn.) But we all get the point: it’s a ludicrous request.
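For the curious, the arithmetic can be roughly reconstructed. The sketch below is a back-of-the-envelope calculation of my own; the weight assumed for a talent and the silver price are illustrative assumptions, not Eastaway’s workings.

```python
# Back-of-the-envelope reconstruction (assumed figures, not Eastaway's workings).
talents = 5_500
kg_per_talent = 22          # assumption: an ancient Greek talent of roughly 22 kg
silver_price_per_kg = 850   # assumption: silver at roughly $850 per kg today

tonnes_of_silver = talents * kg_per_talent / 1_000
value_in_dollars = talents * kg_per_talent * silver_price_per_kg

print(f"{tonnes_of_silver:.0f} tonnes of silver")     # roughly 120 tonnes
print(f"roughly ${value_in_dollars / 1e6:.0f}mn")     # comfortably over $100mn
```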

We still share Shakespeare’s love for hyperbolic numbers, but we also need to use big numbers accurately. I’m old enough to remember confusion as to the definition of the word “billion”. These days,

The detours on memory lane

Do you remember where you were when you heard that planes had struck the World Trade Center? That the Challenger shuttle had exploded? Or that Nelson Mandela had been released?

Your memories may be different from mine, but not as different as Fiona Broome’s. I remember watching the live TV footage of Nelson Mandela walking to freedom after 27 years in captivity, while Broome, an author and paranormal researcher, remembers Nelson Mandela dying in prison in the 1980s.

When Broome discovered that she was not the only person to remember an alternative version of events, she started a website about what she dubbed “the Mandela Effect”. On it, she collected shared memories that seemed to contradict the historical record. (The site is no longer online but, never fear, Broome has published a 15-volume anthology of these curious recollections.)

Mandela, of course, did not die in prison. On a recent trip to South Africa, I visited Robben Island, where he and many others were incarcerated in harsh conditions, spoke to former prisoners and former prison guards, and wandered around a city emblazoned with images of the smiling, genial, elderly statesman. How could it be that anyone remembers differently?

The truth is that our memories are less reliable than we tend to think. The cognitive psychologist Ulric Neisser vividly remembered where he was when he heard that the Japanese had launched a surprise attack on Pearl Harbor on December 7 1941. He was listening to a baseball game on the radio when the broadcast was interrupted by the breaking news, and he rushed upstairs to tell his mother. Only later did Neisser realise that his memory, no matter how vivid, must be wrong. There are no radio broadcasts of baseball in December.

On January 28 1986, the Challenger space shuttle exploded shortly after launch: a spectacular and highly memorable tragedy. The morning after, Neisser and his colleague Nicole Harsch asked a group of students to write down an account of how they learnt the news. A few years later, Neisser and Harsch went back to the same people and made the same request. The memories were vivid, detailed and, for a substantial minority of people, completely different from what they had written down the morning after the event.

What’s stunning about these results is not that we forget. It’s that we remember, clearly,

There is no need to lose our minds over the Jevons paradox

A few years ago, two San Francisco doctors, Mary Mercer and Christopher Peabody, persuaded the busy hospital where they worked to conduct an experiment. They replaced their clunky and inflexible old pagers with a cheaper, more flexible and more powerful system. It’s called WhatsApp.

As the podcast Planet Money reported last year, the pilot was not a success. The chief reason? Messaging became too easy. To interrupt a busy consultant by paging them to demand a return phone call was a serious step, taken with care. But with WhatsApp, why not snap a photograph or even record a video message and zip it over just to get a spot of advice? Doctors were soon swamped.

To students of energy economics, this story sounds awfully familiar. It’s the Jevons paradox. William Stanley Jevons was born in 1835 in Liverpool, in a country made rich by a coal-fuelled industrial revolution. He was about to turn 30 when he published the book that made his name as an economist, The Coal Question. Jevons warned that Britain’s coal would soon run out (an eye-catching warning that turned out to be wrong) but, more intriguingly, he warned that energy efficiency was no solution.

“It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption,” he explained. “The very contrary is the truth.”

Imagine developing a more efficient blast furnace, one that would produce more iron for less coal. These more economical furnaces would proliferate. Jevons argued that more iron would be produced, which was a good thing, but the consumption of coal itself would not decline.

Is this right? In a mild form, Jevons’ analysis is certainly correct. When an energy-consuming technology becomes more efficient, we’ll use more of it. Consider light. In the late 1700s, President George Washington calculated that burning a single candle for five hours a night all year would cost him £8. Relative to incomes of the time, that is about $1,000 in today’s money. These fine spermaceti candles were pricey enough to leave even a rich man such as Washington carefully conserving them.

Modern lighting is far more economical and therefore used with abandon. LEDs produce far more light for far less energy than candles ever did, and we use much more light and save much less energy than we otherwise could have done.
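One way to make the logic concrete is with a toy rebound calculation. Suppose a furnace (or a light bulb) becomes twice as efficient, so the fuel needed per unit of output halves; whether total fuel use falls, holds steady or rises then depends on how strongly demand for the output responds to its lower cost. The elasticities below are illustrative assumptions, not estimates, and the model ignores everything except fuel costs.

```python
# Toy rebound-effect arithmetic (illustrative numbers only).
# If efficiency doubles, each unit of output needs half the fuel and so costs
# less; cheaper output means more output is demanded. On a simple
# constant-elasticity model, fuel use changes by gain**(elasticity - 1).

def fuel_use_after_gain(efficiency_gain, demand_elasticity):
    output_demanded = efficiency_gain ** demand_elasticity  # cheaper output, more demand
    return output_demanded / efficiency_gain                # but each unit needs less fuel

for elasticity in (0.5, 1.0, 1.5):
    print(elasticity, round(fuel_use_after_gain(2.0, elasticity), 2))
# 0.5 -> 0.71  fuel use falls, though by less than the efficiency gain alone implies
# 1.0 -> 1.0   fuel use is unchanged despite the doubled efficiency
# 1.5 -> 1.41  fuel use rises
```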

The stronger form of Jevons’ warning

When your smartphone tries to be too smart

Back in the 1980s, the design expert Donald Norman was chatting to a colleague when his office phone rang. He finished his sentence before reaching for the phone, but that delay was a mistake. The phone stopped ringing and, instead, his secretary’s phone started ringing on a desk nearby. The call had been automatically re-routed. Alas, it was 6pm, and the secretary had gone home. Norman hurried over to pick up the second phone, only to find that it, too, had stopped ringing.

“Ah, it’s being transferred to another phone,” he thought. Indeed, a third phone in the office across the hall started to sound. As he stepped over, the phone went silent. A fourth phone down the hall started ringing. Was the call doomed to stagger between phones like a drunkard between lampposts? Or had a completely different call coincidentally come in?

Norman tells the story in The Design of Everyday Things, the opening chapter of which is a collection of psychopathic objects, from bewildering telephone systems to rows of glass doors in building lobbies that simply offer no clue whether to push or pull, or even where the hinges are.

“Pretty doors,” jokes Norman. “Elegant. Probably won a design award.”

Reading Norman’s book more than three decades after its publication in 1988, it is striking how much the surface of things has changed. We no longer have to deal with incomprehensible telephone systems or VHS recorders. Good design is no longer a niche luxury; it is viewed as an essential part of business. The world has scrambled to imitate the success of Apple, one of the world’s most valuable and admired companies, which is built on good design: beautiful, easy-to-use products.

And yet I wonder. The aviation safety expert Earl Wiener is famous for “Wiener’s Laws”, which include “whenever you solve a problem you usually create one”. The truth is that modern devices may seem simple and easy to use, but they are in fact fantastically complicated. Those complications are elegantly obscured until something goes wrong.

I thought of Wiener and Norman recently as I arrived in Amsterdam, equipped with a Eurostar ticket barcode on my phone. Problem: the Eurostar exit barrier in Amsterdam is also the ticket gate for a variety of metropolitan rail services. As I tried to scan the barcode, the ticket barrier perceived my phone as a wannabe contactless credit card, and charged

The lesson of Loki? Trade less

The pages of the Financial Times are not usually a place for legends about ancient gods, but perhaps I can be indulged in sharing one with a lesson to teach us all.

More than a century ago, Odin, All-father, greatest of the Norse gods, went to his wayward fellow god Loki, and put him in charge of the stock market. Odin told Loki that he could do whatever he wanted, on condition that across each and every 30-year period, he ensured that the market would offer average annual returns between 7 and 11 per cent. If he flouted this rule, Odin would tie Loki under a serpent whose fangs would drip poison into Loki’s eyes from now until Ragnarök.

Loki is notoriously malevolent, and no doubt would love to take the wealth of retail investors and set it on fire, if he could. But when faced with such a — shall we say binding? — constraint, what damage could he really do? He could do plenty, says Andrew Hallam, author of Balance and other books about personal finance. Hallam uses the image of Loki as the malicious master of the market to warn us all against squandering the bounties of equity markets.

All Loki would have to do is ensure the market zigged and zagged around unpredictably. Sometimes it would deliver apparently endless bull runs. At other times it would plunge without mercy. It might alternate mini-booms and mini-crashes; it might trade sideways; it might repeat old patterns, or it might do something that seemed quite new. At every moment, the aim would be to trick investors into doing something rash.

None of that would deliver Loki’s goals if we humans weren’t so easy to fool. But we are. You can see the damage in numbers published by the investment research company Morningstar: last year it found a gap of 1.7 percentage points between the annual returns investors actually made and the returns delivered by the funds in which they invested.

There is nothing strange about investors making a different return from the funds in which they invest. Fund returns are calculated on the basis of a lump-sum buy-and-hold investment. But even the most sober and sensible retail investor is likely to make regular payments, month by month or year by year. As a result, their returns will be different, maybe better and maybe worse.
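A minimal sketch makes the distinction visible. The yearly returns below are invented, not Morningstar’s data; the point is only that the same fund delivers different outcomes to a lump-sum holder and to someone drip-feeding money in.

```python
# Invented fund returns, purely to illustrate the gap between a fund's
# published (buy-and-hold) return and a drip-feed investor's experience.
yearly_returns = [0.30, -0.20, 0.15]

# Published fund performance: one lump sum held throughout.
lump = 1.0
for r in yearly_returns:
    lump *= 1 + r
print(f"Fund (buy-and-hold) return: {lump - 1:+.1%}")   # about +19.6%

# Drip-feed investor: one unit contributed at the start of each year.
value = 0.0
for r in yearly_returns:
    value += 1.0
    value *= 1 + r
print(f"Drip-feed investor: paid in 3.0, ended with {value:.2f}")   # about 3.27
```

Neither number is wrong; they simply answer different questions.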

Fossil fuels could have been left in the dust 25 years ago

Gordon Moore’s famous prediction about computing power must count as one of the most astonishingly accurate forecasts in history. But it may also have been badly misunderstood — in a way that now looks like a near-catastrophic missed opportunity. If we had grasped the details behind Moore’s Law in the 1980s, we could be living with an abundance of clean energy by now. We fumbled it.

A refresher on Moore’s Law: in 1965, electronics engineer Gordon Moore published an article noting that the number of components that could efficiently be put on an integrated circuit was roughly doubling every year. “Over the short term this rate can be expected to continue, if not increase,” he wrote. “There is no reason to believe it will not remain nearly constant for at least 10 years. That means, by 1975, the number of components per integrated circuit for minimum cost will be 65,000.”

That component number is now well into the billions. Moore adjusted his prediction in 1975 to doubling every two years, and the revised law has remained broadly true ever since, not only for the density of computer components but for the cost, speed and power consumption of computation itself. The question is, why?

The way Moore formulated the law, it was just something that happened: the sun rises and sets, the leaves that are green turn to brown, and computers get faster and cheaper.

But there’s another way to describe technological progress, and it might be better if we talked less about Moore’s Law, and more about Wright’s Law. Theodore Wright was an aeronautical engineer who, in the 1930s, published a Moore-like observation about aeroplanes: they were getting cheaper in a predictable way. Wright found that the second of any particular model of aeroplane would be 20 per cent cheaper to make than the first, the fourth would be 20 per cent cheaper than the second, and every time cumulative production doubled, the cost of making an additional unit would drop by a further 20 per cent.
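Put as a formula, Wright’s observation says the cost of making a unit falls by a fixed fraction with every doubling of cumulative output. A quick sketch of that arithmetic, using a placeholder first-unit cost of 100 (the 20 per cent learning rate is Wright’s figure for aeroplanes):

```python
import math

def wright_unit_cost(first_unit_cost, cumulative_units, learning_rate=0.20):
    """Wright's Law: each doubling of cumulative production cuts the cost
    of making the next unit by `learning_rate`."""
    doublings = math.log2(cumulative_units)
    return first_unit_cost * (1 - learning_rate) ** doublings

# With a placeholder first unit costing 100, the 2nd, 4th and 8th units cost
# 80, 64 and 51.2; by the millionth unit, the cost is down to about 1.2.
for n in (1, 2, 4, 8, 1_000_000):
    print(n, round(wright_unit_cost(100.0, n), 1))
```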

A key difference is that Moore’s Law is a function of time, but Wright’s Law is a function of activity: the more you make, the cheaper it gets. What’s more, Wright’s Law applies to a huge range of technologies: what varies is the 20 per cent figure. Some technologies resist cost improvements. Others, such as solar photovoltaic modules,

The surprising data behind supercentenarians

If there is a Dog Heaven, what must Bobi be thinking as he gazes down? Bobi’s place in the record books seemed assured when he died in October at the age of 31 years and 165 days — more than two years older than his closest rival for the title of the oldest dog who ever lived. Alas, Guinness World Records has stripped Bobi of his record on the basis that “without any conclusive evidence available to us . . . we simply can’t retain Bobi as the record holder”.

If we cannot believe that Bobi the dog was really as old as was claimed, what are we to make of the claimants to human longevity records? The oldest human ever was Jeanne Calment, who died in 1997 at the age of 122, having met Vincent van Gogh when she was a teenager in Arles in 1888. (Calment recalled that van Gogh was “very ugly. Ugly like a louse.”)

To demonstrate such claims requires good records, which is a problem, because the key fact that needs to be verified — a date of birth — only becomes interesting to most observers a century or so after the event in question. By definition all surviving supercentenarians (110 years and up) were born before the first world war.

“No single subject,” the Guinness Book of Records declared in 1955, “is more obscured by vanity, deceit, falsehood and deliberate fraud than the extremes of human longevity.”

Saul Newman, a demographer at Oxford university, has examined the data describing the population of semi-supercentenarians (aged 105 or more) and of supercentenarians. What might predict such extraordinary longevity? Eating plenty of vegetables, perhaps — or a strong social network?

No. In the UK, Italy, France and Japan, Newman finds instead that “remarkable longevity is . . . predicted by regional poverty, old-age poverty, material deprivation, low incomes, high crime rates, a remote region of birth, worse health”. You read that right. They are all factors that are associated with worse population health and a lower probability of reaching 90.

It seems that the very environments that are least conducive to health are the places where people with claims to astonishing longevity pop up. Tower Hamlets — by several measures the most deprived borough in London — also has the highest proportion of supercentenarians.

Another example is Okinawa. Some parts of Okinawa are super-longevity hotspots for Japan, but

Why, deep down, we’re all ultramarathoners

Jasmin Paris is not built like ordinary mortals. Last month she won a moment of fame after completing the Barkley Marathons, a race so brutal that only 19 men have managed to finish in the past 35 years. Paris is the first woman to complete the race.

It is not Paris’s first brush with greatness. Five years ago, she won the Spine Race: 268 miles along the Pennine Way in January, when it is dark 16 hours a day, cold enough to be covered in snow but warm enough for the rain to soak through everything, and where every snatched minute of sleep is a minute conceded to one’s rivals.

As the mother of a breastfeeding daughter, Paris had the additional disadvantage of having to express milk at rest stops, but she nevertheless beat both the Spine Race record and the men trying to keep pace with her. Her nearest challenger, Eugeni Roselló Solé, had to be rescued four miles from the finish line after he became dangerously cold and disoriented. The eventual winner of the men’s race, Eoin Keith, was about 50 miles behind Paris when she crossed the finish line.

Paris recently told the BBC she wanted to inspire people, particularly women. I suspect most people feel more awestruck than inspired; Superman does not inspire me to try flying.

The agony involved in these endurance races defies belief. I think not just of the winners, but of competitors such as Roselló Solé, who spectacularly dropped out of the Spine Race in 2019. A former winner of the event, he had to quit part way through in 2020, 2022, 2023 and 2024. And yet he keeps returning.

Why would anyone subject themselves to this? A quarter of a century ago, the behavioural economist George Loewenstein addressed that question. He focused on the experiences of mountaineers and polar explorers, which he summarised as “unrelenting misery from beginning to end”, and dangerous, too. He wanted to expand upon George Mallory’s reported answer to the question, “Why do you want to climb Everest?” (“Because it’s there.”) Mallory died near the summit in 1924.

The question should intrigue anyone interested in human decision-making. Textbook economics merely states that people act so as to satisfy some consistent set of preferences, but preferences are defined only as whatever it is that people are trying to satisfy. Surely it is not useful to