The book in a paragraph
Algorithms to Live By explores human algorithm design – the use of algorithms to solve life’s everyday challenges. Algorithms are not just the domain of computer scientists – they are any sequence of steps used to solve a problem. We constantly use algorithms to solve life’s daily challenges, mostly without the assistance of computers. The book applies lessons from computer science to give practical guidance on human problems. It also uses the lens of computer science to give insights into the human mind – how we think, how we decide, and how we behave.
Summary of Algorithms to Live By, by Brian Christian and Tom Griffiths
This is my summary of Algorithms to Live By, by Brian Christian and Tom Griffiths. The book provides insights on broad aspects of the human experience. For a taste, see author Tom Griffiths’ great TEDx talk.
My summary focuses on aspects of the book relating most directly to ‘impact’ for people and organisations, e.g. productivity, decision-making, prioritisation and strategy.
These notes are informal and may contain quotes from the book, mixed with my own thoughts.
Why is computer science relevant to human problems?
There are problems that we all face as a result of the human experience e.g.:
- What should we prioritise today, this month, or this decade?
- What degree of order vs spontaneity should we embrace?
- What balance between new experiences vs old favourites is best?
These may seem like very human problems. However:
- As computers have become more sophisticated, they have become more adept at dealing with chance, trade-offs and approximations. Computers are increasingly good analogies for the human mind.
- Humans are confronted daily by some of the hardest types of problems ever tackled by computer science. We make decisions while dealing with uncertainty, time constraints, partial information and a rapidly changing world. Computer science has been grappling with, and sometimes solving, equivalents to these human problems.
- The findings from these studies can give us insights into how the human mind approaches these problems, and give us practical guidance on how to approach them in the future.
When to look, and when to leap
Optimal stopping problems appear throughout life, when you are presented with options, one at a time. Interviewing candidates for a job, finding a carpark and house hunting are all examples of optimal stopping problems.
Here’s an example of a classic optimal stopping problem:
- Imagine you’re interviewing candidates for a role, and you want to maximise the chances that you hire the best individual in the pool of people that you’re interviewing.
- You interview applicants one by one, and you can make an offer at any time with confidence that they will accept, ending the search.
- In this scenario we assume that if you decide not to hire someone you’ve just interviewed, they are no longer available to you (e.g. they will take another job with someone else).
Computer science has determined an optimal strategy for these kinds of problems, known as the 37% rule: look at the first 37% of options, choosing none (this is your data-gathering period), then be ready to immediately select the next candidate who is better than all those you’ve seen so far. You can apply it to hiring, house-hunting, finding a carpark etc.
Some takeaways on this:
- The optimal strategy (the 37% rule) also has a 37% success rate.
- In the real world, we often have different and more fluid constraints than in the simple version of the problem. We may know information about how each candidate compares to the pool (e.g. through standardised metrics like university marks), we can sometimes go back to candidates we’ve previously passed on (with some chance that they’re still available), and not everyone will accept our job offer.
- Algorithms to Live By provides guidance for many variants of optimal stopping problems.
- In experiments on optimal stopping problems people naturally perform quite well. We find the best possible candidate about 31% of the time.
- This tells us that humans usually stop a bit earlier than is optimal. It’s likely that this is due to people naturally accounting for time costs. Allocating a cost for continuing to search is quite sensible given the value of our own time. Erring on the side of stopping early is a sensible approach.
Some practical advice relating to optimal stopping problems:
- The ‘perfect’ strategy for the classic version of the problem finds the best possible candidate only 37% of the time (most of the time this strategy doesn’t result in the best option).
- This means that using a sound approach is all you can do – even when you don’t get the outcome you want. Following a good process means you’ve done all you can, and it’s not your fault if things don’t go your way.
- So, when faced with hiring and similar problems, you can afford to relax. Look for a while and gather some data until you’re ready to leap.
- As a rule of thumb, be ready to leap after looking at about one third of the total number of options you’re prepared to assess (or one third of the time you’re prepared to look for).
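As a sanity check on the 37% figure, here is a minimal Python simulation of the classic hiring version of the problem (my own sketch, not from the book): look at the first 37% of candidates without choosing anyone, then take the first candidate who beats everyone seen so far.

```python
import random

def simulate_37_rule(n_candidates=100, n_trials=20000, look_fraction=0.37):
    """Simulate the classic secretary problem with the look-then-leap rule."""
    cutoff = int(n_candidates * look_fraction)
    successes = 0
    for _ in range(n_trials):
        # Candidates arrive in random order; a higher score means a better candidate.
        scores = list(range(n_candidates))
        random.shuffle(scores)
        best_seen = max(scores[:cutoff])  # look phase: observe only, never choose
        chosen = scores[-1]  # if no one beats best_seen, we're stuck with the last candidate
        for s in scores[cutoff:]:
            if s > best_seen:
                chosen = s
                break
        if chosen == n_candidates - 1:  # did we land the overall best candidate?
            successes += 1
    return successes / n_trials

print(simulate_37_rule())  # close to the theoretical 37%
```

Running this shows the success rate hovering around 0.37, matching the book’s claim that the optimal strategy still fails most of the time.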
Explore versus exploit
Explore and exploit come with strong connotations, but in computer science their definitions are simple. Exploration is gathering information. Exploitation is using the information you have to get a known good result.
Finding the right balance between exploration and exploitation is useful. Some practical advice relating to the explore/exploit balance:
- Consider the time interval over which you are planning:
- Explore more when you will have time to use the knowledge that you gain.
- Exploit more if your window for using the information is closing.
- Choose optimism in the face of uncertainty. When in doubt, go for the option that you think could reasonably have the biggest positive outcome in the future. This approach helps to ‘minimise regret’ by reducing the number of great opportunities that you ‘miss the boat’ on.
At a tactical level, there are a few algorithms that may help with the explore/exploit balance e.g.:
- Win-Stay, Lose-Shift: stick with the current course (exploit), and as soon as it fails to pay off, immediately shift to another option (explore).
- A/B testing: commonly used by tech and internet companies. The basic setup of an A/B test is to split all traffic evenly between two options, run the test for a period of time, and thereafter direct all traffic to the winning option.
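Win-Stay, Lose-Shift is simple enough to sketch in a few lines of Python. Here it plays a hypothetical two-option ‘bandit’ (the payoff probabilities are invented for illustration):

```python
import random

def win_stay_lose_shift(payoff_probs, n_pulls=10000, seed=42):
    """Play a multi-armed bandit using Win-Stay, Lose-Shift.

    payoff_probs are assumed per-option success probabilities (illustrative only).
    Returns the fraction of pulls that paid off.
    """
    rng = random.Random(seed)
    arm = rng.randrange(len(payoff_probs))  # start on a random option
    wins = 0
    for _ in range(n_pulls):
        if rng.random() < payoff_probs[arm]:
            wins += 1  # win: stay on the current option (exploit)
        else:
            arm = (arm + 1) % len(payoff_probs)  # lose: shift to the next option (explore)
    return wins / n_pulls

# Two options: one pays off 60% of the time, the other 40%.
print(win_stay_lose_shift([0.6, 0.4]))
```

Because the strategy leaves a good option as soon as it has one bad day, it spends some time on the worse option too – it is simple and reasonable, not optimal.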
It’s Monday morning, and you have an as-yet blank schedule and a long list of tasks to complete. Some can be started only after others are finished, and some can be started only after a certain time. Some have sharp deadlines, others can be done whenever, and many are fuzzily in between. Some are urgent, but not important. Some are important, but not urgent. So what to do, and when, and in what order?
Before you start prioritising the things that you need to do, stop and consider your purpose – what exactly do you want to achieve? Make your goals explicit first, then consider the following approaches:
- You could minimise the “maximum lateness” i.e. prevent any one task from being really late. In this scenario, use the Earliest Due Date strategy – start with the task due soonest and work towards the task due last.
- To minimise the total number of items that end up being late, use Moore’s Algorithm. Start with Earliest Due Date, but as soon as it looks like your next item will run late, throw out the biggest remaining task from your schedule. Either schedule it at the end of your list, or discard it altogether.
- To minimise ‘collective late time’ use Shortest Processing Time – always do the quickest task you can. Focus above all on reducing your to-do list. This can be useful for reducing the mental burden of a long list of outstanding tasks.
- The best overall strategy for prioritisation is to use Weighted Shortest Processing Time – divide the weight (importance) of each task by how long it will take to complete, then work in order of highest importance-per-unit-time. This helps to keep focus on not just getting things done, but getting important things done.
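Two of the strategies above are easy to express in code. Here is a minimal Python sketch of Earliest Due Date and Weighted Shortest Processing Time over a hypothetical to-do list (the task names, durations, weights and due dates are invented for illustration):

```python
def earliest_due_date(tasks):
    """Minimise maximum lateness: work in order of due date."""
    return sorted(tasks, key=lambda t: t["due"])

def weighted_shortest_processing_time(tasks):
    """Work in order of importance-per-unit-time (weight / duration)."""
    return sorted(tasks, key=lambda t: t["weight"] / t["duration"], reverse=True)

# Hypothetical to-do list: durations in hours, weight = importance, due in days.
tasks = [
    {"name": "expense report",  "duration": 1, "weight": 1, "due": 5},
    {"name": "client proposal", "duration": 4, "weight": 8, "due": 3},
    {"name": "inbox triage",    "duration": 2, "weight": 2, "due": 2},
]

print([t["name"] for t in earliest_due_date(tasks)])
# → ['inbox triage', 'client proposal', 'expense report']
print([t["name"] for t in weighted_shortest_processing_time(tasks)])
# → ['client proposal', 'expense report', 'inbox triage']
```

Note how the two orderings differ: Earliest Due Date rushes to the most imminent deadline, while Weighted Shortest Processing Time puts the high-importance proposal first (8 units of importance per 4 hours beats 1 per 1).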
When we consider complications such as task dependencies, new tasks arising, and human factors, things get a bit more challenging:
- Humans are quite adept at procrastination – putting off work on important projects by attending to more trivial matters. This suggests that we have a human bias toward implementing Shortest Processing Time, and not naturally considering task importance.
- When a new task arises, reassess the new task based on the scheduling algorithm that you’re using. If the new task should be done before the one you’re currently working on, you should technically be prepared to drop your work and switch to the new task. However, switching tasks has a cost. Psychologists have shown that for humans, switching creates more errors and delays. Try to avoid switching too often.
- For both people and computers, the ‘machine’ doing the scheduling is the same as the one being scheduled. Working out what to prioritise is an activity that also needs to be scheduled.
- Combining the cost of scheduling with the cost of switching creates a vicious cycle, akin to a computer error known as ‘thrashing’. If you’ve ever had a moment where you wanted to stop doing everything just to have a chance to write down everything you were supposed to be doing, but couldn’t spare the time, you’ve been thrashed.
- In these scenarios, don’t work harder, work dumber. Try working on anything you can to reduce your backlog, until you have a more manageable list of tasks to schedule.
- Real-time scheduling in the work environment is complex due to the tension between responsiveness and throughput. To balance the two effectively:
- Have people who are focussed on being responsive (e.g. customer service) so that others can be focussed on throughput.
- Slow down and stay on a single task as long as possible.
- Use timeboxing (establish a minimum amount of time to spend on any one task), and batch your work (e.g. pay all bills in one go, rather than as each arrives).
- Use regularly scheduled team meetings to maintain throughput by deferring unplanned interruptions until the meeting.
Even with complete information and foreknowledge, finding the ‘perfect’ schedule is practically impossible, even for a computer. Instead, think on your feet, get on with it, and acknowledge that your plan is just an educated guess at what you should be prioritising.
When to think less
In business, we love to make forecasts. We build models from our observations and try to predict the future. One of the traps we fall into is building models that are too complex. We think: the more complex the model, the better it will perform. This is a fallacy.
Overfitting occurs when we create models that fit our observations quite well, but when tested, they do not perform as well as simpler models. Overfitting is a danger when we are dealing with inaccurate or noisy data (which we almost always are).
Often, the things we are really interested in are hard to define, let alone measure (e.g. employee satisfaction). In these cases, the extra complexity doesn’t just offer diminishing returns – the predictions get worse.
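A toy illustration of this (my own sketch, not from the book): suppose we take two equally noisy ‘surveys’ of the same underlying quantity, say employee satisfaction. A simple model – just the training average – predicts the second survey better than a ‘complex’ model that memorises every wiggle of the first.

```python
import random

def mse(pred, actual):
    """Mean squared error between predictions and observations."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

rng = random.Random(0)
true_value = 10.0  # the underlying quantity we are trying to estimate
noise = 2.0        # survey-to-survey measurement noise (illustrative numbers)
n = 200

# Two independent noisy surveys of the same underlying value.
train = [true_value + rng.gauss(0, noise) for _ in range(n)]
test = [true_value + rng.gauss(0, noise) for _ in range(n)]

# Simple model: a single number, the training average.
simple_pred = [sum(train) / n] * n

# "Complex" model: memorises every training observation exactly.
complex_pred = train

print(mse(simple_pred, test))   # close to noise**2
print(mse(complex_pred, test))  # close to 2 * noise**2 – roughly twice as bad
```

The complex model fits the first survey perfectly, but all it has ‘learned’ is noise, so its error on fresh data is roughly double the simple model’s.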
Overfitting is common and problematic in the world of business. An example is incentive structures which create perverse effects. E.g.:
- At a job placement firm, the number of interviews conducted is correlated with overall performance. However, when used as a performance metric this leads to lots of interviews, but less time actually placing high quality workers into well matched positions.
- At a factory, focussing on production leads to neglect of maintenance and repairs.
Some advice relating to overfitting:
- Simply being aware of the potential for overfitting is helpful. Be wary of overly complex models, KPIs and incentive structures. Do some cross-checks and be on the lookout for perverse effects.
- Find ways to reduce complexity, and force simplicity. A simple way to do this is to limit the time spent on analysis.
- If you have good data, and lots of time, then it’s fine to think long and hard.
- When you’re in the dark, the best plans are the simplest. In these cases, instinct and judgement is the rational choice.
In both life and computer science, lots of problems that we face are simply tough, and finding an optimal solution is near impossible. In these cases, it is easy to waste time trying to find the best possible solution. Perfect is the enemy of good. In many cases, you can find a solution that’s close to the perfect one, in just a fraction of the time.
It can be useful to ‘relax’ the problem. As a thought experiment, remove some of the constraints and try to solve the problem that you wish you had instead. Then use what you come up with to inform your real problem. For this kind of thought experiment, use questions such as:
- What would you do if you could not fail?
- What would you do if you weren’t afraid?
- What would you do if money were no object?
- What would you do if every option had the same financial outcome?
Game theory is the study of how people and organisations behave in situations where they are faced by competing strategies acted out by other people and organisations.
Some interesting aspects of game theory include:
- The concept of ‘equilibrium points’ can be important. When some companies introduced unlimited vacation leave policies, employees unexpectedly took less leave than under a traditional system. Game Theory would suggest that while everyone wants to take vacations, they want to take slightly less than others to be seen as more dedicated and hardworking. The ‘equilibrium point’ of this system is zero leave – it’s a race to the bottom.
- Paying attention to the actions of others is often useful, because it allows you to add their knowledge to your own (e.g. you can assume that a popular restaurant is probably good). However, often other people’s actions are not tethered to any objective truth. Following others’ behaviours can lead to fads, herd mentality and economic ‘bubbles’.
- In business this affects competitive situations (e.g. tender process and auctions) where consensus becomes detached from reality, and pricing spirals can result.
- In competitive processes, the cognitive effort required from ‘players’ can be enormous. Players need to constantly anticipate and change course because of others’ tactics.
Some key advice relating to game theory:
- Remember that others’ actions are not just based on their own knowledge; they are also based on what they believe other people know. Be hesitant to let the crowd overrule your own doubts.
- It is possible to design auctions and tender processes, so that there is no better strategy for players than to just bid their “true value”, avoiding the need to undercut others. Studies show that the seller is no worse off in these ‘fairer’ processes.
- When you’re a player in a competitive process, try to adopt a strategy that doesn’t require changing course because of others’ tactics. Seek out games where ‘honesty’ is the dominant strategy, then be yourself.
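The ‘bid your true value’ design the authors describe is known as a Vickrey (second-price) auction, which a few lines of Python can illustrate (the bidder names and values here are hypothetical):

```python
def second_price_auction(bids):
    """Sealed-bid Vickrey auction: the highest bidder wins, but pays the
    second-highest bid. Bidding your true value is a dominant strategy:
    shading your bid down can only lose you auctions you'd profit from,
    and it never lowers the price you pay."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid, not their own
    return winner, price

# Hypothetical bidders simply bid what the item is truly worth to them.
winner, price = second_price_auction({"alice": 120, "bob": 90, "carol": 100})
print(winner, price)  # → alice 100 (she pays carol's bid, not her own 120)
```

Because the price is set by the runner-up, no bidder gains anything by anticipating or undercutting the others – honesty is the best policy by design.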
Be ‘computationally kind’ to yourself and others, by reducing the amount of thinking that is required:
- There are cases where good computer science approaches can be simply transferred to human problems. Do this to save yourself time and energy coming up with your own approach.
- Using a sound approach should be a relief even when you don’t get the outcome you want. Following a good process means you’ve done all you can, and it’s not your fault if things don’t go your way.
- If you come up against a tough problem, remember some of the problems we face are intractable even by the world’s best computer scientists armed with supercomputers. Remember that good enough, really is good enough.
- People prefer ‘constrained’ problems. This is why people are more likely to accept a meeting request when you propose a specific date (e.g. Tuesday between 1:00 and 2:00pm), compared to leaving it up to them. Saying “I’m flexible” is passing the cognitive buck, essentially asking the other person to handle the scheduling problem.
- Where you can, protect other people from unnecessary tension, friction and mental labour. Be computationally kind to others by framing issues in a way that makes their decision easier.