Just a quick note to let you know that Educating Newton is now available on the Amazon Kindle store. I’ve had to take the full text off my website now that it’s available for Kindle, but the Introduction is still available for download.
I’m not interested in programming languages. There, I said it. That would probably be a fairly understandable statement for pretty much any adult in the 21st century, but it’s particularly unusual and maybe even heretical for someone who spends most of their life as a software developer.
I’m not interested in programming languages, but I’m captivated by what I can do with them. To me there is a huge difference. A language is a tool, a method of expression that enables me to commune with Silicon in the same way that the English language allows me to commune with Carbon. I use computers to entertain me, to educate me, to challenge me, to communicate with those I care about, and to manage pretty much everything I do. A communication channel that helps me to realise those desires as effectively and naturally as possible is like the sculptor’s chisel or the painter’s brush. I’m sure some painters really care about the specific chemical composition of the paints on their canvas. But for the majority, I suspect, what matters to them most is that the tools they use allow them to portray as effectively as possible the emotions in their minds. And if that vivid blue strikingly conveys the depth in a young girl’s eyes, or the raging of a tumultuous ocean, then it doesn’t matter how much Cobalt it has in it.
That’s how I feel about programming languages. The seemingly incessant desire shared by a large fraction of the developer community to use the latest and shiniest fad language has always perplexed me. And the subconscious assumption that most senior developers hold, that a programmer should know every inch of their language of choice, including those features that one should never use, seems counter-productive at best, and at worst actively filters out the people you should be looking to employ. Perhaps it’s because I use a small number of languages, and I always have done. It’s not that I never use new languages – I’ve probably used a dozen in my current job – but whenever I’m looking at a new, large-scale project, I always go back to the same one or two core languages with which I develop anything of import. I’m not aware of anything I need to do that can’t be done with a reasonably small subset of my language of choice, which happens to be C#.
Functional languages perplex me even more. I remember back to my earliest undergraduate days when I was taught ML as part of the first year computer science course. I never really understood it. It was an interesting intellectual curiosity, perhaps even a vaguely entertaining mental challenge, but I never considered the possibility that it could be anything more than that. Which is one of the reasons why I was so surprised, a decade or so later, to discover it being used enthusiastically in commercial situations. Usually, it has to be said, by people who had moved into industry after doing postgraduate research in computer science. Of course, I myself am somebody who has done post-doctoral research in computer science, yet one of the reasons why I’m not doing it any more is that I found a substantial fraction of the research done, especially on the theoretical side, was perplexingly irrelevant intellectual self-gratification with no plausible real-world application. Functional languages largely, in my opinion, fall into that same category. I just don’t get the appeal and I never have.
If I listed the most important goals of any software development process I suspect I would align fairly well with most senior developers. Firstly, the application should do what it’s required to do. Secondly, it should be as simple as possible to maintain. Thirdly, it should be as cheap as possible to build. After that you can start looking at aesthetics. The reason I put (2) and (3) in that order is because the vast majority of the lifetime of any software application – and therefore the vast majority of the cost allocated – is spent on the post-launch maintenance phase. Support, training, bug-fixes, patches, adding new functionality, reworking existing functionality, improving performance and stability – that type of thing. I suspect a great deal of the differences of opinion between me and my colleagues might actually come down to how we order (2) and (3) on that list.
As far as producing features is concerned, I don’t have any particular doubt that functional languages could be made to handle the same level of complexity that is achieved by mainline imperative languages. So I’m not really arguing over point (1) – I’m arguing over (2) and (3). In my experience, functional languages have a vastly steeper learning curve and produce code that is much harder to understand at anything other than the lowest level, and much harder to reason about in any kind of intuitive “human” way.
My view is that programming should be done in a “narrative” style. It’s one of the reasons why I so strongly oppose abstract frameworks like Spring – they encourage code to be written in a way that actively opposes the way the human brain naturally functions. When I write code, I should be able to interrogate it like I would interrogate a human being performing a task. I should be able to follow along the chain of logic, the sequence of tasks and contingencies, and I should understand the process from a high level with relatively little mental input. That’s how my brain has evolved to think, so to write code in any other way is obviously not going to feel as natural to me and is therefore going to require more effort without any commensurate benefit.
Functional programming opposes the way the human brain naturally works in several ways. Most importantly, by avoiding mutable variables. Mutable variables are the computational analogue of the things in our lives that change. Like, oh, everything. You and I are mutable variables. When we change our minds we are not replaced with near-identical copies of ourselves. I can see that there are some situations where this kind of behaviour might be useful – for example, in dealing with mathematical equations – but not in a large scale system. When I drive down my road, my car is changing its location at every point. This document is changing every time I add a new letter to the sentence I’m typing. Everything in the world changes, and tracking those changes is what our brain has evolved to do. So forbidding a programming language from working in this way seems (obviously, to me at least) to be setting ourselves up for additional complexity that we shouldn’t have to face.
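To make the contrast concrete, here is a minimal sketch of the two styles of modelling change – in Python rather than my usual C#, purely for brevity, and with the car classes and their fields invented for illustration:

```python
from dataclasses import dataclass, replace

# Imperative style: the object mutates in place, just as a real car
# changes its position as it drives down the road.
class Car:
    def __init__(self, position):
        self.position = position

    def drive(self, distance):
        self.position += distance  # same object, new state

# Functional style: nothing mutates; every change produces a new value.
@dataclass(frozen=True)
class ImmutableCar:
    position: int

def drive(car, distance):
    # returns a fresh copy with the updated position
    return replace(car, position=car.position + distance)

car = Car(0)
car.drive(10)                # car.position is now 10

original = ImmutableCar(0)
moved = drive(original, 10)  # original is untouched; moved is a new car
```

The functional version never throws information away – the old car still exists – which is genuinely useful in some settings, but it means the “one car, changing over time” picture in your head no longer maps directly onto the objects in the program.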
Functional languages insist that we think from the goal backwards, which is never how humans plan tasks in our real lives. Think about it – if you asked directions to my house, I wouldn’t reply with “Once you get to my front door you’re done. To get to my front door you have to walk up my drive. To get to the front of my drive, you have to walk from the bus stop 100m up the road in a north-easterly direction. To get to the bus stop, step off the bus. To step off the bus, move to the front and wait until it stops. To get the bus to stop, press the red button…” Why don’t we speak like that? Because that’s not how our brains work. Our brains work in a narrative sense. My directions from your house to mine will start at your house and talk you through the journey sequentially. Our brains remember stories, not abstract, chronologically isolated instructions.
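The same contrast can be sketched in miniature with code (again a Python toy, with invented names): a journey totted up step by step from the start, versus the same journey defined from the goal backwards.

```python
# Narrative / imperative: start at the beginning and walk forward,
# accumulating progress one leg at a time.
def journey_length(legs):
    total = 0
    for leg in legs:
        total += leg
    return total

# Goal-backwards / recursive: "once there are no legs left you're done;
# otherwise the answer is this leg plus the answer for the rest".
def journey_length_rec(legs):
    if not legs:
        return 0
    return legs[0] + journey_length_rec(legs[1:])

legs = [100, 250, 40]  # metres: to the bus stop, the drive, the front door
```

Both functions compute exactly the same total; the argument here is only about which one reads more like the way we actually give directions.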
I’ve been in this argument a thousand times. “Ah,” begins the functional programmer, “but that’s just because you haven’t been writing code in a functional language for long enough. If you spent more time doing it then you would learn to think that way”. Perhaps. Perhaps not. I have spent a long time writing in functional languages and though I understand the concept, it was never natural for me. Maybe for some people it is, but I suspect those people are in a minority. And it’s also largely irrelevant – if it takes a long time to mould your brain into the shape required to think in a certain way, then that way of thinking had better come with some pretty amazing benefits or else I’ll say it’s a waste of my time. And my brain.
I remember during my teenage years there was a sudden craze for so-called “Magic Eye” pictures. These appeared at first glance to be featureless pages of random dots, until you focussed your eyes in just the right way and a remarkable three-dimensional image jumped out. The image seemed so real that you could reach out and pick it up in your hand, but when you tried to do so the illusion vanished. “Magic Eye” pictures work by tricking the brain into thinking that it sees a three-dimensional object. They do this by cleverly positioning seemingly random dots in a way that hijacks the brain’s depth perception and forces it to see something that isn’t physically there.
Much the same process is at work in more conventional optical illusions. Is it a vase or two faces? Is the front face of the cube pointing to the left or the right? Are those really perfect circles or are they squashed in one direction? I’m sure you know all the examples above, and plenty more. One of the oldest optical illusions is the familiar picture of two lines capped with arrows. On one line the arrows point inwards, and on the other they point outwards. Which of the two lines is longer? Well it seems obvious – it’s the one with the arrows pointing inwards. No, the puzzle smugly announces, they’re both the same length! The first time you see that trick you’re amazed and immediately reach for a ruler. The second time, although the feeling is still there, you now know better and you confidently state that the lines are the same length, even though your brain is screaming at you that they are not. You have learned a flaw in the way your brain works. A trivial flaw, admittedly, but a reliable and repeatable flaw nonetheless. And you’ve learned what it feels like to experience a strong instinct that appears clear and convincing but which is also provably wrong. You were aware that your brain was deceiving you, you accepted that your instinct was wrong and you overruled it.
Congratulations, you now know what it’s like to be a skeptic!
Optical illusions work so well, and have such a predictable and universal effect, because, just like the “Magic Eye” pictures, they tap into our brain’s internal short-cuts. These are features of every single healthy brain on the planet, regardless of background, education, race or gender. And these short-cuts are not evidence that our brains are broken – in fact, quite the opposite, they demonstrate that our brains are working well and are highly efficient. They show that our brain has found clever ways to deduce as much information as possible about the world around us from the minimal information that it receives. Our brains often have to make “best guesses” in order to build up a coherent picture of that world, and they do this literally every waking second of your life. Your entire experience of the world around you is a construct formed inside the brain from insufficient data, using known short-cuts and based on best-guess approximations.
Is it a bird? Is it a plane? No, it’s Superman! We all understand that misunderstanding, despite the orders of magnitude difference in size between a bird, a man (albeit one from Krypton) and an aircraft. We understand that angular size (which is the only measurement directly accessible to the brain) cannot tell us absolute size unless we also know the distance to the object in question. Small objects nearby look identical to similarly-shaped but much larger objects proportionately further away. It’s a huge problem in Astronomy, where we can’t just get a tape measure billions of light years long and measure how far away a distant galaxy really is! To some extent Astronomers solve this challenge in the exact same way that our brains do innately – we find a way of knowing what type of object we’re looking at, and once we know how big (or bright) it really is, we can compare that with how big (or bright) it appears to be and break down the ambiguity. When you see an unknown object moving across the sky, your brain is internally resolving the same size/distance ambiguity based on simple rules. If the object we’re watching seems to be moving in a constant, straight line then it’s probably a plane a long way away. If it’s moving more erratically, swerving, changing in altitude, then it’s probably a bird close by. If it’s throwing trucks around and firing lasers out of its eyes, then it’s probably Superman.
The brain is able to break down this size/distance ambiguity by making a guess based on experience. This is just one of a vast number of similar short-cuts, or “heuristics”, that the brain makes in order to make sense of the world. Every single one of them has a valid use, and in most cases it would be impossible to function without them, but every single one of them also exposes the brain to the possibility of making errors of judgement or, more worryingly, of being deliberately manipulated into experiencing something that is not real. Optical illusions are a particularly amusing and harmless case but there are other more harmful examples, such as irrational phobias. It makes sense to be cautious of deadly animals, but it doesn’t make sense to be scared of them when they’re on television. Or, for example, it is utterly irrational to be scared of spiders in the UK, where they are almost entirely harmless – yet many people are still so terrified of them that they can’t even enter a room in which a tiny, perfectly benign arachnid has been seen. The capacity to be terrified by spiders is universal, and very deeply rooted in the depths of our brains.
Another example might be the phenomenon of addiction. If our brain finds something pleasurable then it tends to encourage us to do it again, yet we all know how that can go disastrously wrong. Our brains evolved in a time when food was scarce, when enjoying highly energy-rich fatty, sugary foods was a good way to get us to eat as many of them as we could whenever we were lucky enough to find such a rare treat. Nowadays it’s less helpful, leading to obesity, heart disease, tooth decay and diabetes amongst other undesirable medical conditions.
More sinister sides of humanity also often rest on the deep innate responses of our brain. For example, we tend to judge people who are most like us to be the most trustworthy, and are naturally suspicious of those who are different. We also tend to spot patterns where none really exist because our ancestors’ survival largely depended on being able to derive general rules from sparse examples. After all, the longer it takes you to learn that lions are dangerous, the less likely you are to survive long enough to have children and contribute your genes to the next generation. Pattern-matching is a highly evolved trait, and it is of immense value, even today. However, we are also afflicted with so-called “confirmation bias” – once we have a hypothesis about something we are more likely to pay attention to evidence that confirms our beliefs rather than evidence that disconfirms them, even if the disconfirming evidence is clear and undeniable. We see what we want to see, to some degree. You can easily see how these qualities lead to racism, sexism, and other societal wrongs.
I passionately believe that the best way to solve our civilisation-scale problems and to ensure the maximum possible flourishing and minimum possible suffering for all of humanity is to align our beliefs as closely as possible with the real world. We should accept reality, seek out truth, and make decisions based on the most accurate information possible. The reason why we have been able to eradicate some of the world’s worst diseases over the last century is because we gave up on the superstitious idea that diseases were evidence of evil spirits, witches or misalignment of mystical energy, and we moved past the pre-scientific ideas of “bad blood”, “balanced humors”, “chi” and “tainted air”. We kept looking until we found the truth – the existence of viruses and bacteria, and the complex biochemical workings of the human body – and that knowledge allowed us to create antibiotics, vaccines and effective palliative care which have saved billions of lives and prevented a literally unimaginable level of suffering.
As a skeptic, it is my strong view that this same process of inquiry should be applied to all aspects of human knowledge. We should seek out the truth as earnestly and honestly as possible, and we should use the knowledge we discover to maximise the flourishing of the entire human species in the same way that the discovery of viruses and bacteria kick-started the development of effective medicine. But to do so we have to come up with a way of searching for the truth and, more importantly, for recognising it when we find it. At a bare minimum, that mechanism absolutely has to cope with the fact that it will be operated by humans. That is to say, we have to realise that human beings are susceptible to the same cognitive shortcuts that we’ve been discussing so far, which can lead us into believing potentially damaging falsehoods. Our mechanism for searching out truths has to be able to recognise this challenge and overcome it, or our efforts will be spent in vain.
We all have beliefs, and those beliefs tend to change over time, which means we all necessarily have processes by which we develop new beliefs and abandon old ones. For some, that mechanism might be to let others decide their opinions for them. But most of us have other techniques for seeking out knowledge, which we believe are more likely to give us the “right” answers. When we’re young, our parents and teachers are the most important source of learning, possibly followed by television, our school friends and the Internet. After our formal schooling is over, we get to determine for ourselves how (or whether) we continue this search for knowledge.
We could, of course, continue to look for role models in whom to put our trust, choosing to believe everything they say. For many people, those role models might be newspaper editors, radio pundits, religious or community leaders or television personalities. I’m not saying that’s the worst possible solution because you might choose a good teacher and learn a lot of excellent information. But you might also choose a Jim Jones or a Charles Manson, or any one of a number of charlatans, frauds or megalomaniacs who peddle their delusions in such channels. To tell the difference between a fraud or charlatan and a legitimately knowledgeable teacher, you really need some way of determining whether or not that person is telling you the truth. And that, of course, means that you need a way to determine truth that doesn’t actually involve asking the person you’re testing. So let’s put aside the possibility that you can just rely on other people for reliable knowledge – if we genuinely care about the truth, which we obviously should do, then blindly following a guru or teacher, no matter how plausible they may seem, simply isn’t going to work. By all means seek inspiration from inspirational people, but also submit the things they say to independent scrutiny to see how they match up to reality.
There are, of course, many other ways to search for the truth. I don’t want to go through them all, so to shortcut the whole process let’s just state at this point that if your mechanism for forming your personal beliefs does not explicitly deal with the obvious deficiencies in the human brain as I outlined above, then it is at best useless, and at worst actively harmful. If we have cognitive impairments that we know for a fact will lead us into incorrect deductions (e.g. confirmation bias) then any mechanism we have for discovering truths absolutely has to acknowledge those limitations and counteract them as effectively as possible. This is why we have tape measures, calculators and wristwatches – because the human brain is not very good at accurately determining the size of an object, multiplying large numbers together or measuring the passage of time. We use tools to make up for the deficiencies of our brains.
One other important deficiency that we should acknowledge is that our brains tend to form beliefs for emotional reasons rather than rational ones, and can easily be fooled into forming false beliefs either by accident or by the malicious use of trickery. People interested in making us think one way or the other generally have an excellent understanding of this. In fact, advertising is an entire subject devoted to precisely that one task, and we all know how effective it can be. Stage magic is another example – magicians devote their lives to working out how humans can be deceived, and then they do precisely that right under your nose. There’s a reason why magicians are so often found debunking frauds and charlatans – because they have extensive first-hand experience of how easy it is to convince highly intelligent adult human beings of preposterously implausible claims merely using trickery and misdirection.
In summary, attempting to understand and explain the complexities of the Universe without correcting for the well-understood deficiencies of our unaided brains is like attempting to fly to the moon by climbing a hill and flapping your arms as fast as possible.
The good news is that for hundreds of years the smartest people on Earth have been thinking about exactly this problem, and they’ve come up with a solution which they are continuing to refine and improve all the time. It’s called Scientific Skepticism. The “Scientific” part comes from the application of the scientific method to analyse data from the world around us, to correct for biases and to learn new facts in a testable and quantifiable way. And the “Skepticism” part is all about enhancing this process by understanding how our brains can deceive us, and actively working to protect our investigations from the flaws and biases that we, as human beings, unwittingly introduce. The combination of both of these schools of thought is the best process we have for getting at the truth, and is therefore the best process we have for maximising human flourishing and minimising suffering. And that is why I support it, and why I spend so much effort getting as many people as possible to think the same way.
I have said this many times before, but it bears repetition: I can think of no aspect of human civilisation that would be worsened by the application of scientific skepticism, and many that would be immeasurably improved. Inequities that have plagued our species since prehistory would be entirely resolved by a political system based around a rational approach to truth claims. Scientific skepticism is the antidote to frauds and scams, it utterly disarms fakes and charlatans, it destroys pretty much all imaginable forms of negative discrimination, from racism and sexism to discrimination based on gender identity or sexual preference. Scientific skepticism would end all conspiracy theories, put a stop to dangerous anti-science movements like anti-vaccine activism and denial of anthropogenic global warming. It would remove in one fell swoop farcical creationist opposition to the teaching of established science in schools, and grief vampires like psychics, faith healers and mediums who prey on the emotionally compromised to benefit their own bank balances. It would remove from the market all bogus health products and techniques – those aggressively marketed by pharmaceutical companies as well as those peddled by fraudulent quacks and con-artists.
Scientific Skepticism is vital for the continued flourishing of the human race, and the good news is that everyone can develop the necessary skill set. Scientific Skepticism is a defence mechanism for our brains, an immune system protecting us from nonsensical or harmful ideas. It should be taught as an obligatory subject in schools and we should be enormously suspicious of those who fight against it. I cannot think of any rational reason why anyone should resist an effective mechanism for discovering truths, unless such a person benefits from spreading falsehood and disinformation – and such people can and should be opposed at every possible opportunity.
Some may worry that an application of Scientific Skepticism to their own beliefs would force them to make changes to their lives that may cause them emotional pain or uncertainty. This may be true – many delusional beliefs achieve their widespread adoption by providing a certain degree of comfort. I can offer two potential replies to this concern. Firstly, it is my strong belief that living under a comfortable delusion is not a path to psychological well-being – having to fight every hour of every day, consciously or subconsciously, to ensure that your comfortable delusion doesn’t get derailed by inconvenient facts is a tiring and stressful activity, and rejecting that way of life gives a sense of relief and freedom that is profoundly beneficial. And secondly, I believe we are morally obliged to look to the future generations that will inherit this world after we are gone. My generation may have grown up with a bagful of comfortable delusions that many of us are reluctant to part with, but there is no reason why we should enforce that conspicuous mental slavery on our children, any more than we should force them to live with the bloodletting, plagues, racism, homophobia and ubiquitous violence of times gone by.
Scientific Skepticism is not a panacea – there are many things it cannot solve. Adopting a rational way of thinking won’t suddenly cure cancer or prevent earthquakes. But it will empower our society to end the injustices that have plagued the human race for millennia. It will make our lives immeasurably better while we strive together to solve the real problems that face humanity in the 21st Century and beyond.
The 14th and final presentation is now available on YouTube. It’s on the topic of Scientific Skepticism.
Please watch it here: https://www.youtube.com/watch?v=_j48onNWXRI.
I am also giving away my latest book entirely for free online. Please download a copy and have a read. It’s available on my website here: www.frayn.net/books/newton.
For those of you who celebrate this time of year, I hope you have a wonderful time.
I’ll let you all know when I have finished planning out my next project. Until then – best wishes.
I’ve posted another lecture on YouTube. This one is all about Significance – what does it mean to say that a scientific result is significant, and what happens when you get that wrong?
You can watch it here: https://www.youtube.com/watch?v=_B5mCKIJTZ8.
Finally, I have released a new YouTube video at the following link: http://youtu.be/ihCUC8Derhs. In this one, I talk about the difference between science as an idealised process for learning about the world, and science as an actual establishment implemented by fallible human beings.
Sorry for the delay in this process. You’ll be pleased to know that I’ve started work on episode ten already. 🙂
I have a few updates that I thought I would share:
– I have finished my new book, “Educating Newton”, and am currently looking for a publisher. I’ll update you on that when I get any further. So far a few agents have read it and enjoyed it, but no takers yet.
– Version 1.3 of DungeonDelveXL has just been released with a huge number of fixes and improvements.