We’ve been staying near Tromsø in the far North of Norway for the last week or so; it was my 30th birthday a couple of weeks ago, but at that point the days here were two hours long! So we thought we’d delay the trip slightly. Even though it is well inside the Arctic circle (which – I didn’t previously know this – delineates those Northern parts which have some period of 24-hour darkness in winter, and 24-hour light in summer), Tromsø is quite a thriving little city, and it boasts various “most Northerly” things, such as university, brewery, botanic gardens, and (perhaps slightly less commendably) Burger King. Not quite as cold as you might expect, by virtue of its coastal location, and the fact that it is situated right at the very Northern end of the Gulf Stream. However, that is not to say it is not cold! The mercury hasn’t risen above freezing since we’ve been here, and it has got as low as -19°C on a couple of nights.

In fact this is one of the coldest and driest winters they’ve had here: usually a huge amount of snow falls over winter, but this year there has been almost none. As a result the earth has been exposed to the full extent of the cold, meaning that the frost has permeated deep into the ground, and the spring which supplies the water to our cottage has partially frozen (finding and maintaining one’s own water supply is clearly rather more laborious here than in warmer climes). So we have had little more than droplets of water coming from the taps. Luckily there is a sauna in the house, so it is possible to mitigate the icy trickle which is our shower by heating oneself up as much as is bearable beforehand.

It’s a very beautiful place; quite mysterious, with magnificent mountains, a perpetually setting sun, frozen fjords and rivers, and (for some reason I have not yet worked out) occasionally steaming seas. Perhaps most magical of all, however, and one of the main draws for us, is that Tromsø lies right in the centre of the Northern auroral zone. This is the band between around 10 and 20 degrees of latitude from the magnetic North pole, to which charged particles from the sun are drawn by the Earth’s magnetic field. These particles react with atoms in the atmosphere, in much the same way as electricity reacts with the gas in a neon tube, to create a wonderful spectacle known as the aurora borealis, or Northern lights (the Southern equivalent is known as the aurora australis).

I had seen the Northern lights once before as a teenager in the South of Scotland (when the solar wind is especially strong, the phenomenon occurs at lower latitudes). I didn’t know what it was at the time, which I think added to the general sense of wonder – scientific explanations have a way of taming the awe of the inexplicable. However, they were very faint, and I wasn’t quite prepared for quite how beautiful a full display would be. It consists of mostly ghostly green lights, with occasional hints of red, blue and yellow (the colours depending on the atmospheric gases reacting). They resemble giant flames, or luminous clouds of gas; constantly shifting and moving; often in a number of bands across the sky, but sometimes snaking into coils and spirals. There is always a sense of rippling and flickering, and when they are at their peak they seem to burn fiercely, flitting quickly in and out of existence.

Unsurprisingly, there have been some quite outlandish origins attributed to them by different native peoples of polar regions. It seems to vary from region to region whether they were thought of as having positive or negative connotations; whether they were gifts from a benign god, or bad omens and symbols of celestial displeasure. They are known in Scotland (especially Orkney and Shetland, where they are relatively common) as the “Merry Dancers”, which seems quite appropriate. Slightly less understandable is the traditional French term, “chèvres dansantes”, which translates as “dancing goats”. The Inuit attached spiritual significance to them, believing them to be images of their ancestors. What these ancestors were interpreted as doing seems to have varied from tribe to tribe, with some believing them to be simply sending messages through dance, and others perceiving them to be playing football with a walrus skull. Still more inverted this latter myth, and saw walruses playing football with a human skull!

Other odd interpretations come from different tribes of native North Americans. Most of these seem to be based on the assumption that the lights were from large fires in the North, but there any trace of common sense seems to end. For example: the Makah Indians of Washington State thought that there was a tribe of dwarfs living in the far North, who were “half the length of a canoe paddle and so strong they caught whales with their hands”. These dwarfs boiled whale blubber over the fires which caused the lights. On the other hand, the Menominee Indians of Wisconsin thought the fires were from torches used by friendly giants to spear fish at night.

I suppose these outlandish explanations go some way to conveying the mysteriousness of the lights…you could probably see whatever you wanted in them if you looked hard enough. There are also records of strange phenomena occurring when the lights appear, such as whistling and crackling sounds in the sky, and interference with electrical devices: there are even reports of battery-powered radios working without batteries during particularly strong events. Our radio would make such odd sounds when they were active that we had to switch it off.

Here in Tromsø the northern lights are just a part of life – in the winter months they appear roughly every other clear night to some extent, so seeing them during a trip here is really just a function of the weather. We’ve been very lucky in this respect, and have seen them 4 nights out of 7, sometimes quite faintly, but once or twice utterly spectacularly. So, if you want to see the Northern Lights, and live in Europe, I highly recommend coming to Tromsø. Just check the weather forecast, and, as soon as there is sure to be a clear spell, hop on a flight. Just be sure to bring some thermal underwear, and lots of money (a pint of beer costs about £9.50)!

You may also want to look at this very useful website, which collates all the possible data you might need to predict a display, such as geomagnetic activity (if you are looking at this post sometime soon after I have written it, notice the great activity around the 24th/25th January 2012 – the lights were visible as far South as London at that point), the current location of the auroral zone, and a 360° webcam of the skies above Tromsø.

Below are some photos.


So…deep breath. I’m going to attempt to explain this whole Higgs boson thing which the news keeps going on about, and which, seeing as it is supposedly one of the most important things ever, I’ve been meaning for a while to actually try to properly understand. Usual disclaimers: I am not a physicist (in fact whether or not I’m even a proper mathematician is arguable) and I am writing this mainly as a motivation to increase my own understanding. However, my theory is that, unless an expert is a supremely good communicator, it is often easier to gain a basic understanding of a complex subject from another interested layperson (as they know exactly how you feel). Certainly I would have liked someone else to have written something like this to save me the effort!

I think we have all heard about the search for the Higgs boson by the people at CERN. Probably, if you’re still reading this, you have also, like me, wondered exactly what this boson is, what it does, and why it matters so much. And probably you have some vague notion that it is a particle which “gives other particles mass”. That is the point I shall start from.

But first, a question – why are things the size they are? Sounds a bit vague and philosophical, I know. But the size of an object is determined by the size of the molecules which make it up, which are in turn determined by the size of their constituent atoms. Atoms consist of a nucleus made up of protons and neutrons, surrounded by orbiting electrons. And the size of an atom is determined by the sizes of the orbits of its electrons. But the size of electrons’ orbits depends on the mass of the electron! So in order to find an answer to why things are the size they are, we need to address the question of why an electron has the mass it does. And while we’re at it, we may as well ask why other elementary particles have the mass they do…for example, why do photons have no mass at all?
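For the curious, the dependence of atomic size on electron mass can be made precise: in the simple Bohr model of the atom, the radius of the smallest electron orbit (the “Bohr radius”) is inversely proportional to the electron’s mass. This is a standard textbook formula rather than anything specific to the Higgs story:

```latex
a_0 = \frac{4\pi\varepsilon_0\hbar^2}{m_e e^2} \approx 5.29 \times 10^{-11}\ \text{m}
```

So if the electron were heavier, atoms – and hence everything made of them – would be smaller.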

So there’s a bit of motivation. Of course, the question of why mass exists (which is really what we’re asking when we ask why fundamental particles – which are the building blocks of the universe – have the mass they do) belongs in the realm of philosophy. However, we can at least use scientific methods to approach the question of *how* there comes to be mass. The problem is that the simplest mathematical theory (and it is usually the simplest ones that are right) says that all fundamental particles should be entirely massless, which is clearly wrong.

Now, physics proceeds as follows: theoretical physicists use experimental evidence to formulate mathematical theories, and experimental physicists attempt to either verify or disprove these theories, in a constant back and forth. It is not necessarily for the best when the evidence is in favour of a theory though; all this does is make that theory a bit more likely. On the other hand, just as in mathematics, it only takes one counterexample to completely rule something out. It is often when a particularly well-established theory is shown by an experiment to be “false” that new discoveries are made: for example, Wolfgang Pauli invented the neutrino in a “desperate” (his own word) attempt to explain away the apparent violation of conservation of energy in radioactive beta decay; he was later shown to be correct.

In the case of the mass of particles, we have something of a reverse situation: a well-established empirical observation (things have mass!) which is contradicted by what seems to be the obvious theory. And so, in time-honoured tradition, Peter Higgs postulated away this discrepancy with his proposal of a new particle, which has come to be known as the Higgs boson (a boson, by the way, is just a type of subatomic particle characterised by the fact that it has integer spin…”spin” is just some kind of odd quantum-mechanical version of exactly what it sounds like, which I won’t go into).

Now, it is by no means certain that the Higgs boson exists; as you may have gathered, the point of the Large Hadron Collider is largely to clear this issue up. By accelerating particles to obscene speeds and crashing them into each other we can produce a shower of smaller particles, and it is hoped that eventually the Higgs boson will be spotted in one of these showers. However, although the Standard Model (currently the most widely accepted “theory of just about everything”) does predict the existence of the Higgs boson, it unfortunately does not predict the mass of the Higgs boson, hence the protracted search. Of course, the Standard Model is just a model, and the Higgs boson might not exist at all. This wouldn’t be such a bad thing (unless perhaps you are Peter Higgs), as there are lots of alternative theories as to how particles gain mass (known as Higgsless models, which sound delightfully like something out of Hundred-Acre Wood).

But it is generally accepted that the most likely scenario is that it does exist. In which case, how does it work? Well, the reason quantum theory has its name is that it was observed that certain physical properties change only in discrete “quanta”, rather than continuously. One of the most central concepts in quantum theory is that of “wave-particle duality”, which says that objects at the quantum scale exhibit characteristics of both continuous waves and discrete particles. Similarly, quantum field theories like the Standard Model say that waves in the supposedly continuous fields of classical mechanics, such as the electromagnetic and gravitational fields, are in fact quantised, and it is the quantum “excitations” (energy levels) of these waves which are the elementary particles.

That is, every field has a particle associated with it, and the force exerted by a given field can be viewed as the action of its “force carrier” particles. For example, the particle associated with the electromagnetic field is the photon; as such all flux in the electromagnetic field has discrete levels, with the basic unit of flux a photon. It has even been postulated that there is an as-yet-undiscovered particle – the graviton – which mediates the gravitational field (the Standard Model does not explain gravity, among some other things, hence the “just about” qualification above). This differs from the classical view of particles with forces acting between them, as it implies that in fact *everything* is particles…for example, the electromagnetic “force” between two electrons is simply an exchange of photons. Alternatively, everything is really just fields: particles are not little balls at all, but simply discrete fluctuations in a field. It’s all a bit confusing, but however you look at it, fields are particles are fields…

So, the fabled Higgs boson, being an elementary particle, has its very own associated field. The Higgs field exists throughout the universe, in the vacuum between every particle, and it is the interactions of particles with this field which is postulated to give them their mass. That is, we can think of particles moving through the Higgs field: the more they interact, or are slowed down by the field, the greater their mass. The theory goes that, in the Beginning, every particle travelled at the speed of light, and had no mass. And then God said “Let there be mass”, and lo, they started to interact with the Higgs field, like so many balls of various stickiness moving through a pool of molasses. Some, such as photons and neutrinos, simply shrugged off the molasses and zipped through on their merry way, to be forever massless and light-speed. Others became bogged down and sluggish, and it is this very sluggishness that we perceive as mass.

That probably isn’t very satisfactory…I haven’t even addressed how the Higgs field interacts with the particles. But my brain is tired, and it’s about as far as I can go without getting into some pretty heavy duty stuff. So I’ll finish by quoting an analogy of how the Higgs mechanism works, using Margaret Thatcher as the analogue of an elementary particle, and a roomful of lesser politicians as the Higgs field. This was written by one Professor David Miller for the UK Science Minister in 1993, in terms I suppose he thought the minister might understand:

*Imagine a cocktail party of political party workers who are uniformly distributed across the floor, all talking to their nearest neighbours. The ex-Prime-Minister enters and crosses the room. All of the workers in her neighbourhood are strongly attracted to her and cluster round her. As she moves she attracts the people she comes close to, while the ones she has left return to their even spacing. Because of the knot of people always clustered around her she acquires a greater mass than normal, that is, she has more momentum for the same speed of movement across the room. Once moving she is harder to stop, and once stopped she is harder to get moving again because the clustering process has to be restarted. In three dimensions, and with the complications of relativity, this is the Higgs mechanism…*

(Note that momentum is simply mass times velocity. If a particle gains momentum, but remains at the same velocity, then it gains mass.)

*…Now consider a rumour passing through our room full of uniformly spread political workers. Those near the door hear of it first and cluster together to get the details, then they turn and move closer to their next neighbours who want to know about it too. A wave of clustering passes through the room. It may spread out to all the corners, or it may form a compact bunch which carries the news along a line of workers from the door to some dignitary at the other side of the room. Since the information is carried by clusters of people, and since it was clustering which gave extra mass to the ex-Prime Minister, then the rumour-carrying clusters also have mass. The Higgs boson is predicted to be just such a clustering in the Higgs field.*


The paper in question is a very short one, attributed to one Shalosh B. Ekhad. A quick search reveals that this “person” is in fact a computer belonging to a mathematician called Doron Zeilberger, who is quite well known in mathematical circles for his love of computers, and for not being entirely serious all of the time. However, I will humour him by writing as if it was indeed the entity named Shalosh B. Ekhad who wrote this article.

So, Ekhad became interested in how often the Jewish holiday of Hanukkah coincides with Christmas, and began to run computer searches (which I suppose equates to just thinking about it, if you are actually a computer). For the non-Jews reading this (I had to look it up myself) Hanukkah is an 8-day holiday which begins on the 25th day of the month of Kislev in the Hebrew Calendar. The Hebrew calendar is an example of a lunisolar system, in that it takes into account the relative motions of both the sun and the moon. Each year consists of 12 lunar months of either 29 or 30 days, apart from leap years which have an extra month. Leap years occur 7 times every 19 years; if we think of the first year of a 19-year cycle as being year 1, then the leap years are years 3, 6, 8, 11, 14, 17, and 19.
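That leap-year pattern looks irregular, but it can be captured by a one-line modular test. Here is a quick Python sketch; the compact formula is a well-known equivalent of the “years 3, 6, 8, 11, 14, 17, 19” rule, not something taken from Ekhad’s paper:

```python
def is_hebrew_leap(year_in_cycle):
    """True if this year (1 to 19) of the 19-year cycle is a leap year."""
    # Standard modular encoding of the 7-in-19 leap-year pattern.
    return (7 * (year_in_cycle % 19) + 1) % 19 < 7

print([y for y in range(1, 20) if is_hebrew_leap(y)])
# [3, 6, 8, 11, 14, 17, 19]
```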

As you might expect, this means that dates of the Hebrew calendar vary quite wildly in relation to those of our strictly solar Gregorian calendar. In particular it means that the pattern of years in which Christmas falls within the Hanukkah period is highly unpredictable. What Ekhad found was that it will happen in 27% of the years of this (3rd) millennium, with this figure falling for subsequent millennia, until the 9th millennium, when it will stop happening altogether until at least the year 20000 AD. So far, so mildly diverting. Much more interesting is the following observation: between the years 1801 and 7390, that is, in the time-span in which the gaps between years in which Christmas falls within Hanukkah are relatively small, the numbers of years making up these gaps are *always* Fibonacci numbers! (I talked about Fibonacci numbers in this post). In particular they are always either 2, 3, 5 or 8. Ekhad then goes on to point out that exactly the same phenomenon occurs for years in which Christmas falls within Sukkot, another Jewish holiday lasting 7 days.

This seems quite incredible…perhaps slightly less so for those who are used to Fibonacci numbers cropping up in the most unexpected places, but it cries out for explanation all the same. So what is going on here? Well, the regular occurrence of gaps of 2 and 3 years between these special Christmas-in-Hanukkah years surely has something to do with the fact that the number of non-leap years between leap years in the Hebrew calendar is always either 2 or 3. The sequence of gaps between leap years (3,2,3,3,3,2,3) would also go some way to explaining the occurrence of 5 and 8 as well, as both of these numbers can be made from sums of consecutive numbers from this sequence. But then so can 6, 9, 10, 13…
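That last claim is easy to check by brute force. The following Python snippet lists every gap length obtainable as a sum of consecutive entries of the repeating cycle (3, 2, 3, 3, 3, 2, 3), allowing sums to wrap around the cycle:

```python
from itertools import accumulate

cycle = [3, 2, 3, 3, 3, 2, 3]
doubled = cycle * 2  # doubled so that sums can wrap around the cycle
sums = set()
for i in range(len(cycle)):
    # Running totals starting at each position give all consecutive sums.
    sums.update(accumulate(doubled[i:i + len(cycle)]))

print(sorted(s for s in sums if s <= 13))
# [2, 3, 5, 6, 8, 9, 10, 11, 13]
```

So 5 and 8 do appear, but so do the non-Fibonacci numbers 6, 9, 10, 11 and 13, which is exactly why the leap-year cycle alone cannot be the whole explanation.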

So there’s more to it than that. Any insights?


Of course, most of this time was not spent actually doing anything related to that particular paper. In fact, the majority of the time was spent waiting for referees to get round to reading the thing. Actually “waiting” is the wrong word, as I have come to realise that the best strategy when submitting papers to journals is not to wait, but to completely put it out of your mind (unfortunately this doesn’t help when you then have to revise it months later), and perhaps set some kind of reminder to get in touch with the editor one year in the future and ask exactly what is going on. I currently have two other papers “under review”, one of which has been “with editor” (I assume this to mean that the editor hasn’t got around to actually looking at it, let alone passing it to a referee) since May, and the other which, perhaps thankfully, I have no way of knowing what is happening with.

There is intermittent hand-wringing about the peer-review system in mathematical circles, and in academia in general. Like exams, and job interviews, it seems to be grudgingly accepted to be the least bad form of evaluation. Recently Timothy Gowers raised the possibility of an alternative system on his blog, which led to much fevered debate (I have just noted that I am at least the seventh blog to have linked to that particular post, so it is safe to assume the debate sparked by it stretches much further than that particular lengthy list of comments!).

I am always aware of the non-mathematicians – and non-academics in general – that might be reading this blog; so briefly, this is how the current system works: you have some original idea, and write a paper about it. You submit your paper to an appropriate journal. Having ascertained to their satisfaction that you are not a crank/crackpot/charlatan, the editors of that journal identify those among their list of potential referees who are most likely to know your subject well enough to understand what you are talking about, and pass it on to them. These referees read your paper as sceptically as they can, and write a review. This review includes, crucially, their opinion as to whether or not the paper should be published, and if so, what should first be changed. The editor makes a decision based on these reviews, which is passed back to you. You then either make the recommended changes/argue your case, try another journal, or give up entirely, depending on what exactly the judgement has been.

In theory, this sounds like a very sensible and rigorous way of doing things: objective evaluation by impartial experts. However, consider the following statistics (lifted from this site about publishing in economics journals, but which I think apply fairly broadly):

1. The average wait for an acceptance decision is 3 years (on the other hand, the average wait for a rejection is 6 to 8 months. At least you are relatively swiftly put out of your misery…)

2. Assuming the average acceptance rate of “good” journals to be 15%, you need to have 7 papers under review in such journals at all times in order to have one paper accepted per year in a “good” journal.

3. If you want to have 10 papers published in the first 5 years of your career, then you need to have about 12 papers under review at all times.

Pretty daunting! Especially for someone who, like me, is taking the first few tentative steps onto the academic career ladder. For established mathematicians, it doesn’t really matter too much how long a paper takes to get published (within reason of course), as they are relatively secure in their positions. However, I am currently applying for post-doctoral fellowships, and the length of a publications list is one of the most obvious ways to judge the quality of such applications. But how is one possibly expected to have such a list at all if a paper takes longer to get published than it takes to do a PhD?

Incidentally, I was once told that part of the current problem with the peer-review system is the fact that there are too many people submitting papers these days, and not enough established referees. It is amazing how many of the world’s problems come down, basically, to over-population…

Whatever the cause of the problems, time-scales like these seem quite anachronistic in the internet age; clearly an alternative system would be desirable. And indeed, there are at least the first murmurings of one with arXiv.org. This is basically a preprint repository, where people post their papers while they wait for the wheels of peer-review to grind their slow and rusty way to a conclusion. It enables people to share their results with the wider academic community immediately and effectively, and within a few years, it seems to have become an integral part of scientific life. However, a site like this is obviously no replacement for peer-reviewed journals (seeing a “proof” of the Riemann Hypothesis posted every few weeks or so serves as a handy reminder to be very sceptical of any new research that hasn’t yet been reviewed).

Basically what Gowers was proposing was a kind of extension of arXiv…you would upload your paper to some website, and it would then be reviewed by other people in your field. The lengthy comments and debates which arose from his post were mainly about the details: what people’s motivation would be to review papers, how easy it would be to manipulate the system, what level of anonymity would be required, and various other seemingly small, but very important details.

Interestingly, it seems as though the publishers of journals are themselves expecting some kind of seismic shift in the way these things are done. Upon confirming the proofs of our paper, I was offered the choice of various extras and options. The price list was as follows:

- To make the paper “Open Access” (that is, to basically buy the copyright from the journal): €2000
- To have figures printed in colour: €950
- To order extra offprints (aside from the measly one free copy you are allocated as author): €200 for 25 copies (same price for electronic copies!)
- To order a poster of your article: €50

I could go on, but you get the point. I have never seen such a blatant attempt to rip a person off outside of my spam folder. When I complained about this to my supervisor, he opined that, because the future is so uncertain for these publishers, they are trying to make as much money as possible now while they can.

Ironically, if they hadn’t set the price quite so high (how do you possibly justify charging €8 for a .pdf? The mind boggles) I would have actually ordered some offprints, as various members of my family would love to have some hard evidence that my years of university education have produced something tangible! They’ll just have to take my word for it instead.

^{* I say “same”, but actually the paper that was finally accepted bears little resemblance to that one! (The final version is here if you are interested). Same in the sense that a river is always the same river I suppose.}


Here is a question for you: what day of the week will it be the next time this happens, on 11/11/2111? This is precisely the kind of question that some “idiot savants” are famously good at answering very quickly. How could they possibly do this, in their heads, in a matter of seconds? It seems very mysterious, until you give it some thought. Not that I have! But John Conway, a highly esteemed mathematician who needs no introduction to any other mathematicians who may be reading this (non-mathematicians might possibly unwittingly know of him through his creation the Game of Life…if you can remember back to the dark old days of Windows 95, this was actually included as a “game” along with Minesweeper et al.)^{*}, has actually invented a method of giving the day of the week for any given date. Why? I don’t know. Perhaps he was bored of competing with mere mortals and decided to take on the savants.

Anyway, his method is called the Doomsday algorithm, and is actually quite simple. It takes advantage of the fact that the “Doomsday” of a given year – that is, the last day of February that year – always falls on the same day of the week as the following easy-to-remember dates: 4/4, 6/6, 8/8, 10/10, 12/12. So all we need is the Doomsday for a given year; we can then exploit this fact to get to the closest date to the one we want, and then just count mod 7 (which basically means, in this context, go cyclically through the days of the week) until we get to our desired date.

To find the Doomsday for a given year, we just need to start with the Doomsday for this year. Then use the fact that the same date next year will be one day forward (as 365/7 has remainder one), or two if next year is a leap year.

So, given that the Doomsday this year is Monday, in 100 years it will be Monday + [124], as there are 24 leap years between now and then (years divisible by 100 are not leap years! Unless they are divisible by 400). 124/7 has remainder 5, so we get Monday + [5] = Saturday. So Doomsday is Saturday in 2111, hence 10th October is also Saturday, and 11th November is therefore Saturday + [32] (as October has 31 days). 32/7 has remainder 4, so finally we get Saturday+[4]=Wednesday.
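For the less savant-like among us, the whole procedure is easy to mechanise. Here is a rough Python sketch of the method as described above, anchored at Doomsday 2011 = Monday (so it only handles years from 2011 onwards; the function names are my own, not Conway’s):

```python
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

def is_leap(year):
    # Divisible by 4, except centuries, except those divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def doomsday(year):
    # Doomsday moves forward one weekday per year, or two when the new
    # year is a leap year. Anchor: Doomsday 2011 was a Monday (index 0).
    shift = sum(2 if is_leap(y) else 1 for y in range(2012, year + 1))
    return shift % 7

def day_of_year(day, month, year):
    lengths = [31, 29 if is_leap(year) else 28, 31, 30, 31, 30,
               31, 31, 30, 31, 30, 31]
    return sum(lengths[:month - 1]) + day

def weekday(day, month, year):
    # 10 October always falls on Doomsday, so count mod 7 from there.
    offset = day_of_year(day, month, year) - day_of_year(10, 10, year)
    return DAYS[(doomsday(year) + offset) % 7]

print(weekday(11, 11, 2111))  # Wednesday, as computed above
```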

Great! Now we know that the next time it is 11:11:11 on 11/11/11, it will be a Wednesday. Of course we could also have just looked it up on the internet. And this is all assuming that you care, which is quite an assumption to make. But my point is that there is nothing particularly complicated here. Conway is reputed to be able to do this for most dates in just a few seconds, and so the claim that savants are using some mysterious part of their brain, or are somehow hard-wired into the Gregorian calendar system, begins to look a bit unlikely. It is more likely to be the case that they are simply very good at mental arithmetic (especially mental modular arithmetic).

Incidentally, note that the Gregorian calendar “resets” every 400 years, in two senses. Firstly, as I mentioned, years divisible by 4 are leap years, unless they are divisible by 100, in which case they aren’t. Except when they’re divisible by 400, in which case they are! Conveniently, the number of days in 400 years – of which 97 (=24+24+24+25) are leap years – is divisible by 7, which means that the whole days-of-the-week/date correspondence also resets. So we can instantly say that 11th November 2411 will be a Friday.
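The reset is a two-line calculation to verify (a quick sanity check of the arithmetic above, nothing deep):

```python
def is_leap(year):
    # Divisible by 4, except centuries, except those divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Any 400-year block will do; take 2001-2400 inclusive.
leaps = sum(is_leap(y) for y in range(2001, 2401))
days = 400 * 365 + leaps
print(leaps, days % 7)  # 97 leap years, and 146097 days leaves remainder 0 mod 7
```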

That’s quite enough days-of-the-week. It may have occurred to you, whilst pondering the Doomsday algorithm, that perhaps the Gregorian Calendar is more complicated than it could be. Also, surely non-Christians are not entirely happy with having the supposed year of Christ’s birth as its starting point?

The Islamic calendar is lunar, and has months with the following names (translated from the Arabic):

- “Forbidden”
- “Void”
- “The First Spring”
- “The Second Spring”
- “The first month of parched land”
- “The second month of parched land”
- “Respect”
- “Scattered”
- “Scorched”
- “Raised”
- “Truce”
- “Pilgrimage”

This naming system gives some idea as to how violent and harsh life must have been in the Arabian peninsula at the time. The Forbidden refers to fighting – no fighting allowed in this month. Similarly Respect and Truce. But this still leaves 9 months of fighting time! In fact it will have been necessary in month 2, which is called Void as this was traditionally when the non-Muslims came and robbed everyone, leaving houses empty. Months 5 through 9 generally sound pretty difficult: the “Scattered” of month 8 refers to the necessity of spreading out to look for water after 3 parched months. And then to top it all off, in month 9, which sounds worst of all, they had to fast all day (“Ramadan” means “scorched”). I think the best month, however, is Raised: apparently she-camels raise their tails when they are pregnant. I like that there is a whole month named after camels’ biological cycles. Then again, given the importance of camels if you live in the desert, this is perhaps more sensible and practical than naming months after Roman dictators (July, August) and Gods of War (March), and certainly more interesting than just calling them the 9th month (November), 10th month (December) etc.

However, the lunar system would have led to some problems. Given the fact that 12 lunar months is 11 or 12 days off the time it takes the Earth to travel around the sun, after 20 years or so camels would be raising their tails at entirely the wrong time of year, fighting would be taking place when time would be more practically spent looking for water, and all sorts of other confusion would ensue. I assume they found a way around this, but I’m not sure how.
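The size of the problem is easy to estimate. Using standard modern average figures for the synodic month and the solar year (my numbers, not anything historical):

```python
lunar_year = 12 * 29.53   # twelve synodic months, in days
solar_year = 365.24       # mean solar year, in days
drift_per_year = solar_year - lunar_year
print(round(drift_per_year, 1))    # about 10.9 days of drift per year
print(round(20 * drift_per_year))  # about 218 days after 20 years
```

After twenty years the calendar has slipped by over half a year, so a month named for spring would be falling in autumn.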

Most ancient civilisations used lunar calendars…understandable if they had only rudimentary astronomical skills, as the moon provides a regular and easily-observable marker of time. Ironically though, the medieval Islamic astronomers were probably the best in the world at the time, and this was a direct result of having such a calendar. A new month was only declared when certain respected men testified before a committee that they had seen the crescent moon. If it happened to be cloudy that day, then the month would have to start the next day instead. This was, of course, not very satisfactory; it was important to be able to know in advance when the next new moon would occur, as different months meant different religious practices. The search for alternative methods of prediction led to an intensive study of astronomy, and great advances in the subject.

OK, that’s all for today, I’ve got to stop writing now and observe two minutes’ silence. If you are interested in alternative calendar systems, you’ll be pleased to hear that there is a whole wiki devoted to them! Here. You’d be surprised how many there are.

Finally, as a footnote, I see that, despite a habit of writing rambling, over-long posts with gaps of up to a year in between, I seem to have somehow reached the milestone of 50 Google Reader subscribers (if you are interested in how many people subscribe to a given blog through RSS, most of them will be through Google…you can find out by going to Google Reader, clicking “subscribe” and searching for that blog).

At the risk of sounding like some kind of customer satisfaction survey: Who are you all? How did you get here? What do you like or dislike about this blog? The more feedback I get, the better your experience might be!

^{* By the way, if you are not a mathematician, can you name any famous mathematicians? I would guess probably not. Can you name any famous physicists? I would guess at least 2 or 3. This has always struck me as exceedingly unfair.}

EDIT: since I posted this (about 5 hours ago), people have landed on this blog through the following searches:

“what day will be 11th november 2111”

“day+of+the+week+of+november+11,+2111”

“what day of the week is 11/11/2111”

“what dayoftheweek will 11112111”

“what day will it be nov 11, 2111”

“what day is november 11th 2111?”

“what day of the week will the next 11/11/11 be”

Why is everyone so interested in what day of the week it will be on 11/11/2111?! How odd. The only possible explanation I have for this is that some teacher somewhere set this as a homework question, and the pupils turned to Google for the answer…
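For anyone who lands here from one of those searches: you can skip the Doomsday arithmetic entirely, since Python’s standard `datetime` module (which uses the proleptic Gregorian calendar) answers the homework question in one line.

```python
from datetime import date

# The homework question everyone seems to be googling:
print(date(2111, 11, 11).strftime("%A"))  # → Wednesday

# And the "next 11/11/11" from one of the other searches:
print(date(2011, 11, 11).strftime("%A"))  # → Friday
```

You can check the first answer against the Doomsday algorithm: the anchor day for the 2100s is Sunday, the doomsday for 2111 works out as Saturday, and 11 November is four days after the doomsday 7 November – Wednesday.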


---

The book in question is *The Emperor’s New Mind*, by Roger Penrose. The main thesis of this wonderful book is, apparently (and in a very small nutshell), that the mind does not work like a computer^{*}. However, I am currently about 3/4 of the way through it, and this has not yet been touched upon! Rather, over 400 pages or so, Penrose has valiantly attempted to explain Turing machines, classical mechanics, relativity, quantum theory and cosmology to the interested (and, one must assume, quite dedicated) layperson. I can only assume that all this is going to coalesce into a grand theory of Mind, but it does so far seem like quite an ambitious project. Having tried to achieve this kind of comprehensive introduction to even the smallest of mathematical subjects myself in previous posts (you might have noticed that I have long since given up trying to do this), I have great respect for Penrose’s tenacity. I find that the problem with this type of enterprise lies in trying to tread the line between being impenetrable to non-mathematicians, and boring for mathematicians. While *The Emperor’s New Mind* is a great book, I think it is safe to say that it probably falls on the former side of this line; it is perhaps not entirely suitable for bedtime reading.

Anyway, I have just been reading Penrose’s take on the maltreated feline of this post’s title, and it got me thinking, so I thought I would discuss it. The cat in question is a paradox which Erwin Schrödinger came up with in order to show the absurdity of trying to apply quantum theory at the classical physical level (that is, the everyday world with which we interact, as opposed to the exceedingly odd quantum level of subatomic particles). This is, of course, a massive and complex subject, and I will only provide the merest scratch of its surface! If you happen to be a pedantic physicist, then please do comment on any inaccuracies in what follows.

First of all, I will need to briefly explain Heisenberg’s Uncertainty Principle. This says that it is not possible to measure both the momentum and the position of a subatomic particle (such as an electron) accurately at the same time. The more accurately we know one of these, the more our measurement of the other becomes a matter of probability. This may not sound so odd as it stands. However, consider an extreme example. If the momentum of a particle is specified precisely at some point in time, then its future momentum is entirely predictable, just as you might expect. However, if we now choose to measure its position (i.e. simply work out where it actually *is*), then we will find that, because the momentum was precisely specified, the particle has equal probability of being at any one point in space as any other!^{**} On the other hand, if we chose to first measure its position, then any future measurement of momentum will be completely uncertain.
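To get a feel for the trade-off, here is a back-of-the-envelope calculation (my own, not from Penrose’s book): the principle puts a hard floor of ħ/2 on the product of the two uncertainties, so pinning down an electron’s velocity to within 1 m/s leaves its position fuzzy on an almost macroscopic scale.

```python
# Back-of-envelope Heisenberg bound: dx >= hbar / (2 * m * dv)
HBAR = 1.054571817e-34        # reduced Planck constant, J*s
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

dv = 1.0                       # uncertainty in velocity, m/s
dp = M_ELECTRON * dv           # corresponding uncertainty in momentum
dx_min = HBAR / (2 * dp)       # minimum possible position uncertainty, m

print(f"dx >= {dx_min:.2e} m")  # about 6e-5 m: a twentieth of a millimetre
```

A twentieth of a millimetre sounds tiny, but for a particle far smaller than an atom it is an enormous blur – and the bound only bites for such light objects, which is why billiard balls seem perfectly well-behaved.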

So far, so wacky. But now consider what we actually mean by “measure”. I don’t know much about experimental physics (or indeed physics in general, I should confess), and have absolutely no idea how one actually goes about measuring the momentum of a single subatomic particle. But this is not too important. A measurement here simply means an act of observation, and it necessarily involves a conscious mind. To measure a particle’s momentum is simply to “become conscious” of its momentum. But the consequences of the act of observation in quantum mechanics are a great source of confusion and paradox. To observe a particle is to dramatically – and discontinuously – alter its behaviour. A particle’s behaviour is smooth and predictable (albeit in a rather unintuitive way) up to the moment in which an observation takes place, at which point it suddenly changes. In more scientific terms, a particle has a rigorous mathematical description – a wavefunction – describing all its possible states; once the particle is observed, this wavefunction “collapses” into one of these possible states, and which exact state it collapses into is a matter of probability, rather than determinism.

In summary, the very act of observing a particle changes its behaviour! This is quite mind-blowing. Personally I think that it could be construed as fairly strong evidence that reality is an illusion created by our collective consciousnesses. But that might be a result of reading too many books like *The Tao of Physics* and *The Holographic Universe* in my youth. Roger Penrose is quite strongly opposed to this “subjective reality” theory (in fact, up to where I have read he is strongly opposed to most attempts at explaining such crazy quantum behaviour, but I assume he will eventually come forth with a theory of his own).

Anyway, back to Schrödinger’s cat. So imagine a sealed container, inside which we deposit a cat (preferably someone else’s cat), a vial of cyanide gas, and a device which, when triggered by some quantum event, smashes the vial and kills the cat. Note that it is not too difficult to give an example of such a trigger: Schrödinger himself thought of an electron which may or may not be emitted by a decaying radioactive substance; Penrose uses the example of a single photon which may or may not be reflected by a half-silvered mirror. In both cases, it would technically be possible to set up the experiment so that there is an equal probability of the event occurring as not in a given time period.

Now we let the experiment run for the allotted time, and then ask the question: Is the cat alive or dead? At this point, the natural reaction is to say that, well, it is certainly one or the other, and exactly which is a matter of probability. And indeed this would be the case, if you were in the box with the cat (presumably wearing a gas mask). But, as explained above, without an observer, a particle exists in a superposition of various possible states. It is only once observed that the wavefunction describing these possible states collapses into the one which we observe. So, if we assume that quantum effects can be magnified to the classical level, and if there is no observer inside the container (I’m afraid we must also make the possibly dubious assumption that the cat has no consciousness!), the triggering particle has both been emitted AND not emitted (or reflected and not reflected, depending on which version we use), in which case, until we look inside the box, the cat is suspended in some mysterious limbo state between life and death.

Of course, this is ridiculous. And it perfectly sums up the absurdity of assuming that quantum effects can apply right up to the observable, classical scale. If that were the case, then a cat could be simultaneously both alive and dead (similarly, a tree falling in an unpopulated wood could make both a sound and no sound). But how can we possibly explain this? If quantum mechanics perfectly explains the behaviour of subatomic particles, and subatomic particles make up the world we experience, how can it be paradoxical for us to experience quantum effects?

Well, this question has naturally been the springboard for a whole slew of alternative quantum theories. Take the “many-worlds interpretation,” for example. This is one of the more widely-accepted interpretations (which goes some way to demonstrating how crazy these explanations can get). Very briefly and crudely, it suggests that there are infinitely many parallel worlds: rather than simply causing a particle to collapse into one of its simultaneous states, an observation of a quantum effect actually causes our current world to veer off into one of many possible parallel worlds. In Schrödinger’s experiment, this would mean that the cat is indeed both alive and dead, but that these two possibilities occur in different worlds; opening the box and observing the cat causes our world to branch off into one of these two alternatives. This theory imagines the unfolding of reality as a many-branched tree, rather than a continuous line.

Many other theories have been postulated, none of which I really feel inclined (or qualified) to talk about. Needless to say, this is a thorny philosophical question. It seems clear that while quantum theory is an accurate way of modelling reality on a certain scale, it is just not consistent with the classical world without some kind of grand re-imagining of reality.

^{* Incidentally, I can’t help but agree with this view. And I think that the fact that various cultures have at different times used anything from electromagnetism to hydraulics as metaphors for the workings of the mind just shows that we have a tendency to assume that whatever the technological state of the art is, that must be how our brains work! However this is a whole can of worms which I won’t devote more than a mere footnote to. }

^{** Of course, the probability of a particle ever being at any given point is technically always zero, but you know what I mean.}

---

This is partly inspired by a book I’ve just read: *Galileo’s Daughter*, by Dava Sobel. It doesn’t really match up to *Longitude*, but is a good read nonetheless. It is really about the life and work of Galileo Galilei, although Sobel gives us the hard science and history in a more easily digestible form, by interweaving commentary on his relationship with his daughter. She seems to have been a quite extraordinary woman: sent to a convent at age thirteen due to her illegitimacy (and hence lack of marriage prospects), she spent her whole life in extreme poverty within those walls, but still managed to be a doctor, playwright, composer, musician and prolific correspondent in the little time she had which wasn’t dedicated to prayer, labour and general suffering.

Anyway, one of the things which struck me most about Galileo’s life was his relationship with the all-powerful Catholic church at this time. He was a very devout Catholic: publicly, of course (claiming Catholicism is, after all, preferable to torture and painful death), but more surprisingly, given the utter ignorance and persecution he suffered at the hands of the Inquisition, he remained privately devoted to the church. He even said, near the end of his life:

*I have two sources of perpetual comfort – first, that in my writings there cannot be found the faintest shadow of irreverence towards the Holy Church; and second, the testimony of my own conscience, which only I and God in heaven thoroughly know. And He knows that in this cause in which I suffer, though many might have spoken with more learning, none, not even the ancient Fathers, have spoken with more piety or with greater zeal for the Church than I*

Much of his life was spent treading a fine line between his great desire to spread the word about the incredible new things he was discovering (including laws of motion, the telescope, sunspots, and the moons of Jupiter), and the need to placate the various censorious cardinals who felt that his works threatened their supremacy. He was finally convicted of heresy for his assertion that the heliocentric Copernican model of the solar system was right – that is, that the sun is at the centre, and not the earth (previously it had been assumed that the sun travelled around the earth) – and died ignominiously, under house arrest for his “crimes”.

Galileo didn’t see any conflict between his discoveries and the teachings of Christianity, and he spent much effort trying to think up theological arguments which would persuade the authorities that this was the case. The problem was, of course that while he took the word of the bible to be figurative, they mostly did not.

I would imagine that, if asked, most people would say that scientists have a higher tendency towards atheism than non-scientists. This may well be true (it’s not the kind of thing you can easily get reliable statistics on), but is there really such an incompatibility between religious and rational thought? Here is a cartoon:

There is a serious point here: you can’t really rationalise a belief in God (although at least one person has tried – see below), and it needs to be taken on faith. In a similar way, there are foundational aspects of mathematics which we must just accept. Take numbers, for example. How do we know they exist? Well, a few posts back I gave what is considered to be a logically sound definition of the natural numbers, based on the cardinality of sets, which only requires that we accept the existence of the “empty set”. But how do we prove that the empty set exists? We just have to take it as an axiom, which is basically math-speak for “on faith” (in other systems the existence of the empty set CAN be proved, but there will always be other axioms the proof is based on). Although mathematics is as rigorous a subject as we have, it is still based on faith. Is an acceptance of the axiom of choice less irrational than a belief in a higher power?
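That definition can even be acted out in code. The sketch below uses the von Neumann construction – a close cousin of the cardinality-based definition from that earlier post – in which each number *is* the set of all smaller numbers, built from nothing but the empty set:

```python
# Natural numbers built from nothing but the empty set:
# 0 = {}, and n + 1 = n ∪ {n}, so each number is the set of its predecessors.
zero = frozenset()

def succ(n: frozenset) -> frozenset:
    """The successor of n: everything in n, plus n itself."""
    return n | frozenset([n])

one = succ(zero)    # {0}
two = succ(one)     # {0, 1}
three = succ(two)   # {0, 1, 2}

# The "number" is just the cardinality of the set representing it.
print(len(zero), len(one), len(two), len(three))  # → 0 1 2 3
```

Every object here is ultimately made of empty sets nested inside empty sets – the whole tower rests on that one article of faith.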

I’m not really qualified to talk about these questions. So let’s consider some more mathematicians’ religious views. Isaac Newton (1642-1727) – who could in some ways be considered to be Galileo’s natural successor – was, in my own humble opinion, the greatest scientist and mathematician that ever lived. He had some very interesting religious beliefs. Technically, he was also a heretic (not difficult at that point), although living as he did in post-Reformation Britain, this was not such a problem as it was for Galileo. He believed in a God, and studied the bible, but shunned the Church and its dogma.

Most interestingly, he was fascinated by mysticism and the occult; some reports say his spiritual studies were of more importance to him than his scientific ones. He was an alchemist, and wrote at great length on the subject (although much of these writings were destroyed, possibly in a lab fire started by his dog).

What is alchemy anyway? What do alchemists do, exactly? I used to be very confused by this, until I found a book on the subject on my parents’ bookshelf some years ago. I am still confused, but less so. The thing to understand is that it is rich in metaphor: the object was not really to find a way to turn lead into gold (although many took this aim literally and wasted much of their lives trying to fulfil it, just as many interpret the bible and the koran literally, and waste everybody’s time); it is a metaphor for the improvement of the soul. Similarly the elixir of eternal life, the philosopher’s stone: they are all just symbols for what is basically the same aim as Buddhism. Enlightenment through chemistry! A phrase which has perhaps taken on different undertones in the past 50 years or so.

Jump forward 200 years or so to the logician Kurt Gödel (1906-1978). He was an utter genius who turned mathematics on its head, but also a very strange man. He was quite devout in his Christian views, and never seemed to have really separated them from his work. In fact, he even produced a logical “proof” for the existence of God, known as Gödel’s Ontological Argument. While this was possibly not his least popular proof, it is no doubt his most dubious! Here it is:
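As it is usually transcribed from Gödel’s notes (P reads “is a positive property”, G(x) “x is God-like”, “ess” is essence, E “necessary existence”, □ “necessarily” and ◇ “possibly”):

```latex
\begin{align*}
\text{Axiom 1.} \quad & \bigl(P(\varphi) \wedge \Box\,\forall x\,[\varphi(x) \rightarrow \psi(x)]\bigr) \rightarrow P(\psi) \\
\text{Axiom 2.} \quad & P(\neg\varphi) \leftrightarrow \neg P(\varphi) \\
\text{Theorem 1.} \quad & P(\varphi) \rightarrow \Diamond\,\exists x\,\varphi(x) \\
\text{Definition 1.} \quad & G(x) \iff \forall\varphi\,[P(\varphi) \rightarrow \varphi(x)] \\
\text{Axiom 3.} \quad & P(G) \\
\text{Theorem 2.} \quad & \Diamond\,\exists x\,G(x) \\
\text{Definition 2.} \quad & \varphi \text{ ess } x \iff \varphi(x) \wedge \forall\psi\,\bigl(\psi(x) \rightarrow \Box\,\forall y\,[\varphi(y) \rightarrow \psi(y)]\bigr) \\
\text{Axiom 4.} \quad & P(\varphi) \rightarrow \Box\,P(\varphi) \\
\text{Theorem 3.} \quad & G(x) \rightarrow G \text{ ess } x \\
\text{Definition 3.} \quad & E(x) \iff \forall\varphi\,[\varphi \text{ ess } x \rightarrow \Box\,\exists y\,\varphi(y)] \\
\text{Axiom 5.} \quad & P(E) \\
\text{Theorem 4.} \quad & \Box\,\exists x\,G(x)
\end{align*}
```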

Good isn’t it? Looks nice anyway. I won’t attempt to explain this, but there is a good commentary here if you are interested. I should point out that this is not generally accepted to be a proof! The existence of God is still an open problem. Incidentally, Gödel has the dubious honour of probably qualifying for the weirdest death in mathematics: he had a crippling fear that someone was trying to poison his food, and only trusted his wife to serve him. When she went to hospital for a few months, he starved to death.

Time for an atheist I think, to balance things out. Godfrey Hardy (1877-1947) would not have been impressed with Gödel’s Ontological Argument. In fact, number 3 on his list of life ambitions was to “prove the non-existence of God.” To get some idea of how important this was to him, it comes after:

1. Prove the Riemann Hypothesis

2. Be really good at cricket

but before:

4. Assassinate Mussolini

Unfortunately (or perhaps fortunately, if you are a Christian or a Fascist), he didn’t actually succeed in any of these. Hardy was also quite an odd man (we seem to be developing a theme here). Apparently he couldn’t stand to see his own reflection, and when staying in a hotel would cover all the mirrors with towels. He was, however, a great mathematician, although when asked by Paul Erdős what his greatest accomplishment was, he unhesitatingly (and quite modestly I think) replied: “the discovery of Ramanujan”.

So, this brings us to Erdős (1913-1996). The most prolific mathematician ever (according to wikipedia, although I think that the completed works of Euler must surely contend with his output, if they ever actually finish getting completed), he was a great source of much-quoted aphorisms. Incidentally, I just read with interest that the one about mathematicians turning coffee into theorems is apparently somewhat lost in translation from the German, in which the words for “theorem” and “coffee residue” are the same: Satz. This makes this saying a lot more noteworthy! I always thought there was something lacking.

Erdős didn’t believe in a God, although you wouldn’t necessarily think it given how often he referred to one. He called God “The Supreme Fascist”, and accused Him of hiding his socks. He also often referred to The Book, a mythical compendium of all the best proofs in mathematics, which was apparently written by God (Erdős also accused God of selfishly keeping the best proofs in his Book to himself, and only allowing us the odd tantalising glimpse). I’m not sure how a theologian would classify this belief system.

I’d better do some proper work now. But here is a cartoon about The Book to finish up. This also addresses another “proof” of the non-existence of God: the Omnipotence Paradox, which basically asks whether an all-powerful being could create a task which it is itself unable to carry out (a stone, say, so heavy that it cannot lift it). As with all paradoxes, this doesn’t really prove anything, but does nicely exemplify the inadequacy of language to describe certain concepts. If there is a higher power, then it is certainly outside of our ability to discuss, so we should probably do something more useful instead.

^{* Regular readers of this weekly blog will have noted by now that I use “week” quite flexibly}

---

I’ve recently been reading a book called *Chaos*, by James Gleick. It is a nice, easy-to-read overview of chaos theory in all its forms. Chaos theory is not really a proper mathematical field, more of an ideology, which has applications in all walks of life. The phrase seems to be bandied about less these days, perhaps because the ideas have become so accepted that it is no longer considered a theory, but just “how things are”. It takes the form of turbulence, entropy and unpredictability; it has great influence on the weather, the traffic, the stock markets…indeed it is hard to imagine how science worked before the notion of chaos. As one physicist in the book puts it:

*“Relativity eliminated the Newtonian illusion of absolute space and time; quantum theory eliminated the Newtonian dream of a controllable measurement process; and chaos eliminates the Laplacian fantasy of deterministic predicability”.*

Poor physicists! Always having their work eliminated by something or other. Luckily this doesn’t happen in mathematics. Chaos in mathematics is studied in the form of dynamical systems, in which small perturbations in initial conditions can have a dramatic long-term effect. This sensitivity is known in popular culture as the “butterfly effect”, from a paper by Edward Lorenz – a pioneer of chaos theory – titled *Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?*

Lorenz was a meteorologist, and first noticed chaotic effects whilst running weather simulations. Weather is notoriously chaotic (see, for example, long-term forecasts by the Met Office for evidence of this), and one day, whilst trying to restart a simulation where he had left off, he fed in data which had been output from the middle of a previous session. He noticed that the outcome was wildly different from his previous results, a consequence of the computer having rounded the printed output to fewer decimal places – three, instead of the six held in memory – a difference he had assumed was insignificant. Gleick goes into weather patterns in some depth, as well as delving into such interesting topics as the fractal – and, by implication, infinitely long – nature of coastlines (the closer you get the more little “bays” there are), and the chaotic behaviour a human heart displays while fibrillating (basically what a defibrillator does is to reset a chaotic system with a massive jolt of electricity).^{*}
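Lorenz’s accident is easy to recreate with any chaotic system. Here is a sketch using the logistic map x → 4x(1−x) as a stand-in for his weather model (the map is not Lorenz’s, but the pair of starting values – 0.506127 and its three-decimal truncation 0.506 – is the famous one from the anecdote):

```python
# Sensitive dependence on initial conditions: iterate the chaotic
# logistic map x -> 4x(1-x) from two almost-identical starting points.
def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)

x = 0.506127  # the value held in the computer's memory
y = 0.506     # the rounded value from the printout

for step in range(1, 101):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.5:  # the trajectories now bear no resemblance
        print(f"Separated after {step} steps")
        break
```

The two runs agree to four decimal places at first, but the tiny rounding error roughly doubles, on average, with every iteration, and within a few dozen steps the trajectories are completely unrelated.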

But I am not actually going to talk about chaos theory today. Well, not quite. Instead I am going to share a few odd and interesting *Freakonomics*-style chains of events I’ve learnt about recently. They all involve seemingly insignificant things – conkers, diclofenac and a cat parasite, to be precise – which have (arguably) had a huge impact on world events. In that sense you could possibly claim that this was some kind of chaos in practice. But that would be quite a tenuous way to try and link it with what I’ve written so far, so I won’t.

**1. How Conkers Created Israel**

Acetone is a very useful chemical, used all over the place. Probably you are most familiar with it as the strangely pleasant-smelling principal ingredient in nail varnish. However, in the search for more reliable explosives in the late 19th century, it was discovered that it could be used as a solvent to extrude a new compound – called cordite – from nitroglycerine, nitrocellulose, and (unexpectedly) vaseline. Cordite was swiftly adopted as the principal explosive used in artillery and small arms, and after the start of World War I the demand for the chemical increased greatly. Unfortunately, the primary source of the acetone used to make it had been German factories, and at this point the Germans were naturally rather reluctant to supply Britain with ingredients for explosives. So it was imperative that Britain found a new source of the chemical.

David Lloyd George, who was at that time the Minister for Munitions, commissioned a Russian-born Jewish Professor called Chaim Weizmann to come up with a new way to produce acetone. Traditionally the substance had been produced by distilling starchy materials, such as maize and potatoes. But unfortunately, due to naval blockades, even these basic ingredients were in short supply during the war. So Weizmann turned his attention to non-food starch, and discovered a way to adapt the process to use horse chestnuts, better known as conkers. Factories were set up which produced huge quantities of acetone, and in order to supply them with the raw material, the following message was posted in classrooms and Scout huts around the country:

*Collecting groups are being organised in your district. Groups of scholars and Boy Scouts are being organised to collect conkers. Receiving depots are being opened in most districts. All schools, W.V.S. centres, W.I.s, are involved. Boy Scout leaders will advise you of the nearest depot where 7/6 per cwt is being paid for immediate delivery of the chestnuts (without the outer green husks). This collection is invaluable war work and is very urgent. Please encourage it.*

The result was that, for a number of years, far fewer games of conkers were played in playgrounds, and small boys became decidedly more financially solvent, as well as unwittingly essential to the war effort.

As well as being an imaginative chemist, Weizmann was one of the foremost Zionists of his time, and he used the considerable status he had gained from his contributions in the war to further this cause. Lloyd George became Prime Minister in 1916, and his gratitude to Weizmann played a large part in his government’s issuance of the Balfour Declaration, which said:

*“His Majesty’s government view with favour the establishment in Palestine of a national home for the Jewish people, and will use their best endeavours to facilitate the achievement of this object, it being clearly understood that nothing shall be done which may prejudice the civil and religious rights of existing non-Jewish communities in Palestine, or the rights and political status enjoyed by Jews in any other country.”*

This was a clear stamp of approval by the British government for the creation of a Zionist state, and greatly helped Weizmann’s campaign. Weizmann went on to become the first president of Israel in 1948. Would this have happened without conkers?

**2. Why Diclofenac is Bad for Zoroastrianism**

Zoroastrianism is an ancient religion; probably the first monotheistic belief system, and still practised by tens of thousands of people in Iran and South Asia. Members of the Zoroastrian communities in India and Pakistan are known as Parsis; originally from Persia, they migrated there in the 8th century.

One unusual aspect of the distinctive Parsi culture is the way in which they dispose of their dead. The Zoroastrians regard the elements as sacred, and believe that burial and cremation respectively defile earth and fire. So instead, they choose to leave the bodies in specially constructed buildings, known as “Towers of Silence”, where, through a combination of vultures, sun and wind, they gradually disintegrate.

However, recently an unexpected series of events has led to something of a crisis in this system, leading to much distress in the communities. Vultures play a large role in disposing of the bodies in the Towers of Silence, and at some point their numbers began to drop drastically. This led to a huge increase in the time taken for the bodies to decompose, which in turn caused the population of scavengers such as rats to soar, thereby increasing the incidence of diseases such as rabies. On top of this was the obvious distress to the families of the deceased, not to mention the people living in the increasingly urbanised areas around the Towers. No other method of disposing of the bodies proved as effective as the vultures, and debates are still ongoing as to whether to lift the ban on burial and cremation.

You may have been prescribed a painkiller called diclofenac in the past, perhaps for a muscle injury. It is fairly benign, an NSAID closely related to ibuprofen and aspirin. It was approved for veterinary use in India and Pakistan some time ago, and became widely used by farmers in order to increase the lifespan of their animals. It was only recently that diclofenac was found to be the cause of the vultures’ near-extinction. The drug was still present in the carcasses of the animals they had been feeding on, and was causing kidney failure in the birds. Thus a simple painkiller was responsible for a great upheaval in a millennia-old way of life.

**3. Do Cats Cause Wars?**

You are probably most likely to have heard of toxoplasmosis from the film *Trainspotting*; it was the disease which killed Tommy, who caught it from a kitten he had bought for his girlfriend (who had left him after Renton had stolen their sex-tape and replaced it with a football tape). Toxoplasmosis is indeed contracted from cats, and is a very odd and little understood parasitic disease. It has the following life-cycle:

A cat eats an infected rodent. It then passes the parasite through its faeces, which other rodents come into contact with. When a rodent is infected, the disease has a curious effect on their brain, causing them to lose their fear of cats. I like the following quote from a paper on the subject:

*We tested the hypothesis that the parasite Toxoplasma gondii manipulates the behaviour of its intermediate rat host in order to increase its chance of being predated by cats, its feline definitive host, thereby ensuring the completion of its life cycle. Here we report that, although rats have evolved anti-predator avoidance of areas with signs of cat presence, T. gondii’s manipulation appears to alter the rat’s perception of cat predation risk, in some cases turning their innate aversion into an imprudent attraction.*

So, instead of being fearful of cats, the rats actually become *attracted* to them! Imprudent indeed. The cat eats the rat, and so the whole cycle starts over again. Quite fascinating, and all a bit sinister and sci-fi; you are probably thinking that it is lucky this is just a problem for rats!

Well I’m afraid that is not actually the case. It is estimated that around a third of people have toxoplasmosis, and this figure rises to as high as 90% for some countries (most French people have it, for example). If you have a cat, the chances are that you have toxoplasmosis. Luckily it is pretty much unheard of for people even to become ill from it, let alone die, unless they have severely weakened immune systems (Tommy had HIV). However, perhaps even more worryingly, it has been shown by various studies to affect our behaviour too. And no, it doesn’t just make us attracted to cats.

As I said, it is still quite poorly understood. According to the wikipedia article on the subject, correlations have been found between the parasite in humans and the following:

- Decreased novelty-seeking behaviour
- Slower reactions
- Lower rule-consciousness and greater jealousy (in men)
- Promiscuity and greater conscientiousness (in women)

One study showed that people with toxoplasmosis are 2.5 times more likely to have a car accident from reckless speeding than those without; it has been suggested that this is due to the effects on reaction speed. Others have shown that infected women are more likely to give birth to sons, that motorcycle-owners are more likely to have it, and even that it may affect a person’s football skills! (I think that one was from a tabloid: “brain parasite improves football skills”). If a third of all people have it, then this is scary stuff. Perhaps especially the part about recklessness and lack of rule-consciousness…might toxoplasmosis have contributed to humankind’s warmongering inclinations?

Anyway, that’s enough conspiracy theory, sky-burial and mind-controlling brain parasites for now. Next week: more maths!

^{*Speaking of electrocuting body-parts by the way, apparently an electric current to the brain makes you better at maths. Has anyone tried this?}


We have some idea of what mathematics is from Adam’s posts; but what is statistics? Statistics is applied maths with uncertainty. In statistics mathematical techniques are used to model and quantify our uncertainty about reality. Modelling climate change, predicting the outcome of elections, wrecking the financial system and ensuring the casino always wins: statistics is everywhere. And uncertainty is the key to statistics.

In order to get across an understanding of what uncertainty is I will try to describe some of the different kinds we face and how statistics deals with them. The five levels in the following taxonomy lie on a continuum running from complete certainty to complete uncertainty, and provide a means of measuring the range and limitations of statistics in different situations.^{**} The further we go along this continuum the less effective statistics is at prediction and inference, and many problems in statistics and quantitative social sciences like economics come from not recognising just how far along the continuum we are.

**Level 1: Complete Certainty**

This is where most of maths resides. A priori truth. Facts are facts, largely unchanging and immutable. 1+1 is always going to be 2^{†}, and π is always the ratio of the circumference of a circle to its diameter. You can be certain of it. Maths is the exploration and development of this land of hard facts, and aside from maths, not much other human endeavour lives here. Some logic is certain (all men are mortal; Socrates was a man; therefore Socrates is mortal, etc), and perhaps some of the laws of physics (although since the advent of probability in quantum theory, complete certainty has become elusive even in this formerly self-assured subject). At this stage, mathematics rules. And from the point of view of the statistician, this category is not very interesting.

**Level 2: Certain Uncertainty**

At this level we have events with uncertain outcomes that are governed by mechanisms which we understand completely. The canonical example might be an idealised fair coin toss, in which there is a 50/50 chance of the coin landing on each side. The statistics here are simple: if you toss a coin 100 times you would expect around 50 heads and 50 tails. But you might get 40 heads and 60 tails. Or even 0 heads and 100 tails. There is uncertainty here but it is fairly easy to quantify and predict outcomes. Using simple tools like the binomial distribution we can work out the probability of each of the outcomes above occurring (0.08, 0.01 and 7.8 × 10^{-31} respectively). This is like being in an honest casino where the rules are known by all in advance and they are always followed. This type of uncertainty can be very well predicted but is fairly rare. The real world tends to be more complex.
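These binomial probabilities are easy to verify with nothing more than Python's standard library (the helper function and its name are mine, for illustration):

```python
from math import comb

def binom_prob(k, n=100, p=0.5):
    """Probability of exactly k heads in n tosses of a coin with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binom_prob(50))  # ~0.08: 50 heads, 50 tails
print(binom_prob(40))  # ~0.01: 40 heads, 60 tails
print(binom_prob(0))   # ~7.9e-31: 0 heads, 100 tails
```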

**Level 3: Fully Reducible Uncertainty**

At this stage events can be observed to follow the kind of patterns found in level 2 phenomena, and with enough data, and a bit of statistics, these types of events can be predicted with as much certainty as level 2 events. This is like being in a casino which follows rules, but in which the only way to determine what those rules are is through observing gamblers and inferring the odds. As long as the rules stay the same – and we can make enough observations – we can discover them and reduce the uncertainty to level 2.

This is the region of uncertainty where a lot of physical sciences and some human activity take place. It is the ordered world of controlled experiments and predictable outcomes: test tubes, rats in cages and production lines. At this level of uncertainty the rules exist; we just have to discover them. This region relies on regularity, which seems to be present in a lot of the natural world. If you take the average weight of 100 rats and then add the weight of the world’s heaviest rat, the average will not change by very much (you don’t get rats the size of buses). If you plot the distribution of the weights of rats, it will follow a similar bell curve to that of the historical number of rainy days in November in Manchester, or of the height of Scottish men. Because we know this we can be confident that, with a bit of care, certain statistical models like linear regression and analysis of variance will work reasonably well to predict outcomes.

This is the stage where the tools of statistics are one of the best ways to describe uncertain events. Here statistics rules. The only trouble is recognising when you are in this level and not in Level 4.

**Level 4: Partially Reducible Uncertainty**

Here Be Dragons. And money, and humanity, and disaster. This is where the bits of existence which are not (yet) neat enough to fit into level 3 sit. Generally, if there is an event with lots of people involved, or a rare event with catastrophic implications, or anything we don’t understand very well, it belongs in Level 4. Think: financial markets, earthquakes, technological change, 100-year floods and flashes of genius. Success in this field is as much a function of luck as it is of statistical understanding (cf. millions of rich fools in the City of London). Some models will have success here but most will fail. Here, for now, there is no certainty. But there is money to be made in claiming that we have it. Mathematically, the probability distributions of events at this level have ‘fat tails’: extreme events are far more common than a bell curve would suggest.
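One way to see what ‘fat tails’ means in practice: compare the chance of an event more than 5 "standard units" from the centre under a thin-tailed normal distribution and under a fat-tailed one (here the Cauchy distribution, chosen purely as a convenient illustration):

```python
from math import atan, pi
from statistics import NormalDist

# probability of an observation more than 5 standard units above centre
normal_tail = 1 - NormalDist().cdf(5)   # thin tails: roughly 3 in 10 million
cauchy_tail = 0.5 - atan(5) / pi        # fat tails: roughly 6 in 100

print(normal_tail, cauchy_tail)
```

If your model assumes the first but reality delivers the second, your "once in a million years" disaster turns up every couple of decades.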

There are some nice solutions here. For example Bayesian statistics – incorporating prior beliefs based on theoretical assumptions – can reduce uncertainty. Noise reduction, dimension reduction: many statistical techniques at this level involve some attempt to reduce the uncertainty to level 3. Level 4 is where the demand for answers is highest, and so answers are provided. Unfortunately it is very difficult to ascertain the validity of those answers; they could easily be right for 15 years and then become disastrously wrong, as in the recent financial meltdown.

**Level 5: Irreducible Uncertainty**

At this level, facts are abstractions. This is the realm of philosophers and religious leaders. Gods and Ghosts. The foundations of level 1, rather confusingly, lie in this level, giving a vicious cycle of uncertainty (or perhaps a virtuous circle of certainty, depending on your temperament). For more on this level see Wittgenstein’s *On Certainty*. As he put it (in the *Tractatus*): “Whereof one cannot speak, thereof one must be silent.”

^{* This is a guest post by my friend Tom Liptrot.}

^{** These ideas have been to some extent lifted from this paper. There the distinction being drawn was between physics and economics, but the parallels hold between maths and statistics. }

^{† At least in the ring of integers}


Now, other than indexing websites and providing a portal through which to access them, clearly the most crucial aspect of a search engine is the ordering system it uses to list the sites. There needs to be some way of assigning an “importance score” to each webpage, such that the ones which people are most likely to want to view come first. Arguably the sole reason google are as successful as they are is a very effective method of doing this invented by Larry Page while he was at university, conveniently called PageRank. The system uses the links to a page to determine its score, and crucially, it measures not just the number of these links but their “quality”; that is, it assigns higher importance to links coming from pages which themselves have a high score.

PageRank is a modified version of the notion of centrality in a network, so I shall first explain this concept. A network is just another name for a graph, which I have talked about in the past: they just consist of a bunch of nodes, connected by lines. Because of this simple formulation, there are various ways in which they can be encoded, other than by the standard one of a pretty picture. One of these is as an adjacency matrix: label the nodes 1 to *n*, then put a 1 in the (*i*, *j*) position if node *i* is joined to node *j*, and a 0 otherwise. For example, the 4-cycle network (four nodes joined in a square, 1–2–3–4–1) has the following adjacency matrix:

    0 1 0 1
    1 0 1 0
    0 1 0 1
    1 0 1 0

This is very useful, as to a certain extent it reduces problems about networks – which can be very complicated, and are still the object of a great deal of research – to linear algebra, which is very well understood (in fact it is one of the few subjects in mathematics which could be called complete in any way).

What we want is to find a way to assign a relative importance (centrality) to each node, which takes into account not only the number of links to it, but the centrality of each of the nodes it is linked to. Linear algebra simplifies this problem considerably. As a reminder, an eigenvalue and corresponding eigenvector of a matrix *A* are, respectively, a constant value λ and a vector *v* such that:

    *Av* = λ*v*

The prefix eigen- comes from German (as, it seems, does much mathematical terminology!), and means “own”, or “innate”. This is slightly misleading (different matrices can have the same eigenvalues), but it reflects the fact that the eigenvalues/vectors contain a lot of information about a matrix, as does the fact that the eigenvalues of a matrix are often referred to as its spectrum. Geometrically, if we think of a matrix as representing some linear transformation (a rotation, shifting, or stretching of space in some way), then an eigenvector is a vector whose direction is fixed by the action of the transformation. The corresponding eigenvalue is the factor by which the magnitude of the vector is increased or decreased.

But that isn’t really relevant to us at present, as we are not considering the matrix to be a transformation, but simply an encoding of the network’s structure. So what significance do eigen-things have here? Well, let’s carry on using the example of a 4-cycle. An eigenvector in this case will be of the form:

    *v* = (x₁, x₂, x₃, x₄)

So, putting this and the adjacency matrix *A* into the above formula, we get:

    x₂ + x₄ = λx₁
    x₁ + x₃ = λx₂
    x₂ + x₄ = λx₃
    x₁ + x₃ = λx₄

Which simply represents the connections in the network. Let the value xᵢ represent the centrality of node *i*. Then, for example, the line:

    x₂ + x₄ = λx₁

tells us that the centrality of node 1 is proportional to the sum of the centralities of the two nodes connected to it.

(At this point you might be wondering which eigenvalue we are using. Luckily we don’t need to worry about this! The Perron-Frobenius theorem says, among other things, that if a matrix has only positive entries, then it has a unique largest eigenvalue, for which there is a corresponding eigenvector which itself has only positive entries (the same holds, with a little more care, for non-negative matrices like our adjacency matrix). This also does away with the need to speculate as to what a negative centrality score might signify).
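In practice this Perron-Frobenius eigenvector can be found by the “power method”: repeatedly multiply any positive starting vector by the adjacency matrix and rescale. Here is a minimal Python sketch, using a hypothetical variation on our example (the 4-cycle plus an extra edge between nodes 1 and 3, added so that the centralities are not all equal by symmetry):

```python
# Eigenvector centrality by power iteration.
# Network: the 4-cycle 1-2-3-4-1 plus an extra edge joining nodes 1 and 3.
A = [
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
]

v = [1.0, 1.0, 1.0, 1.0]  # any positive starting vector will do
for _ in range(100):
    w = [sum(A[i][j] * v[j] for j in range(4)) for i in range(4)]
    norm = max(w)
    v = [x / norm for x in w]  # rescale so the largest entry is 1

print([round(x, 3) for x in v])  # → [1.0, 0.781, 1.0, 0.781]
```

Nodes 1 and 3, with three connections each, come out most central; nodes 2 and 4, with two each, score about 0.78.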

Now, back to google. Instead of nodes imagine webpages. Instead of lines, imagine links between webpages. Then the centrality score of a page is a very crude version of the situation we wanted: each webpage has a centrality which is proportional to the sum of that of each of the webpages it is linked to. So not only are we taking into account how many links a website has, but how significant each of these links is.

Of course, PageRank is a bit more complicated than this. For a start it matters which way the links are pointing. So let’s take a new picture of the internet – this time with arrows. Technically what PageRank measures is the probability that a person randomly clicking on links will end up at that page after an infinity of time has passed. This sounds rather complicated, but it isn’t really. It is just the result of a “random walk” through the internet; the PageRank algorithm simulates this random walk.

It works like this: start off by assigning an equal value to each page (to simplify things I will just use 1). We then divide each page’s value equally among its links. What will happen when we simulate the process is that each page will confer its value to those that it is linked to, and with enough iterations we will reach a steady state. So instead of a 1 if two websites are linked, we put the fraction 1/*L* in the (*i*, *j*) position if website *j* links to website *i*, where *L* is the number of links coming out of website *j*. Suppose, for example, that website 1 links to website 2, website 2 to website 3, website 3 to website 1, and website 4 to websites 1 and 3. In this case we get the following matrix *M*:

    0    0    1    1/2
    1    0    0    0
    0    1    0    1/2
    0    0    0    0

Note that the entries in the fourth column are 1/2, because there are two links coming out of website 4; the more links a page has on it, the less important they are considered to be. We then give each page an equal starting PageRank, which appears in the corresponding position in the (not yet eigen-) vector. Again in this case I will use 1, so we have:

    *v* = (1, 1, 1, 1)

The eigenvalue in this case will eventually be 1 (see below), so to simulate the random surfing action, we just multiply these together. After the first iteration we have the vector:

    *Mv* = (3/2, 1, 3/2, 0)

which already gives us a good idea of relatively how “important” the pages are: page 4 has no incoming links, so it has a PageRank of 0, and pages 1 and 3 have a higher score than page 2 as they have more links. If I were doing this properly, then I would continue to multiply by the same matrix, and after enough iterations we would have an eigenvector satisfying:

    *Mv* = *v*

which would be our desired PageRank values.

However, I’m not doing it properly! It would be too long and complicated to write here. In reality, a “damping factor” is included; this is to simulate people getting bored of clicking on links, and starting off at another unrelated page (randomly clicking is fairly boring after all), and it ensures that the system reaches a steady state. Also, in reality the matrix has probabilities for entries. In fact it is a stochastic matrix, in which the columns all sum to 1. Stochastic matrices happen to have the nice property that the largest eigenvalue assured by the Perron-Frobenius theorem is always 1, and this is why we don’t need λ in the above equation.
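To make the damped version concrete, here is a minimal Python sketch. The four-page web is a hypothetical example (website 1 links to 2, 2 to 3, 3 to 1, and 4 to 1 and 3), and 0.85 is the damping factor suggested in the original PageRank paper:

```python
# A minimal PageRank sketch with damping.
links = {1: [2], 2: [3], 3: [1], 4: [1, 3]}  # page -> pages it links to
pages = list(links)
d = 0.85  # damping factor: probability the surfer follows a link

rank = {p: 1.0 / len(pages) for p in pages}  # start uniform

for _ in range(100):
    new = {}
    for p in pages:
        # value flowing in from every page q that links to p
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new[p] = (1 - d) / len(pages) + d * incoming
    rank = new

for p in pages:
    print(p, round(rank[p], 3))
```

The ranks sum to 1, and page 4, with no incoming links, ends up bottom of the list; the (1 − d)/N term is what keeps it from hitting exactly zero.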

There are, of course, various problems with this system. To name but three: it is based on the assumption that people surf the internet randomly, which of course they don’t; it is unfairly biased against new websites; and it could theoretically be possible to artificially increase a site’s rankings with devious linkage. But it is currently the best we’ve got (or so google’s world dominance seems to suggest).

Interestingly, it has been suggested that a PageRank-like system should be implemented to measure the importance – or “impact factor” – of academic journals. Let *I* be the impact of a journal in a given year, let *C* be the number of times that articles published in the journal in the previous two years were cited elsewhere, and let *N* be the total number of articles published by the journal in the previous two years. The current system uses the simple equation:

    *I* = *C* / *N*,

which is clearly rather crude, and currently the subject of much academic grumbling. Could JournalRank do a better job?
