I use this blog as a soap box to preach (ahem... to talk :-) about subjects that interest me.

Friday, December 28, 2012

Authors' Mistakes #8 - Graham Tattersall (addendum)

I’ve given up reading Geekspeak. It is too stupid.

In four sentences that appear towards the end of Chapter 11 (pages 103 and 104), Tattersall refers three times to the bomb dropped on Hiroshima as an atomic bomb, and three times to modern multi-megaton bombs as nuclear bombs. He must think that there is a difference between an atomic bomb and a nuclear bomb!

So, here is somebody with a PhD (probably in Math) who doesn’t know that the Hiroshima bomb was a nuclear bomb.

The front flap of the book jacket says: Dr. Graham Tattersall, a confirmed and superior geek, and, later: Math has a new champion, and the Geeks a new King.

BS, I’d say.

Authors' Mistakes #8 - Graham Tattersall

Geekspeak, by Graham Tattersall is a little humorous hard-covered book containing a collection of essays about how to calculate odd things.

Tattersall, in his attempts at making complex calculations easy, cuts too many corners. For the sake of simplicity, he says things that are not right.

For example, at the beginning of Chapter 8, when estimating the number of people who die every year in the UK, he states:

You can estimate the figure quite easily from the average lifespan in this country, which is roughly 75 years — and rising. Imagine for a moment that all births in Britain stopped today, that from now onwards people die off and aren’t replaced.
Each year more die, until, after the time of the average lifespan of 75 years, there will be very few of today’s 60 million people left. So, if ages are evenly spread [my observation: a dubious assumption, but let’s let it pass], an average of 60 million divided by 75 people will die every year. That’s about 800,000 per year.

He then comments that the actual figure is 500,000 and that the discrepancy can be explained by the fact that our lifespan is growing.

Perhaps. But there is a problem with his estimate due to his erroneous use of statistics: you cannot state that the ages of death are “evenly spread” between 0 and 75 while, at the same time, claiming that “very few” live beyond the average. What kind of average is it if very few people live longer?

Tattersall’s assumption that very few people live longer than average is not only conceptually flawed, but also factually wrong, because it turns out that many people happily survive the average age of death. I looked at the US statistics and discovered that almost 60% of people live longer than average.

As an aside, you might wonder how it is possible that more than half survive the average age of death. It seems unreasonable. The reason is that the distribution is highly skewed. I will explain it with an example that, although not completely realistic, shows that for steeply decreasing distributions the number of points above the average can exceed the number of points below it: assume that you have a population of 100 people and that 45 of them live 60 years, 35 live 70 years, and 20 live 75 years. The average age of death is (45 x 60 + 35 x 70 + 20 x 75) / 100 = 66.5, with 55 people above average and only 45 below average. Convinced now?
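A few lines of Python, using the numbers from the worked average above, confirm the count:

```python
# Hypothetical population from the example above:
# 45 people die at 60, 35 at 70, and 20 at 75.
ages_at_death = [60] * 45 + [70] * 35 + [75] * 20

average = sum(ages_at_death) / len(ages_at_death)
above = sum(1 for age in ages_at_death if age > average)
below = sum(1 for age in ages_at_death if age < average)

print(average)  # 66.5
print(above)    # 55 people outlive the average
print(below)    # 45 people die before reaching it
```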

Another example of his “pseudo-scientific” but confusingly imprecise presentations is in the small “Speak Geek” section at the end of Chapter 5. He states that A man who weighs 100kg at the North Pole would weigh only 99.65kg at the equator.

This is roughly correct (it’s actually close to 99.67kg). It is due to the fact that the surface of Earth, which at the equator spins at the rate of about 40,000km every 24 hours, is not an inertial system. We are kept in place by gravity; otherwise, we would keep moving in a straight line along the tangent. From our local point of view, this appears as a force directed away from the axis of the Earth. This is the same apparent force that pushes us away from the centre of a curve when we take it in a car.
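A rough Python check of the centrifugal effect alone is straightforward. The figures below are round textbook values (sidereal day, equatorial radius, standard g), and Earth’s equatorial bulge is ignored, so the result is only approximate:

```python
import math

g = 9.81          # m/s^2, standard gravitational acceleration (round value)
R = 6.378e6       # m, Earth's equatorial radius (round value)
day = 86164.0     # s, sidereal day: the time of one full rotation

omega = 2 * math.pi / day       # angular speed of a point on the equator
a_cf = omega**2 * R             # centrifugal acceleration at the equator

scale_reading = 100 * (1 - a_cf / g)   # apparent weight of a 100 kg man, in kg
print(round(scale_reading, 2))         # roughly 99.65
```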

So far so good. Tattersall calls the centrifugal force an “upward ‘flinging’ force”, and that is where I have a problem. After mentioning gravitation, he explains the centrifugal force as follows: The second force is upwards, and is caused by the rotation of the Earth constantly trying to fling your body into space like a stone on a string being swung round your head.

This is misleading in too many ways to be acceptable. First of all, Earth doesn’t try to fling us anywhere. Secondly, a stone on a string keeps circling only because we exert an inward pull on it through the string; nothing actually flings it outward. Comparing Earth to a swung stone is therefore not right.

You could dismiss my objections by saying that he explains things in a colourful way and that I am just being my usual hair-splitting and boring self. But when he says that the centrifugal force is upward, he is plainly wrong. The centrifugal force is outward. It happens to point upward at the equator, but to say so in general is badly misleading.

Finally, I would like to report a horrible mistake in Chapter 7. He starts it with the following two sentences: Captain Picard has been sorting out a spot of bother on some planet or other, while the Enterprise orbits at a safe distance. A radio command is sent to the ship: ‘Beam me up, Scotty.’

How can he not know that Scotty was the chief engineer under Captain Kirk? What a shame! And he calls himself a geek? It is painful.

I usually report problems in a book after reading it. In this case, I couldn’t hold back after reading about a third of it. I will probably have to report more...

Thursday, December 27, 2012

Authors' Mistakes #7 - Lee Child (again)

Lee Child definitely knows how to write thrilling crime novels. The Visitor, published in the US with the title Running Blind, is a good read, all 500 pages of it.

I’m amazed that, despite Child’s experience and Bantam’s professional editing, a mistake still found its way into the published book. I already reported on this blog a mistake I found in another of Lee Child’s books: Die Trying.

What follows is a paragraph from Chapter 22, on page 352 of my paperback edition of The Visitor:

The apartment they wanted was on the eighth floor. Reacher touched the elevator button and the door rolled back. The car was lined with bronze mirror on all four sides. Harper stepped in and Reacher crowded after her. Pressed eight. An infinite number of reflections rode up with them.

Do you see it?

It’s easy enough: when the door opens, it slides away (incidentally, rolled is not the verb I would have used, but then, who am I to criticise?). How, then, can Reacher see that the interior of the door is lined with a mirror? He is still standing outside the lift. Child should have swapped the sentence “The car was lined...” with “Harper stepped in...” In fact, Reacher could only have seen the inside of the door after it had closed.

Thursday, December 20, 2012

Authors' Mistakes #6 - Michael Crichton & Richard Preston

I just read Micro, the novel left unfinished by Michael Crichton when he died and completed by Richard Preston.

Crichton is a great author and I enjoyed several of his novels, especially Timeline, Airframe, and Jurassic Park. Micro is also a very entertaining story, but this time, in my opinion, he went too far with his scientific (or not) speculations.

That an extremely high magnetic field can shrink objects is difficult to accept, but what happens once the objects are shrunk violates the basic laws of physics in an unacceptable way.

According to Crichton and Preston, when people are compressed, they become very light and strong. This might make for a nice story, with people jumping around like fleas and falling from great heights without getting hurt, but it is not scientifically credible.

For one thing, where does the mass go? The principle of conservation of energy-matter has been proven correct uncountable times. Even if we accept that the space between the molecules and atoms (and perhaps subatomic particles?) is reduced, why should the mass of a person be reduced as well? It doesn’t make any sense.

Then, there is the issue of body temperature and thermoregulation. Crichton and Preston mention that shrunken humans have some problems with maintaining their body temperature, but it is not an issue that one can dismiss easily.

If you reduce the linear dimensions of something by a factor of 1000 while maintaining its shape, its volume is reduced to one billionth of the original and its surface to one millionth. This means that the surface per unit of volume would grow by a factor of 1000. There is no way that the human body could function under those conditions: it would lose all its heat and die.
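The square-cube arithmetic is worth spelling out; in Python:

```python
k = 1000  # linear shrink factor used in the novel

volume_ratio = 1 / k**3    # volume drops to one billionth
surface_ratio = 1 / k**2   # surface drops to one millionth

# Surface area per unit of volume therefore grows by the factor k itself:
growth = surface_ratio / volume_ratio
print(growth)  # 1000.0
```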

Finally, the authors don’t even try to explain how the shrinking process can be reversed. To shrink people to 1/1000 of their original size, they apply a strong magnetic field three times, and each time the people shrink by a factor of ten. Then, without any explanation, when the field is applied once more, the people return to their original sizes. This goes against logic. What is it? Third time’s the charm? Give me a break...

Tuesday, December 18, 2012

Authors' Mistakes #5 - Academic textbook on research methods

I have been reading Practical Research Methods for Media and Cultural Studies: Making People Count, by Máire Messenger Davies & Nick Mosdell, Edinburgh University Press 2006, ISBN 978-0-7486-2185-9.

On page 62, to explain methods for random sampling, the authors describe how to make a sample of 50 students from a population of 150.

When they explain the stratified method for random sampling, they say: your population list may be divided into two lists of seventy-five males and seventy-five females and you sample each list randomly until you have twenty-five of each.

The statement that out of a total population of 150 students the genders are equally split is in general not correct. You might think that it doesn’t really matter if the two lists don’t have exactly the same length, and that the method remains valid. But this is not the case: if there are, say, 90 males and 60 females, a sample built with 25 males and 25 females would obviously not represent the population.
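For a sample that represents an unequal population, each stratum should be sampled in proportion to its size. A small Python sketch (the 90/60 split is the hypothetical one from my counter-example, not a figure from the book):

```python
def proportional_allocation(strata, sample_size):
    """Allocate a sample across strata in proportion to their sizes."""
    total = sum(strata.values())
    return {name: round(sample_size * size / total)
            for name, size in strata.items()}

# A population of 150 students with 90 males and 60 females:
population = {"male": 90, "female": 60}
print(proportional_allocation(population, 50))  # {'male': 30, 'female': 20}
```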

Now, rather than studying a sample that represents the whole population, you might like to investigate differences between male and female students, regardless of how many of each gender are present in the population. Then, it would make sense to do the split and pick equal numbers of students from the two lists.

But the way they put it, they are definitely wrong. And they keep doing it. Here is how the text continues: You then subdivide the two lists into age groups to ensure you have sufficient numbers of, say under- and over-thirties (assuming that age is relevant to your research question). As you can see, if you are going to subdivide your sample into particular demographic subgroups, your subgroups will become smaller and smaller the more categories you include, until the sample size for these subgroups cannot be seen as reliably representative.

So far so good, but now comes the blunder: For instance twenty-five females, split equally into under- and over-thirties will give you only twelve or thirteen people in each group.

How can you split 25 students equally on the basis of age (if you define the discriminating age in advance)? It’s plainly wrong.

And they go deeper and deeper in their nonsense: If you want to look at four age categories, you will get only six or so people in each age and gender subgroup.

You might think that with a couple of “on average” added in the crucial places, everything would make sense. But that is not the case, because their explanation would still imply that the age distribution of students is flat, which it is not.

It’s sad to see such mistakes in academic textbooks...

Monday, December 17, 2012

Kill, kill, kill!

In memory of Emilie Parker, one of many.

The second amendment to the US Constitution says:

A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.

It was adopted on 15 December 1791.

Today, we wouldn’t place a comma between subject and predicate, but grammar is not the only thing that has changed over the past 221 years.

In 1791, the USA had been independent for fifteen years and counted fourteen states (Massachusetts, New Hampshire, Vermont, New York, Rhode Island, Connecticut, Pennsylvania, New Jersey, Delaware, Maryland, Virginia, North Carolina, South Carolina, and Georgia), plus the District of Columbia, two territories, and land disputed with Spain in the South and the U.K. in the North.

According to the 1790 census, the population of the fourteen states was 3,723,418, including 681,850 slaves. The current population of the 50 states is approximately 315 million.

Much has been said about the American citizens’ right to bear arms. I only want to make a couple of points.

Firstly, the US doesn’t need a militia, as was perhaps the case in 1791. It has federal, state, and local police forces, whose work is actually made more difficult and dangerous by all the weapons in circulation (an average of almost one per person).

Secondly, automatic assault weapons, contrary to what lobby groups like the National Rifle Association claim, have nothing to do with sport. Cheap weapons designed for high fire power to kill at short range might be suitable for criminals, but they have no place in a civilised society.

I say: ban all automatic weapons, standardise and enforce registration laws, and make it difficult to buy ammunition.

Monday, December 10, 2012

A Prank

I don’t get it.

Two DJs called the hospital where Kate Middleton was being treated and claimed to be Queen Elizabeth.  A nurse believed them.  Then, the next day, apparently, the nurse felt so humiliated that she committed suicide.  Now, everybody blames the DJs.

It seems to me that somebody who commits suicide after having been duped must have had some psychological problems.  I know that what I’m saying is not politically correct, but who in her right mind would kill herself for being the victim of a prank?

I am really sorry for the nurse.  Nobody should be so desperate and alone as to reach the point of taking their own life.  But the prank was only a trigger: probably the small extra stress that pushed the nurse’s state of mind past a tipping point.  The fact that that particular nurse answered the phone was an unfortunate and tragic accident.

The DJs only did their job.  They didn’t even expect that their prank would succeed.  I understand that they feel bad.  I would too.  But they should be reassured that they are not to be blamed.

Nobody could possibly have anticipated that a prank call would result in a suicide.  After all this hoopla, the poor DJs will probably live the rest of their lives thinking that they caused the death of a vulnerable nurse.  But they did not.  They are not responsible for it and should receive our sympathy instead of our scorn.

Every act, even the most insignificant, can have horrible consequences, and we focus on the consequences of an act because they are observable and measurable.  The tendency to focus on what can be measured is understandable, but in many cases also completely wrong, because very often people “get away” with thoughtless acts that could have had very serious consequences.

For example, we are always shocked when we see horrific pictures of car accidents on TV, but how many people drive dangerously and are never caught only because, by chance, they cause no carnage?  In my opinion, whether an accident actually occurs is less important than what could have happened.

Tuesday, November 20, 2012

Misunderstood Science #2 - Another question of probability

In this article, I want to describe a problem that illustrates how to correctly estimate probability in a way that will surprise the non-statistician.

Here is the problem: An American friend of yours has two children. You know that one of them is a girl, but cannot remember the gender of the other child. What is the probability that they are both girls?

Many people would equate the lack of information concerning the gender of the other child with equal probability of the two possible genders, and answer 50%.

Their reasoning would be completely wrong but, as it turns out, their answer would be correct, at least in practical terms. If you read and understood my previous article on probabilities http://giuliozambon.blogspot.com.au/2012/11/misunderstood-science-question-of.html, you might know why. But let’s proceed in order.

For the genders of two children, there are four possibilities: MM, MF, FM, and FF, which represent the sample space of the problem. If we assume that boys and girls are equally probable, the four possibilities are also equally probable, at 25% each. As you know that one of the children is a girl, you can exclude the MM case. As a result, you are left with three possibilities and can conclude that the probability of both children being girls is 1/3, or approximately 33.3%. And, obviously, the probability that your friend’s other child is a boy is 2/3, or approximately 66.7%.
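Enumerating the sample space in Python makes the argument concrete:

```python
from itertools import product

# The four equally likely gender pairs: MM, MF, FM, FF.
sample_space = list(product("MF", repeat=2))

# Condition on the information "at least one child is a girl": drop MM.
with_a_girl = [pair for pair in sample_space if "F" in pair]

p_both_girls = with_a_girl.count(("F", "F")) / len(with_a_girl)
print(p_both_girls)  # 0.3333... (i.e., 1/3)
```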

But then, why did I say that the 50-50 answer is correct from a practical point of view? There are two reasons:
1. No parent gives the same name to their two daughters.
2. The frequencies (and hence, the inferred probabilities) of given names are very low.

Let’s start by taking into consideration that two daughters in the same family always have different names. We do so by splitting the ‘F’ of the above possibilities into ‘x’ and ‘f’, where ‘x’ indicates the girls with a particular first name, and ‘f’ the other girls, who have any other name. This results in a sample space consisting of MM, Mf, Mx, fM, ff, fx, xM, xf, and xx.

‘x’ can be any name we want, including the name of the daughter we know to belong to your friend’s family (even if we don’t know that name). Then, after discarding MM as we did before, we can also discard the possibilities that don’t contain ‘x’. This leaves us with Mx, fx, xM, xf, and xx. Now, as parents never give two daughters the same name, we can also discard xx, and we remain with the four possibilities Mx, xM, fx, and xf.

If we assume that boys and girls are on average equally probable and that the genders of children of the same family are independent from each other, we can calculate the probabilities associated with the four possibilities:
P(Mx) = P(xM) = P(M) * P(x)
P(fx) = P(xf) = P(f) * P(x)

We can use the frequency ‘y’ with which the name ‘x’ occurs among girls as an estimate of its probability, and rewrite the two expressions as follows:
P(Mx) = P(xM) = P(M) * P(F) * y = 0.25 * y
P(fx) = P(xf) = P(F) * (1 − y) * P(F) * y = 0.25 * (y − y²)

As you can see, if y² is much smaller than y (i.e., if y is much less than 1, as stated in our condition 2), all four possibilities have, for all practical purposes, the same probability. Then, the probability that the children are both girls is indeed 50%.
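The whole calculation fits in a small Python function; y is the frequency of the known name, and the weights are the four probabilities derived above:

```python
def p_both_girls(y):
    """P(two girls | one child is a girl with a name of frequency y).
    Uses P(Mx) = P(xM) = 0.25*y and P(fx) = P(xf) = 0.25*(y - y**2)."""
    p_mixed = 2 * 0.25 * y             # Mx and xM: a boy plus the named girl
    p_girls = 2 * 0.25 * (y - y**2)    # fx and xf: two girls (xx is excluded)
    return p_girls / (p_mixed + p_girls)

print(p_both_girls(0.1))     # ~0.474 for an improbably common name
print(p_both_girls(0.0001))  # ~0.49997 for a typical name frequency
```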

But is it true that all names have a frequency much less than 1? If you look at the web site of the US Social Security Administration, you will find the page http://www.ssa.gov/oact/babynames/limits.html from which you can download the number of children born in any particular year and given any particular name (but only if that name was given to at least five children).

Let’s say that your friend’s daughter was born in 2011. Then, you quickly find out that of the 33,723 names listed, out of a total of 3,623,043 girls, the most frequent girl name was Sophia, which was given 21,695 times. If your friend gave his daughter the name Sophia, then y = 21,695 / 3,623,043 = 0.006, and the resulting probability of two girls is around 49.85%.

Even that is already close to 50%, and Sophia is the most frequent name: the average occurrence of a name is 3,623,043 / 33,723 ≈ 107 girls, which gives y ≈ 0.00003. With such a frequency, the probability of two girls becomes 49.999%. Or perhaps you find out that the name of your friend’s daughter is Hilde, which in 2011 was given only 5 times out of 3,623,043. In that case, the probability of his having two daughters is almost exactly 50%.

All in all, we can conclude that 50% is, for all practical purposes, correct, even if the reasoning of many people to reach that value is wrong.

Friday, November 16, 2012

Yet another book of puzzles

The last thing I would like is to put off readers of this blog by posting too many advertisements for my books. But, after all, I don’t publish things too often, do I?

I have just released a new puzzle book:

For the time being, you can only buy it from Lulu in print for AU$ 9.99 or from Smashwords in various e-book formats for US$ 1.99. It will take a while before you find it on Amazon, Barnes & Noble, etc. It always does.

This book contains 100 difficult CalcuDoku puzzles. CalcuDoku, introduced in 2004 as KenKen® (a registered trademark of Nextoy LLC), is a 9x9 numeric puzzle similar to Sudoku. But, unlike Sudoku, CalcuDoku doesn’t require you to learn complex strategies.

Each cage contains a target number and a code to indicate one of the four basic operations: “x”, “+”, “-”, and “:”. To solve a CalcuDoku puzzle, you have to solve all its cages; and to solve each cage you must write in its cells the digits that give you the cage target when you apply to them the cage operation.

Unless a cage consists of a single cell (in which case there is no operation and its solution coincides with its target), you can solve it in several ways. For example, a 2-cell cage marked “7+” admits six solutions: (6,1), (1,6), (5,2), (2,5), (4,3), and (3,4). But only one of those solutions is correct and will let you solve the whole puzzle.

You can discard the wrong solutions of all cages by repeatedly applying the rule that each digit between 1 and 9 can only appear once in each row and column.
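A brute-force cage solver takes only a few lines of Python. This sketch handles only “+” and “x” cages and assumes all the cage’s cells share a row or column, so its digits must all differ (always true for a 2-cell cage):

```python
from itertools import permutations
from math import prod

def cage_solutions(target, op, n_cells, size=9):
    """Ordered tuples of distinct digits 1..size that satisfy the cage."""
    ops = {"+": sum, "x": prod}
    return [digits
            for digits in permutations(range(1, size + 1), n_cells)
            if ops[op](digits) == target]

print(cage_solutions(7, "+", 2))
# [(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)] -- the six solutions of "7+"
```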

The first sixty puzzles of this book consist of randomly generated cages, like the following one:

They are difficult, but I limited their difficulty by allowing at most 2 cages with more than 200 possible combinations. Therefore, although I haven’t tried them all, I’m pretty confident that they can be solved analytically, that is, without having to guess.

To create the other forty puzzles, I used a different strategy: instead of generating random cages, I arranged them in fixed patterns and only generated random digits, targets, and op-codes. Here is the type of puzzle you can expect:

These puzzles are in most (but not all) cases more difficult than the random ones. In fact, some of them are quite diabolical. The first couple of patterned puzzles are easier than those that follow; otherwise, the difficulty of the puzzles varies in no particular order.

Of the patterned puzzles, I solved those numbered from 61 to 98. I haven’t solved puzzle 99, but I believe it should be possible to complete it without having to guess (which I never do). Puzzle 100 is a different type of challenge: it admits two solutions, which differ in three cages. I could have removed the ambiguity by splitting one of the affected cages, but I thought you might like to check it out, just for fun.

In case you are wondering, the shading of patterned CalcuDokus serves no practical purpose. It’s only there because it makes them prettier.

Thursday, November 15, 2012

How to calculate the twin paradox

The Web is full of pages about Special Relativity and how it is responsible for slowing clocks and creating the twin paradox. But what if a star ship accelerates during the first half of its journey and slows down during the second half?

WAIT A MINUTE! Shouldn’t we switch to general relativity when dealing with accelerated systems? Not necessarily. If you accelerate and decelerate along a straight trajectory that joins two star systems and ignore the curvature of space caused by other objects, you are fine with special relativity.

This article tells you how to calculate the time spent by a subluminal (i.e., no warp drives!) star ship constantly accelerating half of the way towards its destination and then constantly decelerating during the second half of its voyage. Its purpose is to support Science Fiction writers who need to write about interstellar travel. I adapted the formulae from an article from the University of California Riverside and, for fun, I re-obtained them from the standard Lorentz transformations for length and time. Initially, I thought I would also explain how it is done, but it would have been a bit too complicated for most people. If you are curious, I found a paper from the University of Leipzig and another article from UCR to be useful.

First of all, let’s define some terminology:
  • ‘a’ is the acceleration of the ship measured on the ship itself, technically called the proper acceleration of the ship: the acceleration felt by the passengers, i.e., what an accelerometer placed on the ship would measure.

  • ‘D’ is the distance between the point of departure and point of arrival, measured when the ship is moving at a speed much lower than the speed of light and with its engines off. Basically, you can take it as the distance we would measure from Earth.

  • ‘T’ is the time needed by the ship to make its journey, as measured on Earth. Earth orbits the Sun at 30km/s and the Sun moves at 370 km/s with respect to the cosmic microwave background. But we can ignore these speeds, because they only represent some 0.1% of the speed of light. On Earth, we are also subjected to its gravity and both Earth and the Sun move on curved trajectories, but these accelerations can also be ignored for our purposes. I just read that special-relativity effects slow down the clocks on GPS satellites (orbiting at 20,000km above sea level and travelling at one orbit per 12 hours, or 3.83km/s) by 7μs/day, while general-relativity effects (Earth’s gravitational force is much weaker up there) speed them up by 45μs/day.

  • ‘t’ is the time needed by the ship to make its journey, as measured on the ship.

Here we go. Let’s start with the time measured on Earth.  This is given by:

T = 2 * sqrt[(D/(2*c))^2 + D/a]

To make our life easier, we will measure time in y (years), distances in ly (light years: the distance light covers in one year), and speeds as fractions of c (the speed of light, ~300,000km/s). In these units, g (the gravitational acceleration at Earth’s surface, 9.81 m/s²) turns out to be 1.03 ly/y². With this choice of units, c disappears from the above formula, because c = 1.

For example, let’s suppose that we want to reach Proxima Centauri, the nearest star to our solar system (4.24 ly) and that our ship can sustain the acceleration of 0.1g. For the people left back on Earth, the journey will take:

T = 2 * sqrt[(4.24/2)^2 + 4.24/(0.1 * 1.03)] = 13.51y

With an acceleration of 1g (ten times higher), still from the point of view of Earth-bound people, the journey would take 5.87y (you only need to remove the 0.1 from the above expression).

The time measured on the ship is given by:

t = 2 * (c/a) * arcsinh[a*T/(2*c)]

With c = 1, the formula becomes:

t = 2 * arcsinh[a*T/2] / a

arcsinh is the inverse of the hyperbolic sine. Excel provides it as the ASINH function, and I have it in the calculator application on my Mac when I set it to scientific mode.

So, how much older do the passengers of our ship become when they travel to Proxima at 0.1g and at 1g? You only need to plug a and T into the formula to obtain 12.61y and 3.55y respectively.

Not a big deal, is it? In case you are wondering, the top speed, when the ship is half way to Proxima and switches from 1g of acceleration to 1g of deceleration, is given by:

v = a*T/2 / sqrt[1 + (a*T/(2*c))^2] = 1.03 * 5.87 / 2 / sqrt[1 + (1.03*5.87/2)^2] = 0.95c
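All three formulae fit in a short Python function (with c = 1, so times are in years, distances in light years, and a in ly/y²):

```python
import math

def journey(D, a):
    """Earth time T, ship time t, and top speed v for a trip of D light
    years at proper acceleration a, flipping to deceleration half-way."""
    T = 2 * math.sqrt((D / 2) ** 2 + D / a)
    t = 2 * math.asinh(a * T / 2) / a
    v = (a * T / 2) / math.sqrt(1 + (a * T / 2) ** 2)
    return T, t, v

g = 1.03  # 9.81 m/s^2 expressed in ly/y^2

print(journey(4.24, 0.1 * g))  # Proxima at 0.1g: T=13.51, t=12.61
print(journey(4.24, g))        # Proxima at 1g:   T=5.87,  t=3.55, v=0.95
```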

If you go to Tau Ceti, a star similar to ours that is 11.9 ly away, you get, for 1g acceleration:

T = 13.70y
t = 5.15y
v = 0.9901c

Now the differences become more significant. Still, you would have expected a more dramatic difference, wouldn’t you? I did.

OK. Let’s look at the planet HD 40307g. It is the latest Earth-like planet to be discovered. It might have a gravity twice as strong as Earth’s, but it orbits a star slightly cooler than ours with a 200-day period. It also seems not to be tidally locked, which would imply a day-and-night cycle. It could have liquid water and be able to sustain life. Its distance from us is 42 ly.

T = 43.90y
t = 7.40y
v = 0.9990c

Well, here the twin paradox is definitely dramatic. After a round trip, the twin on Earth would be 2 * (43.9 – 7.4) = 73y older.

Now, what type of propulsion could possibly accelerate a ship at 1g for more than seven years? You tell me!

Monday, November 12, 2012

Misunderstood Science: A question of probability

Probability and statistics are very confusing. Most people think they are self-evident and consider them easy to handle, at least in everyday life. But they are wrong. For starters, how many of you could state the difference between probability and statistics?


OK. Here it is: Probabilities are decided in advance and have to do with predicting outcomes; statistics are concerned with inferring probabilities based on observed outcomes.

For example, if you have a six-faced die and state that each face will come up on average once every six throws, you are talking about probabilities: you estimate probabilities in advance with a mathematical formula and use them to predict what you will get in practice.

If, on the other hand, you throw a six-faced die 600 times and count how many times each face comes up in an attempt to determine how probable each is, you are doing statistics. Obviously, statistics cannot ever be an exact science.

For one thing, no die can be perfectly balanced. Even if you started with a perfectly balanced die (and you tell me how you would determine that!), you couldn’t keep it that way, because with each throw imperceptible abrasions would remove tiny particles (perhaps just atoms) from one or more faces. All in all, if you throw any die enough times, you will discover that some faces come up, on average, more often than others.

But even with an ideal, perfectly balanced die (which, I repeat, is a physical impossibility), you cannot expect to get all faces exactly the same number of times. It is theoretically possible but, the higher the number of throws, the less likely it is. If you throw a die, say, 600 times, I bet you a thousand dollars against ten that you will spend the rest of your life trying to get 100 1s, 100 2s, etc. (I’ll settle the matter with your heirs)

How do you calculate a probability? Conceptually, it is simple: The probability of an outcome is given by the number of ways in which you can obtain that outcome divided by the total number of ways in which you can obtain all possible outcomes. That’s why it is easy to estimate that the probability of, say, a 5 when throwing a die is 1/6 (~16.7%), or the probability of head when throwing a coin is 1/2 (50.0%).

FYI, statisticians call the set of all possible outcomes the sample space. This is a bit twisted, because sample is a statistical term, while the sample space refers to the calculation of probabilities. But who says that scientists are always consistent?

Anyhow, the concept of sample space and the above definition of probability let you answer questions like: what is the probability of getting a 10 if I throw two dice?

The size of the sample space is 36, because you can get 6 possible values with each die, and the dice are independent of each other. The possible ways in which you can obtain a 10 are (4,6), (5,5), and (6,4). As a result, the probability of obtaining a 10 is 3/36 (~8.3%). By comparison, you can obtain a 7 with (1,6), (2,5), (3,4), (4,3), (5,2), and (6,1), which results in a probability of 6/36 (~16.7%).
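The two-dice counts are easy to verify by enumerating the 36 outcomes in Python:

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=2))  # the 36-outcome sample space

p_ten = sum(1 for pair in rolls if sum(pair) == 10) / len(rolls)
p_seven = sum(1 for pair in rolls if sum(pair) == 7) / len(rolls)

print(p_ten)    # 0.0833... (3/36)
print(p_seven)  # 0.1666... (6/36)
```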

Everything clear? Let’s check it out with a fun problem.

I place a ten-dollar bill in one of three identical boxes. Then, while you keep your eyes closed, I move them around so that I still know where the money is but you lose track of it. You have to choose one of the boxes; if it is the box with the money, the ten dollars are yours. Clearly, you have a probability of 1/3 (~33.3%) of winning. You make your choice by placing your hand on one of the boxes. But, before you can open your box, I open one of the other two boxes and show you that it is empty. I then ask you whether you want to stick to your original choice or switch to the other box that is still unopened. What do you do and why?

Obviously, you want to maximise the probability of winning. The questions you need to answer are: does it matter whether you keep the box you initially chose or switch to the other unopened box? And if it does matter, are you more likely to win if you keep the original box or if you swap it for the other one?

The answer seems obvious: there are two boxes and only one contains a reward. As there are no reasons for preferring either box, it is irrelevant which one you choose. They both have a 50/50 chance of being the winning one.

Or not?

Well, ... no. You are better off switching boxes, because the other unopened box is more likely to contain the ten-dollar bill than the one you initially chose.

Surprised? :-) Let’s see...

What is the probability that you chose the winning box? As I already said: 1/3. If you keep the box, you also keep the 33.3% chance of winning.

And what is the probability that the money is not in the box you chose? Obviously, 2/3. But if it isn’t, as I have already opened one of the two other boxes and showed you that it was empty, you must conclude that the money is in the remaining box. No doubt about that.

In conclusion, if you stick with your original choice, you have a 1/3 probability of winning, but if you switch boxes, you have a 2/3 probability of winning. Twice as high!

Where is the trick?

There is no trick. The whole story appears illogical only because of a widespread fallacy in how many people think about probabilities. For probabilities to be spread equally among different outcomes, the possible outcomes must be independent of each other. In our game, they initially were independent, but they ceased to be so when I opened one of the boxes, because I knew that the box was empty. That knowledge made the content of the third box no longer independent of your choice. If I had opened one of the boxes without knowing whether it was empty, the probability of finding the money in your box or in the third box would indeed have been spread equally, 50/50, as you probably thought.

If you are not convinced, consider this: had I opened one of the two other boxes at random, I would have had a 1/3 probability of revealing the money, exactly the same probability you had when choosing your box. And if the box I opened at random had turned out to be empty, that would not have introduced any dependency, because opening it blindly would have told you nothing about the third box.
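If words don’t convince you, a quick Monte Carlo simulation should. This little Python sketch plays the game many times under the rule described above, i.e. that I always open an empty box you did not choose:

```python
# Simulate the three-box game: the host always opens an empty box
# that the player did not choose. Compare "keep" vs "switch".
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        money = random.randrange(3)    # box holding the ten-dollar bill
        choice = random.randrange(3)   # player's first pick
        # Host knowingly opens a box that is neither the pick nor the prize.
        opened = next(b for b in range(3) if b != choice and b != money)
        if switch:
            # Switch to the only box that is neither picked nor opened.
            choice = next(b for b in range(3) if b != choice and b != opened)
        wins += (choice == money)
    return wins / trials

print(play(switch=False))  # ~0.333
print(play(switch=True))   # ~0.667
```

With 100,000 trials, the keep strategy hovers around 1/3 and the switch strategy around 2/3, exactly as argued above.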

Amazing, isn’t it? Martin Gardner once said: in no other branch of mathematics is it so easy for experts to blunder as in probability theory. Imagine, then, how easy it is for non-experts...

Sunday, November 4, 2012

How can people ignore a grunting pig?

A few days ago, I got as a present the rubber pig you can see in the following image:

Walking in Manuka and Kingston, two suburbs of Canberra, I often made the pig, which is about 20 cm or 8" long, grunt towards the people I encountered.  Some were amused, but most simply ignored it.  We live in a world in which most people don’t even notice a cute grunting pig.  How sad...

Wednesday, October 31, 2012

Authors' Mistakes #4 - Vince Flynn

Well, this is not really a mistake, but bad writing. Not something you would expect in a “New York Times No.1 Bestseller”. The book is “Extreme Measures”, and the problem is on the first page of Chapter 39.

Vince Flynn writes: In many ways it was what made him such a great leader, but his lack of trust and inflexibility had also made things almost impossible.

Since when is lack of inflexibility counterproductive?

Clearly, lack should be read as referring to trust only, not to trust and inflexibility, but the sentence is ambiguous and, as such, unacceptable.

To fix it, he would have only needed to swap the two shortcomings and write his inflexibility and lack of trust. Surprisingly, no copy editor picked it up...

Sunday, October 14, 2012

JSP, JSF and Tomcat Web Development

Apress of Berkeley (CA) has released the second edition of my book Beginning JSP, JSF and Tomcat Java Web Development.

The first edition of this book was released in November 2007.

Some years later, Apress asked me whether I would like to write a second edition of the book.  My reply was that not enough had changed to warrant an update.

Then, in early 2012, they asked me again. In the meanwhile, JSF had added three new libraries of elements and Java SE 7 had been released.  Michael Sekler, who had contributed to the first edition, agreed that I would go it alone on the second edition.

I said yes.

It took me five months to change the structure of the first edition, update and add functionality and examples, etc., but it was worthwhile.

You might be wondering why it took so long.  After all, most of the material for the second edition came from the first one, right?  Not really. The problem with writing computer books is that you cannot simply write your stuff and be done with it. You have to write examples for most of what you say, and this adds a whole new dimension to writing. You have to write the examples in the tightest possible way, because thousands of smart people will pore over them.  No slacking off in either format or content. Not even one superfluous or missing tab.

And then, once you have designed and written your examples, you have to test them in the most thorough way.  This also applies to the examples you have from a previous edition.  You see, in computing everything keeps changing. Therefore, an example that worked flawlessly a couple of years ago, even if it doesn’t fail outright (thanks to backward compatibility), will generate lots of warning messages during compilation. And a professional developer doesn’t want his/her code to generate even a single warning!

For web applications, you have to test your examples with all major web browsers (Internet Explorer, Google Chrome, Firefox, and Opera).  This often leads to changes that have to be retested again from scratch.

Once you are happy with your examples, you need to integrate them into the text of the book and explain them in enough detail for the reader to make full sense of them.  And sometimes the lines of code don't fit into the printed page...

And don’t think that the work is done once you send the chapter off to the publisher, because technical and scientific books are different from novels and most non-fiction books: before going through copy-edit, they are technically reviewed.  And even if you have been exemplary with your coding, debugging, and documenting, the TR (Technical Reviewer) might come up with points you had not considered, or points you had considered and discarded without mentioning them in the chapter.  Even if you avoid having to rework the examples, you will at the very least have to explain your choices...

One of the readers of the first edition complained that it didn’t contain enough material on JSF. In this second edition I did something about it: in the first edition, I had devoted to JSF a chapter plus a quick-reference appendix; in the new edition, I dropped the quick reference and added a second chapter to the main body of the book. By doing so, I added quite a bit of practical information on JSF, because much of the quick-reference appendix consisted of a list of elements that you can already find explained on several websites.

Another complaint about the first edition was that it included too many appendices and too much extraneous material. Well, there were as many appendices as chapters! I had done it for two main reasons: I wanted to keep the main body of the book uncluttered but I still wanted to provide information on everything needed to write a dynamic web page. The resulting table of contents was as follows:

CHAPTER 1 Introducing JavaServer Pages and Tomcat
CHAPTER 2 JSP Explained
CHAPTER 3 The Web Page
CHAPTER 4 Databases
CHAPTER 5 At Face Value (JSF Primer)
CHAPTER 6 Communicating with XML
CHAPTER 7 Tomcat 6
APPENDIX A Installing Everything
APPENDIX E SQL Quick Reference
APPENDIX F JSF Quick Reference
APPENDIX H Abbreviations and Acronyms

And here is the new table of contents:

CHAPTER 1 Introducing JSP and Tomcat
CHAPTER 2 JSP Elements
CHAPTER 3 JSP Application Architectures
CHAPTER 4 JSP in Action
CHAPTER 6 JSP and Databases
CHAPTER 8 JSF and eshop
CHAPTER 9 Tomcat 7
CHAPTER 10 Eshop
APPENDIX C Abbreviations and Acronyms

I eliminated the appendices on JSP, JSF, and Eclipse by merging their contents into the main body of the book. Then, I made the appendix on package installation disappear by explaining how to install all necessary packages as they become necessary. Finally, I dropped the appendix on HTML characters and said everything I wanted to say about HTML and SQL in the two remaining appendices. The result is a much better book, in which the chapters are clearly focussed and which you can read with less flipping back and forth.

What are you waiting for? Buy it!

Sunday, September 16, 2012

Authors' Mistakes #3 - Lost in Space

Lost in Space is a Science Fiction film made in 1998. The script was written by Akiva Goldsman, who a few years later received an Academy Award for his adaptation to the big screen of A Beautiful Mind.

Akiva wrote the scripts of many successful films, such as The Client, I, Robot, The Da Vinci Code, I Am Legend, and Angels & Demons, but in Lost in Space, which he also produced, he made a very bad mistake.

*** Warning: spoiler! ***
At the end of the film, Prof. John Robinson (William Hurt) saves their spaceship by ordering the pilot (Major Don West, played by Matt LeBlanc) to fly through a crumbling planet.

The problem with that solution is that the ship gains speed while flying towards the planet’s centre, but loses it all again while emerging on the other side. If the ship was unable to reach escape velocity before plunging towards the planet’s centre, it will still be unable to reach it after passing through the planet’s core.

Perhaps the writer thought of the so-called “slingshot” effect, used by deep-space probes to exploit the gravitational pull of a planet to gain speed. But that only works because the probe passes well off-centre, not plunging towards the middle of the planet, as in the film. If the probe approaches the planet from “behind” (with respect to the direction of the planet’s orbital motion), the planet pulls the probe along. The probe has from the very start enough speed not to remain trapped in the planet’s gravitational well, but this “pulling”, besides changing the direction of its motion, gives it some additional speed.

There are two other mistakes in the same scene.

The first mistake was that crossing the planet would have taken our adventurers more than half an hour, while in the film everything happens within a few minutes.

As the planet had a gravity comparable to Earth’s, we can assume that it had a similar mass and volume, at least as a first approximation. Now, the potential energy of the ship on the surface of the planet is given by GMms/r (forget the signs), where G is the gravitational constant, M is the mass of the planet, ms is the mass of the spaceship, and r is the radius of the planet (~6,000 km, like Earth’s). The acceleration due to gravity on the surface of the planet is GM/r². As we know that on the surface of Earth the gravitational acceleration is about 10 m/s², we can calculate without much fuss that GM/r = 10 m/s² * r = 60 km²/s², without having to look up the values of G and M. This means that the potential energy of the ship on the surface of the planet is U = 60 km²/s² * ms. Now, when the ship passes through the core, it will have converted all its potential energy into kinetic energy. As the kinetic energy is given by ½ ms v², where v is the ship’s speed, you can easily calculate that the ship, if it was at rest at the beginning, will have flown through the centre of the planet with a speed given by v = sqrt(2 * 60) km/s = ~11 km/s (which, incidentally, is the escape velocity, which on Earth is 11.2 km/s, as I could have stated at once). Taking as average speed over the crossing half of that peak value, the ship would have needed 12,000 km / (5.5 km/s) = 2,182 s = ~36 minutes to cross the whole planet.
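The back-of-the-envelope estimate can be packed into a few lines of Python. Note that it inherits the same crude assumptions as the prose: a point-mass potential and an average speed of half the peak value (a more careful uniform-density treatment gives simple harmonic motion and roughly 40 minutes, which only strengthens the point):

```python
# Back-of-envelope: time to fall through an Earth-like planet,
# starting from rest at the surface.
from math import sqrt

g = 10.0          # m/s^2, surface gravity (Earth-like, rounded)
r = 6_000_000.0   # m, planetary radius (Earth-like, rounded)

v_centre = sqrt(2 * g * r)          # speed at the core (energy conservation)
avg_speed = v_centre / 2            # crude average over the whole crossing
crossing_time = 2 * r / avg_speed   # diameter / average speed, in seconds

print(v_centre / 1000)      # ~11 km/s
print(crossing_time / 60)   # ~36 minutes
```

Either way, the answer is tens of minutes, not the few minutes shown on screen.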

But there is still another mistake.

After emerging from the planet, West says: “The planet’s gravity field is collapsing,” which is total nonsense, as gravitational forces don’t collapse. Even in a supernova it is the matter forming the star that implodes, not its gravitational field.

Maureen, John’s wife, then observes: “We’ll be sucked in.” This is also moronic for two reasons: firstly, the planet had been “sucking them in” all the time; secondly, the gravity pull of a planet remains the same, whether it is in one piece or in a million pieces.

Saturday, September 15, 2012

Category Romance Novels

Category (or Series) Romance novels are those small and inexpensive paperbacks with sweet and happy couples portrayed on the front cover. You find them in stores like K-Mart and Target but seldom in bookshops.

Famous historical novels like “Gone with the Wind” are not Category Romance novels. The love story between Rhett and Scarlett is central to “Gone with the Wind”, but is not its only theme. Category novels are much more narrowly focussed on the relationship between their protagonists.

Romance novels are about love relationships. The tradition started by Jane Austen with her romantic novels set in the Regency Era (1811-1820) endures, but modern romance novels have developed far beyond the intrigues and rich dresses of the British aristocracy of the early 1800s. To see how true this is, you only need to look at the website of Mills & Boon/Harlequin, the best-known publishers of Category Romance. Harlequin is a Canadian publisher that acquired M&B (a UK publisher) some decades ago.

Before I talk about Romance writing, let me give you some figures, which I got from Wikipedia. In 2008, M&B was selling 200 million novels per year, and it is currently publishing about 100 books and 100 e-books per month. Harlequin is currently releasing 120 new titles each month in 29 different languages in 107 international markets. In the UK alone, M&B has over 3 million regular readers. If you thought that Romance novels were a niche market, think again!

I will reproduce here how M&B defines the characteristics of some of their series (i.e., imprints), extracted from the Australian submission guidelines. They publish several books in each imprint every month.

Sexy: These stories are all about passion and escape, glamorous international settings, captivating women and the seductive, tempting men who want them. Length: 50,000 words. Spine colour: red.

Sweet Romance: Sweet Romance stories are all about real, relatable women and strong, deeply desirable men experiencing the intensity, anticipation and sheer rush of falling in love. Length: 50,000 words. Spine colour: light blue.

Medical: Intense and uplifting romances set in the medical world. Experience the breath-taking rollercoaster of emotions, ambitions and desires of today's medical professionals. Length: 50,000 words. Spine colour: teal.

Historical: Richly textured, emotionally intense novels set across a wide range of historical periods - ancient civilisations up to and including Second World War. Length: 65,000 words. Spine colour: blue.

Blaze: Blaze is Mills & Boon's sexiest romance series, yet there's more to these books than simply sex. We ask our authors to deliver complex plots and subplots, realistic engaging characters and a consuming love story you won't be able to forget. Blaze stories are fun, flirty and always steamy! Length: 60,000 words. Spine colour: orange.

Blush: Big romance novels filled with intense relationships, real life drama and the kinds of unexpected events that change women’s lives forever! Length: 85,000 words. Featuring relatable characters who strike a chord with the reader regardless of the book’s setting or plot points. Length: 55,000-60,000 words. Spine colour: purple.

Intrigue: Crime stories tailored to the series romance market packed with a variety of thrilling suspense and whodunit mystery. Length: 55,000-60,000 words. Spine colour: dark blue.

Desire: Contemporary, sensual, conflict-driven romances that feature strong-but-vulnerable alpha heroes and dynamic heroines who want love - and more! Reads that are always powerful, passionate and provocative. Length: 50,000-55,000 words. Spine colour: pink.

Romantic Suspense: These novels are romance-focused stories with a suspense element. Powerful romances are at the heart of each story, and the additional elements of excitement, adventure and suspense play out between complex characters. Length: 70,000-75,000 words. Spine colour: dark purple.

Nocturne: Dark, sexy, atmospheric paranormal romances that feature larger-than-life characters struggling with life-and-death issues. Length: 80,000-85,000 words. Spine colour: black.

As you can see, you can find all sorts of Romance novels. Ultimately, they are all meant to transport a lady reader to a world of fantasy in which Good always prevails over Evil. Harlequin has similar guidelines. But Harlequin also has a series of e-books, “Historical Undone”, with a length of between 10,000 and 15,000 words. This could be a good entry point to test the waters before writing and submitting a full-length novel.

The series that I find most congenial is “Sweet Romance”. This is because the novels are short (50,000 words) and don’t include explicit sex. It’s not that I am so puritanical, but I don’t like to read about “sliding members” and “penetrating male sexes”.

Neither my wife nor I had read Category Romance novels before discovering that they have such a huge market, but we are warming up to the idea of writing Romance stories together.

Here is our recipe for writing a successful “Sweet Romance” novel, taken from the writings of Valerie Parv and Emma Darcy, two very successful Romance authors.

If there is a character-centred genre, this is Romance.
  • Romance novels essentially have two characters: the heroine and the hero. All other characters are only there for support and shouldn’t do much.
  • The protagonist must be the heroine. She must be beautiful, intelligent, honest, and successful. This doesn’t mean that she must be perfect, but almost. In essence, she must be somebody with whom any woman might like to identify.
  • The hero must appeal to the vast majority of Romance readers. Therefore, he must be handsome, sexy, an alpha male. But he should also be neither much younger nor much older than the heroine and (obviously) honest, courageous, and generous. In essence, every reader should be able to vicariously fall in love with him. Incidentally, surveys have shown that readers prefer dark-haired heroes.
  • The protagonists never engage in casual sex, never steal, and never use violence. It used to be that the protagonist needed to be a virgin, but this is no longer strictly necessary, although it is not appropriate to dwell on previous sexual relationships of the protagonists.
  • If the hero does something “naughty”, like telling a lie or getting drunk, you have to explain in detail why, and show that he is in fact a good man who immediately feels guilty for committing such a bad act. You should also make clear that it is a one-off and that it will never happen again. Best of all, don’t make him do anything that you then have to spend pages and pages of contrition to recover from.
  • No swearwords, ever!
  • No physical features that would make it impossible for the reader to identify with the heroine. That is, the heroine must not be too tall or too short, with a weight problem or anorexic, etc.
  • The plot should be linear. Forget flashbacks and memories. They only distract from what is happening now.
  • At least in the “standard” 50,000-word novel, no subplots. There is not enough space for them and, in any case, they distract from the main plot.
  • The protagonist and the hero should already meet in the first chapter. Possibly, in the very first paragraph.
  • There must be at least one major conflict between the protagonist and the hero, and this must become clear as soon as possible. Ideally when they meet. This conflict (supported perhaps by a couple of additional minor issues) is what keeps the protagonists apart, even if they feel attracted to each other immediately.
  • The main plot is the evolution of the relationship between the protagonist and the hero and the ultimate resolution of the conflict between them. The relationship should go through two or three crises, of increasing seriousness, alternating with peaks of happiness/optimism, to reach a satisfyingly happy ending.
  • The protagonists marry in the last chapter, with a very short resolution, if any. This implies that you must resolve all minor issues and tie up all loose ends before you resolve the main conflict and bring the protagonists fully together. Note that love, at least in Romance novels, is forever.
  • Nowadays, they can have full intercourse before marrying, but it shouldn’t happen too early in the novel. This is because intimacy is something to achieve, and only when it is clear that the protagonists are in love. Make them do it too soon, and you will struggle to hold them apart until they finally unite at the end.
General Points
  • As the readership consists almost exclusively of women, you must not write what a woman is likely to find distasteful, especially if it refers to the protagonist.
  • The point of view must be that of the protagonist. You can briefly switch to the point of view of the hero, but only if strictly necessary to support the plot and only briefly and clearly. In other words, omniscient and multiple viewpoints are out.
  • You have to maintain pace throughout the novel. This is done through dialogues and by surprising or shocking the reader. Suspense and short chapters help. Try to end each chapter with something that might encourage the reader to start the next one. These are short books, and many readers go through one of them every day.
  • Try to set the novel in a stimulating environment. Incidentally, novels set in the Australian Outback seem to be quite successful with American readers.
  • Do not waste many words on the scenery. Ultimately, the readers are interested in the characters, and in particular the heroine, more than on anything else.
  • Narrative should be kept to a minimum. Some readers page through books and buy those that contain more dialogue.
  • Like with any other form of writing, remorselessly cut anything that doesn’t advance the plot or help develop the characters. With only 50,000 words available, you cannot afford long-winded descriptions or speeches.
  • Limit each chapter to about 20 pages, so that the whole book consists of 10 to 12 chapters.
  • Use short sentences in small paragraphs. A lot of ink without breaks is usually discouraging.
  • If you are a man, use a female pen name and invent a persona to go with it, because almost no reader will think that a male author can create a good female fantasy.
To write successful novels, you always have to conform to what the readers want, and the readers of Category Romance have very strict requirements. And this straitjacket is what makes it interesting for me. I like the challenge.

Wednesday, August 15, 2012


I recently watched on TV a discussion about the concept of aboriginality. What is it and who should have the right to claim it?

When the whites came to Australia and claimed it for themselves, they almost completely destroyed Aboriginal culture and heritage. They did it in many ways, some blatant and some more subtle. They did it by making a sport out of shooting Aborigines. They did it by taking away from their families the children who had a white parent (thereby creating what is known as the Stolen Generations). They did it by banning the use of Aboriginal languages and ceremonies. They did it by forcing Aborigines to abandon their traditional way of life. They did it by spreading diseases and alcohol. They did it by mocking and ridiculing black people.

Today, many descendants of the first Australians live in appalling conditions. On average, they live shorter lives than non-Aboriginal Australians. Their unemployment rate is shocking. And they end up and die in jail all too often.

It is not possible to erase what has already happened, but it is our collective obligation to provide the Aborigines with a better-than-fair chance to progress. I said “better than fair” because, after centuries of neglect and marginalisation, they need all the help they can get. Not as an act of charity, but as an act of justice and hope. They need help in finding ways of helping themselves.

We have in Australia a kind of affirmative action for Aborigines and Torres Strait Islanders (the Torres Strait is between Australia and Papua New Guinea). For example, all application forms for jobs at any level of Australian government (local, state, or federal) include a box to tick if you can claim descent from the original Australians.

If you are recognised as an Aborigine, you have access to funding and grants that are unavailable to other Australians. This makes some Australians unhappy, but I fully support it. It is in my opinion a must.

That said, the fact that benefits are associated with being recognised as a person of Aboriginal descent clouds the issue of Aboriginality. Inevitably, some claim to be Aborigines to take advantage of the benefits, although they are not. Because of the benefits, the question of whether somebody is an Aborigine ceases to be purely a matter of identity, heritage, and culture.

Who decides whether you are an Aborigine? There are Aboriginal Councils and other organisations that can issue certificates of Aboriginality, but on what basis?

If somebody has the colour and the somatic traits of an Aborigine, speaks an Aboriginal language, and is known by the elders of the clan to which he belongs, there cannot be any doubts about his aboriginality.

Similarly, if one looks white and cannot prove to have any Aboriginal ancestry, there is a good chance that he is not an Aborigine.

But most cases are not so “black and white” (pun intended!).

Some people only have an Aboriginal grandparent and look completely white, even with blue eyes and blond hair. And yet, having grown up in a family that was known in their neighbourhood to be Aboriginal, they consider themselves Aborigines.

Others were taken away from their Aboriginal mother and forcibly adopted by a white couple when they were babies. They might look dark enough (whatever that means), but have sometimes no idea where they were born, and might only know something of what it means being an Aborigine from what they have learned as adults.

As far as I know, in the USA, belonging to a Native American tribe is determined on the basis of a DNA test. Having a heritage and belonging to a culture has not much to do with DNA, does it? Is a DNA test, then, what is needed? If not, would it be fair to exclude people who know nothing about Aboriginal culture only because they were forcibly removed from their families?

I believe that we should define clear and measurable criteria and require that at least one of them be satisfied. Obviously, together with the desire to be recognised as an Aborigine, because the last thing we want is to pigeonhole people against their will. The first possible criteria that come to mind are:
  1. Traceability of ancestry to an Aboriginal person.
  2. Presence of any Aboriginal DNA detected with a set probability, similarly to what is done in court to determine paternity.
  3. Being recognised as a member by an Aboriginal clan.
Satisfying any one of them should suffice. I have certainly left something out, but the point I want to make here is that the decision cannot be left to the subjective opinion of people who have no association with the person who is applying to be recognised. The applicant shouldn’t feel that his Aboriginality is arbitrarily questioned.

Like every law and regulation, what I propose would also be subject to abuse. For example, a single corrupt elder of a recognised clan (there can be crooks anywhere) could be bribed into signing illegitimate certificates of Aboriginality. And any analysis, including a DNA test, can be faked.

But a clear set of rules would at least be verifiable. I’m convinced that the number of abuses would be reduced.

All this says nothing about who should get support in preference to others, because “greater need” is a fishy concept. But (just for the pleasure of throwing in a cliché) Rome wasn’t built in a day...

Monday, August 13, 2012

Authors' Mistakes #2 - Colin Forbes

I had never read anything by Colin Forbes. Knowing how successful he was and how many novels he had written, when I saw a copy of Double Jeopardy at a heavily discounted price, I grabbed it.

I was keen to read that particular novel because it was set in Zurich, where I lived for eleven years.

Anyhow, I soon discovered that I hated his writing style. Compared to David Baldacci’s, to name one of my favourite authors, Forbes’s prose sounded dry and rough. The dialogues were somewhat primitive. I only kept going because, as I said, the story was set in Switzerland and the plot promised to be interesting.

Unfortunately, when I completed the third chapter, having read only 38 of the novel’s 373 pages, I gave up.

The first mistake, at the beginning of Chapter 3, was that Keith Martel (the hero of the story), while at Heathrow, learns from his boss that he will fly to Geneva instead of to Zurich but, on the same page, he then lands in Zurich. This was a genuine mistake. It could not have been that Keith flew to Zurich via Geneva, because then he would have gone through passport control in Geneva, not Zurich.

The second mistake was that Keith, when his plane reached the Swiss border near Basle (80 km before landing in Zurich, when planes usually start their descent), saw the Matterhorn through a window across the aisle. This is impossible for two reasons. Firstly, the Matterhorn is on the Swiss-Italian border, on the south side of the Alps, while Basle is more than 200 km north of that border. With the Alps in between, even Superman would have had problems seeing the Matterhorn from the skies above Basle. Secondly, the plane was turning east. This means that Keith, sitting on the left side of the plane, would have seen only sky through the starboard-side windows.

I also had a third problem: there is no square in Zurich named Centralhof, and certainly none that fits Forbes’s description. Perhaps he didn’t want to risk being sued by people living at a real address, and so invented a realistic-sounding square.

But by then, I was fed up with the “raspy” prose, and decided to give up on the book. Who cares about possibly good plots if the reading is not pleasurable?

Wednesday, August 8, 2012

Authors' Mistakes #1 - Lee Child

I usually alternate my reading between non-fiction and fiction books. Early today, I started reading Die Trying by Lee Child. I like the Jack Reacher character, so cool and efficient.

Anyhow, I just discovered a mistake at the beginning of Chapter 4. Somehow, it is comforting to know that even famous authors, despite all their experience and the resources of their publishers, make mistakes.

Here is the opening of Chapter 4.

Right inside the shell of the second-floor room, a second shell was taking shape. It was being built from brand-new softwood two-by-fours, nailed together in the conventional way, looking like a new room growing right there inside the old room. But the new room was going to be about a foot smaller in every dimension than the old room had been. A foot shorter in length, a foot narrower in width, and a foot shorter in height.

The new floor joists were going to be raised a foot off the old joists with twelve-inch lengths of the new softwood. The new lengths looked like a forest of short stilts, ready to hold the new floor up. More short lengths were ready to hold the new framing a foot away from the old framing all the way around the sides and the ends.

Well? Have you noticed anything? I did, as soon as I read the two paragraphs. Come on... Isn’t it obvious? OK, I’ll tell you, as you know I would!

The first paragraph states that the new room being built inside another room is “a foot shorter in length, a foot narrower in width, and a foot shorter in height”. Then, supposing that you place the new, smaller room in the centre of the old one, how much space do you have left around it? I should say, half a foot. Right?

And yet, Lee Child’s second paragraph states that the new framing is a foot away from the old framing all the way around the sides and the ends.
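The contradiction is easy to spell out with a toy calculation (the 10-foot width below is a hypothetical figure of my own, not taken from the book):

```python
# Hypothetical dimensions (not from the book): suppose the old room is 10 ft wide.
old_width = 10.0              # feet
new_width = old_width - 1.0   # "a foot narrower in width", as Child writes

# Centre the new shell inside the old one: the leftover foot of space
# is split equally between the two sides.
gap_per_side = (old_width - new_width) / 2
print(gap_per_side)  # 0.5 -- half a foot per side, not the foot Child claims
```

Whatever width you pick for the old room, a one-foot difference centred inside it leaves only half a foot of clearance on each side.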

It gives a whole new dimension to carpentry, doesn’t it?

In the past, I reported such errors, at least the most blatant ones, to the publisher. But I have never received a reply. Their loss, I should say.

Wednesday, August 1, 2012


In Australia, like in many western countries, we have speed cameras.

What is odd, though, is that their position is announced in advance with big street signs (at least in the Australian Capital Territory, where I live).  Speed cameras are called “speed traps” here.  But what trap is visibly marked so that the prey doesn’t “fall” into it?

There are some mobile cameras, but very few.  Often, instead of being used by the police to issue speeding tickets, they are connected to big panels that tell you your speed, as if the speedometer mounted in your dashboard were not enough.

Canberrans are up in arms whenever the government announces more speed cameras.  In the newspapers, you read articles accusing the Police of being money grabbers.  It is as if the Police didn’t have the right to fine you when you break the law.  Evidently, the motorists think they have the right to exceed the speed limits.

I say, fill the territory with speed cameras and place police cars with Doppler radars around curves, at the bottom of down slopes, and hidden in the shrubs!  Hit as many speeders as possible and hit them hard for breaking the law in a way that endangers everyone.  Indeed, most accidents occur because people drink & drive and/or speed.  Breath checks are good, but they also slow down the traffic.  Speed checks don’t.

The Swiss do it right: Zurich has dozens of speed cameras, which take very precise measurements of car speeds (how could it be otherwise?  They are Swiss!  :-).  But fines are only issued if the measured speed exceeds the speed limit by more than 5 km/h.  By giving a margin of 5 km/h, they generously take into account tolerances.  I was once fined forty Swiss Francs (about $40) because my speed was 6 km/h above the speed limit.  Fair enough.

Let the speeders be annoyed that they get caught.  I don’t know how much the fines are, because I have never been fined since I came back to Australia four and a half years ago, but I say: make them higher.  Sooner or later, people will start thinking that speeding doesn’t pay.

Tuesday, July 31, 2012

Copyright, Copyleft, and Copywrong

The Free Software Foundation (FSF) founded by Richard Stallman about three decades ago is based on the idea that software users should be able to collaborate with each other. Proprietary software, with its restrictions imposed by its copyright holders, makes that impossible. Users should be able to run the software, study it, modify it, and redistribute it.

To provide an alternative to the proprietary versions of the Unix operating system, Stallman started the GNU project, which, together with Linus Torvalds’ Linux kernel, resulted in what everybody today calls the Linux operating system (which should actually be called GNU/Linux, but few bother).

GNU/Linux (I do bother) is a great achievement, and thousands of developers have contributed to its success by extending it, maintaining it, and adding useful applications to it.

FSF software is free for everyone to use, adapt, and redistribute, but only as long as the modified or repackaged software remains free. To achieve this, the software is licensed with what is called a copyleft, of which the standard GNU licence is a particular version.

Stallman is an extremely intelligent person, an inspired speaker, and totally dedicated to the free software movement. His ideas are contagious, and he has my admiration, but the fact that he believes in what he says, or even that many believe in what he says, doesn’t automatically mean that what he says is right for you and me.

Like with every social or political movement, there are, broadly speaking, two types of people who favour the free software movement: the true believers and the opportunists.

The true believers deserve our respect. They put a lot of effort into developing software to see it “fly”. Their reward is to know that thousands or millions of people around the world use what they have developed. They keep learning and love to discuss the intricacies of their products with like-minded people.

The opportunists are those who are against proprietary software because they like to get as much as possible for free. On the basis of what I have learnt about human nature, I wouldn’t be surprised to discover that, unfortunately, they are the vast majority.

By misusing the ideals of the FSF, they can take the high moral ground and portray themselves as people who fight the rich and allegedly corrupt multinationals (e.g., Microsoft and Apple). What better excuse is there for obtaining pirated copies of proprietary programs than an act of civil justice?

Allow me to be sceptical about moral choices that benefit the person who takes them.

Where do you stop? Everybody knows that the government is corrupt. Why should we then pay taxes? And the supermarkets exploit the farmers and make too much profit on what they sell. Isn’t it then justified to “appropriate” stuff from their shelves? In Italy, in the early 1970s, when one third of the population voted for the Communist party, we even had a term for it: proletarian shopping (i.e., spesa proletaria).

Give me a break!

Do these abuses invalidate the FSF ideals? Of course not, but millions and millions of people have found in them a justification for stealing. And, perhaps not surprisingly, the idea of free software has contributed to the concept of “free everything”. The prevailing culture today is that it is OK to “share” some songs, even though “sharing” has become a euphemism for downloading hundreds of songs for free. And scripts, books, and films are sometimes available for download within days of their release.

But make no mistake: downloading illegal copies of any copyrighted material is stealing.

In 2007 I published an IT book with a list price of US$40, but Amazon sells it for about US$26 in printed form and for less than US$18 as an eBook. For prices that I consider moderate, you get almost 450 pages of very specialised material. And yet, you can download free pirated or scanned copies of the book from several websites.

Whom are we kidding? Those who deprive authors like me of a couple of dollars of royalties per copy are not heroic people who fight their quixotic battle against the multinationals. They are thieves.

The downloading of pirated music is to a large extent to blame for the current crisis affecting the music industry, and the publishing industry is next.

Few authors, musicians, actors, and directors make a living from their artistic endeavours, and even fewer become rich. Piracy is an additional unnecessary hurdle that emerging artists, developers, and small independent publishers need to overcome.

And, while I am at it, not only do I think that copyright is perfectly justified and should be enforced; I also think that it shouldn’t expire.

If you build a house or a company, you can pass it on to your heirs in perpetuity. Once your heirs have paid the necessary taxes, fees, succession duties, and what have you, they will own the physical results of your work. Proudhon said that property is theft, but Communism didn’t work, did it?

The same happens with less tangible goods, like shares, bonds, and plain old cash.

But if you invest your time and effort in producing intellectual property, your heirs will lose all their rights 70 years after your death. Now, 70 years seems like a long time, but can you imagine applying the same rule to a farm or a factory, or even to a painting or a sculpture? Can you imagine that one day some government official will knock on the door of your great-grandchildren and evict them from the house you built because it is no longer theirs? I don’t think so.

In which way is intellectual property different from brick and mortar? Isn’t a fundamental doctrine of Economics that higher risk should be rewarded with higher yield? And what is more risky than writing a book?

You might resent the fact that a book keeps generating royalties long after the author is dead, without the need for any additional effort. But wait a minute! What about the dividends you get from shares and the interest you get from bank deposits? Isn’t it the same?

Most books stop selling after a few years. Books in print and being sold longer than 70 years after the author’s death are rare exceptions. Therefore, an unlimited copyright would only make a difference for the few “classics”. And for those, to reiterate my point, why shouldn’t the heirs benefit from them?

There is also another aspect to consider: when a copyright is extinguished, it is not just the royalties that disappear. The copyright holder loses any control he previously had. This means that anybody will be entitled to re-publish the book (or the song) with any alteration he might like to make! That seems completely preposterous. A dictator might decide to adapt a text to support his ideas. In fact, there is a never-ending debate about whether the spelling of old texts should be adapted to “freshen up” centuries-old books.

Now, I know that the Constitution of the United States states that copyright should expire but, as Mark Twain once suggested, why not set it to a million years? That would be constitutional, wouldn’t it?