I use this blog as a soap box to preach (ahem... to talk :-) about subjects that interest me.
Showing posts with label Science. Show all posts

Tuesday, March 24, 2020

COVID-19 - flattening the curve

My last post was in September 2015. Somehow, after that, I lost interest in writing my sermons and stepped off the soap box. But the way in which politicians and journalists speak about "flattening the curve" when talking about COVID-19, as if it were obvious to the vast majority of the public, prompted me to attempt an explanation of what "flattening the curve" actually means. It still involves logarithmic plots, which many will find confusing, but it might help.

I want to post this article as quickly as possible. My apologies for the typos I will inevitably make.

Here are the curves the politicians are talking about, drawn for China, Korea, Iran, and Japan:

The numbers at the bottom indicate the days since the World Health Organisation started reporting data on COVID-19 on 2020-01-21. You can see them on the WHO web site. For this plot, I have used all the reports up to #63, published today (2020-03-23).

The numbers on the left tell you how many new cases were reported for each day. In fact, this is not entirely true, because the plot shows weekly averages to avoid wild fluctuations. That is, every point of each curve is averaged with the preceding and following three points. Therefore, these plots are useful to see the trends, rather than individual values.

The numbers of daily new cases of all countries represented in this plot reached a maximum before starting to decline. It means that the drastic measures taken in those countries managed to bring the contagion under control. Notice that Korea and Japan have an initial "bump" followed by a systematic increase. This could be due to the transition from imported cases to community-transmitted cases, but that is only my speculation and I could be completely wrong.

More importantly, note that China and Korea are experiencing a resurgence of new cases in the past week or so. This could be due to the relaxing of the containment measures or, as China has stated on several occasions, to infected residents returning home from abroad, thereby carrying the virus back with them. In any case, unless great attention is paid, the contagion could flare up again, like a not-completely-extinguished bush fire.

While China, Korea, and Iran experienced a rapid increase in new cases, Japan quickly managed to bring the increases under control, as shown by the fact that the curve is "flatter" (first hint at what "flattening the curve" means, although it will become clear at the end).

Let's have a look at Germany, Italy, and Spain:

The curves are bent but haven't reached a maximum. This means that the measures adopted by these countries have started to bite, but the situation will become worse before beginning to improve. In other words, the bending of the curves indicates that the number of daily new cases is still increasing, although less rapidly. The day on which the number of new infections will begin to decrease is still to come.

Finally, let's have a look at Australia and the United States:

Do you see how the lines go straight up? These countries are still in the "exploding" phase of the contagion. In semi-logarithmic plots, straight lines mean exponential growth. It means that the number of new cases is growing exponentially. The situation in the USA is worse than in Australia because in Australia the daily increase is around a couple of hundred, while in the USA they get several thousand new cases per day.

The last plot I want to show you is of the total number of cases, rather than of the number of daily new cases:

First of all, notice that the numbers on the left now reach 100,000. For those with knowledge of Mathematics, I will say that these curves are the integral of those shown in the first three plots. That is, these curves show the areas under the previous curves. Perhaps not surprisingly, the bottom curve of this fourth plot is that of Japan, which is the country with the lowest number of daily new infections.

As I already said, a straight line represents an exponential growth. The thin grey lines are there for reference, and tell you in practical terms how to read the country-specific curves. The slope of the lowest (dashed) thin line represents a doubling of the total number of cases every 10 days. As you can see, Japan managed to contain their total number of cases around that figure, as the curve for Japan is almost parallel to the 10-day-doubling line.

The other thin lines, closer and closer to the vertical, represent doublings of total number of cases every 5, 4, 3, and 2 days. As you can see, the curves of most of the countries shown are clustered around the 2-day-doubling line, the only exception being Australia, which is close to the 3-day-doubling line.

These are the curves that the governments try to flatten with their measures (some might refer to the curves shown in the first three plots, but if you flatten one, you also flatten the other). Here, like in the first three plots, you can clearly see that China and Korea have managed to flatten their curves, while the USA and Australia are still shooting straight up.

To give you a better idea of what a 2-day doubling means, consider that every 100 infected people become 1,131 after one week, 12,800 after two weeks, and 144,815 after three weeks. Staggering numbers. With 10-day doublings, the initial 100 cases become 162 after one week, 264 after two weeks, and 429 after three weeks. This is the difference between Italy, overwhelmed by the sick, and Japan.
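For the curious, the figures above are just repeated doubling. A minimal sketch of the arithmetic:

```python
# Growth of an initial 100 cases under a fixed doubling time,
# reproducing the figures quoted above (pure arithmetic, not epidemiology).

def cases(initial, doubling_days, elapsed_days):
    """Cases after elapsed_days, given the doubling time in days."""
    return initial * 2 ** (elapsed_days / doubling_days)

for doubling in (2, 10):
    for weeks in (1, 2, 3):
        n = cases(100, doubling, weeks * 7)
        print(f"doubling every {doubling:2d} days, after {weeks} week(s): {n:8,.0f}")
```

With a 2-day doubling time this prints 1,131, 12,800, and 144,815; with 10 days, 162, 264, and 429.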

Saturday, February 8, 2014

CSI Miami got its Physics wrong

In the episode titled Sinner Takes All of the 10th and final season of CSI Miami, the CGI people got an animation wrong and nobody noticed.


They showed a bullet in slow motion.  The grooves impressed on the bullet by the rifling of the barrel were left-handed, but the bullet was spinning in the opposite direction, as shown in the following sketch:


That was clearly wrong.  To convince yourself of the mistake, imagine looking at the inside of the barrel, as shown in this classic image from Sean Connery's Bond films (actually, I flipped it horizontally because the rifling in the original image was right-handed):


The bullet, forced to go through the barrel, would spin in the same direction as the rifling, not in the opposite one as shown in CSI Miami.

Tuesday, December 31, 2013

Reflections on Faith and Science

I haven't written a single article during this month of December.  It is today or never.

I am an atheist.  No doubt about it.  I don't believe that some all-powerful, self-conscious entity is interested in our lives or even that it exists.  There are no reasons for believing that a God exists, but neither are there reasons for not believing that it exists.  Therefore, the most logical position is to be an agnostic, not an atheist.  I should be able to say: I neither believe nor disbelieve.  And yet, I don't believe.  For somebody like me, who has a scientific background, this is not completely satisfying, because I am asserting something that can be neither proven nor disproven.

In any case, the existence or non-existence of God doesn't affect my life in any way.  At least not directly, as what believers manage to impose on everybody else does have an influence on me.  Religious fervour has resulted in laws prohibiting abortion (like in Malta and Chile), traditions keeping girls out of school (like in Afghanistan), and regulations forcing restrictive dress codes on women (like in the Orthodox Jewish quarter of Tel Aviv, where women must cover their arms).  Obviously, I will never need an abortion, I have been able to attend school, and I am allowed to wear short-sleeved shirts wherever I want.  Nevertheless, these rules, often directed at women, are deeply annoying.

This distinction between atheism and agnosticism is just another way of placing people in boxes.  A more important distinction is whether people have doubts or not.  Certainties are dangerous.  Certainties make it possible for fanatics to strap belts full of explosives around their waists and blow themselves up in public places.  Throughout the whole recorded history of Humanity, certainties have caused persecutions of entire ethnic groups and the torture of millions.

In fact, I believe that certainties are responsible for most of the problems we have today.  There are too many believers and not enough scientists.

What many non-scientists have difficulties in grasping is that no scientific statement can ever be proven to be absolutely true.  For example, Newton's theory of gravitation worked flawlessly for a long time and is still used every day.  But it was discovered that it couldn't fully explain the orbit of the planet Mercury.  Einstein's theory of gravitation solved that problem and has been confirmed by countless measurements.  Does it mean that Newton was wrong?  Not at all.  It only means that Newton's theory is an approximation of general relativity or, if you prefer, that Einstein's theory can explain a wider class of phenomena and with more accuracy.  Does it mean that Einstein's theory will always be right?  Again, not at all.  It only means that, so far, it has never been proven to be at fault (although, truth be told, general relativity has not been successfully integrated with quantum mechanics; but that's another story).

Scientific statements, therefore, are a never-ending work in progress.  They can be proven wrong in some cases, but the proof of their correctness never ends.  Despite their intrinsic uncertainties, all these temporary laws of Physics can still be used to discover further laws that explain our universe.  It is a bit like crossing an infinitely wide mountain creek on wobbling stones: scientists keep stepping on the same wobbly theories and, as they progress, the older theories become more and more trustworthy; more stable paths are identified.

People who insist that Intelligent Design (ID) should be taught at school in Science classes as an alternative to Evolution by Natural Selection (ENS) can only do so because most people don't know what I have explained in the previous two paragraphs.  The ID people state that ENS is an unproven theory.  But no scientific theory is ever completely proven.  It is impossible.  The key issue is that ENS can be disproven, while ID cannot.  That is why ENS is a scientific theory and ID is not!

The same problem pops up with the hoopla about climate change, levels of CO2, and whether the changes are anthropic or not.  People ignorant in Science would like to have clear, unambiguous, and final answers, and confuse scientific results with beliefs.  But certainty has no place in Science.

My attitude towards God is scientific: if, after asking me whether I believe that a God exists (to which, as I said, I would reply no), you asked me whether I'm sure, I would have to answer with another no.  Of course I'm not sure.  How could I be?  But I don't need to introduce an "ad hoc" entity that explains everything Science cannot [yet] understand.  For centuries, the Catholic Church was a drag on Science because it wanted to cling to its revealed truth (actually, it still is).  It was (is) a problem caused by certainties (not "misplaced certainties", because all certainties are misplaced).

All so-called proofs of the existence of God that come to mind rely on negatives: all this beauty of nature cannot be the result of random events; we don't know how our universe came into existence; it cannot be that our existence has no purpose; etc.  But how can one claim to prove anything on the basis of what one doesn't know?  It is baffling.

I know little about Judaism and Islam (of which I am somewhat ashamed), but I was taught the Catholic catechism.  I strongly encourage you to have a look at it, especially if you have never done it before.  It is an amazing construction of cross-linked concepts.  I have to wonder how many so-called faithfuls actually believe much of what is in there...

As Alain de Botton convincingly explained in his book Religion for Atheists, religion has its functions and its usefulness in society.  But it should be kept in check and not overpower everything else.

Christianity might have shaped morality and laws of the western world, but I don't need a priest to tell me that to contribute to a harmonious society I should behave with others as I would like them to behave with me.  Luke's do to others as you would have them do to you (verse 6:31) is only an expression of a Golden Rule that has been recognised and applied everywhere since antiquity.

I believe that ENS has resulted in the collaborative attitude of human beings.  A typical example of such a "social" attitude is shown by how people behave when confronted with the game called "the prisoner's dilemma".  From Wikipedia (note in particular the last sentence):

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. Each prisoner is given the opportunity either to betray the other, by testifying that the other committed the crime, or to cooperate with the other by remaining silent. Here's how it goes:

  • If A and B both betray the other, each of them serves 2 years in prison
  • If A betrays B but B remains silent, A will be set free and B will serve 3 years in prison (and vice versa)
  • If A and B both remain silent, both of them will only serve 1 year in prison (on the lesser charge)
It's implied that the prisoners will have no opportunity to reward or punish their partner other than the prison sentences they get, and that their decision won't affect their reputation in future. Because betraying a partner offers a greater reward than cooperating with them, all purely rational self-interested prisoners would betray the other, and so the only possible outcome for two purely rational prisoners is for them to betray each other. The interesting part of this result is that pursuing individual reward logically leads both of the prisoners to betray, when they would get a better reward if they both cooperated. In reality, humans display a systematic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of "rational" self-interested action.
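The dominant-strategy reasoning in the quote can be checked mechanically. Here is a small sketch using the sentences listed above, showing that betraying is the better choice for A no matter what B does:

```python
# Years in prison for (A, B), indexed by their choices,
# exactly as in the bullet list above.
SENTENCE = {
    ("betray", "betray"): (2, 2),
    ("betray", "silent"): (0, 3),
    ("silent", "betray"): (3, 0),
    ("silent", "silent"): (1, 1),
}

def best_reply(b_choice):
    """A's choice that minimises A's own sentence, given B's choice."""
    return min(("betray", "silent"), key=lambda a: SENTENCE[(a, b_choice)][0])

# Whatever B does, A is better off betraying (and symmetrically for B),
# even though mutual silence (1, 1) beats mutual betrayal (2, 2).
for b in ("betray", "silent"):
    print(f"if B chooses {b!r:9}, A's best reply is {best_reply(b)!r}")
```

Of course, as the last sentence of the quote says, real people cooperate far more often than this "rational" analysis predicts.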

It makes sense to speak of rules of ethics applicable to everyone, but, except for predispositions resulting from ENS, they ought to be based on rationality, with the aim of maximising our collective well-being, not allegedly inspired by a God invented to comfort us.  There is no need for a God to explain the validity of moral codes.

For millennia, religions played an important role in constraining some of human emotions that, if uncontrolled, would have resulted in chaos.  But, at the same time, religions also exploited those same emotions for their own purposes of expansion and control.  I say: let's get rid of them!

We must invest as much as possible in education, so that a secular, conscious morality will eventually replace the rules imposed by superstition, regardless of whether it is called witchcraft or religion.  One day, with the help of Science, we will be able to control our destructive emotions rationally, while still enjoying the positive ones.  Only then, we will have left behind the caves of our ancestors and be ready to explore the universe.

Wednesday, October 30, 2013

Authors' Mistakes #24 - CSI Miami (Marc Dube)

I confess: I am a fan of CSI Miami.  I don't like the CSI series set in Las Vegas and New York.  But the Miami series is bathed in warm colours and shows beautiful scenery.  I know that it was actually filmed in California and that the warm feel was obtained by saturating the colours, but who cares?  I also find the characters reasonably appealing.

Anyhow, last night I discovered a mistake in episode 16 of season 7 (Sink or swim).  I am not referring to the many licences that the authors take with the way CSI people operate in real life.  I understand that if our fictitious CSIs were confined to the labs and spent days analysing a sample, the stories would evaporate.  But what they did in Sink or swim violated the laws of Physics!



Here goes.
An assassin kills a lady standing at the railing of a yacht by shooting her from underwater.
Do you see any problem with that?

There is one: when a ray of light crosses the boundary between water and air, it changes direction.  This phenomenon is called refraction (see for example the refraction page on Wikipedia).  Here is a nice diagram (also from Wikipedia) to describe it:


Suppose that the top side is air (with refractive index n1 = 1) and the bottom side is water (with refractive index n2 = 1.33).  Snell's law tells you that sin(θ2) = sin(θ1) * n1 / n2.  This means that light entering the water with an angle of, say, 30°, is deflected to approximately 22°.  As a result of the deflection, the underwater shooter of CSI Miami saw his target 8° higher than it was.  With a target placed, say, 5 metres above the water, 8° roughly corresponds to more than 80 cm.  Enough to shoot above the target's head instead of hitting her heart.  The effect increases when the angle increases.  So, for example, with θ1 = 45°, θ2 becomes 32°, which is 13° less than θ1.

Obviously, the positions of both the target and the shooter also play a crucial role.  For example, the 80 cm of the previous calculation are reduced to 49 cm if the target is only 3 m above the water instead of 5.
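For those who want to check the numbers, here is a sketch of the calculation. Snell's law gives the underwater angles directly; the miss distances come from one simplified geometry (aiming along the apparent direction and seeing where that line passes at the target's height), which is my own reconstruction and gives figures close to those above:

```python
import math

N_AIR, N_WATER = 1.0, 1.33  # refractive indices of air and water

def refracted_angle(theta_air_deg):
    """Underwater angle (from the vertical) of a ray entering at theta_air_deg."""
    s = math.sin(math.radians(theta_air_deg)) * N_AIR / N_WATER
    return math.degrees(math.asin(s))

def miss_metres(theta_air_deg, height_m):
    """Vertical-plane miss if the shooter aims along the apparent direction."""
    t_true = math.tan(math.radians(theta_air_deg))
    t_seen = math.tan(math.radians(refracted_angle(theta_air_deg)))
    return height_m * (t_true - t_seen)

print(refracted_angle(30.0))   # ~22 degrees
print(refracted_angle(45.0))   # ~32 degrees
print(miss_metres(30.0, 5.0))  # ~0.86 m for a target 5 m above the water
print(miss_metres(30.0, 3.0))  # ~0.51 m for a target 3 m above the water
```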

Now, refraction has no impact if the shooter is directly below the target, because both angles become zero.  But this is not what happened in CSI Miami, as the shooter had to be somewhat away from the boat in order to clearly see his target.

All in all, there is no way that the shooter could have made the kill.

Funnily enough, the Archerfish manages to hit insects one or two metres above the water by spitting at them from underwater.  A thorough study of that fish was published by Lawrence M. Dill in 1977 (Refraction and the Spitting Behavior of the Archerfish (Toxotes chatareus), Behavioral Ecology and Sociobiology, 2, 169-184).

For your reference, here are the links to all past “Authors’ Mistakes” articles:
Lee Child: Die Trying
Colin Forbes: Double Jeopardy
Akiva Goldsman: Lost in Space
Vince Flynn: Extreme Measures
Máire Messenger Davies & Nick Mosdell: Practical Research Methods for Media and Cultural Studies
Michael Crichton & Richard Preston: Micro
Lee Child: The Visitor
Graham Tattersall: Geekspeak
Graham Tattersall: Geekspeak (addendum)
Donna Leon: A Noble Radiance
007 Tomorrow Never Dies
Vince Flynn: American Assassin
Brian Greene: The Fabric of the Cosmos
John Stack: Master of Rome
Dean Crawford: Apocalypse
Daniel Silva: The Fallen Angel
Tom Clancy: Locked On
Peter David: After Earth
Douglas Preston: Impact
Brian Christian: The Most Human Human
Donna Leon: Fatal Remedies
Sidney Sheldon: Tell Me Your Dreams
David Baldacci: Zero Day
Sidney Sheldon: The Doomsday Conspiracy

Wednesday, July 10, 2013

Authors' Mistakes #18 - Douglas Preston


Douglas Preston and Lincoln Child have written together more than a dozen very good thrillers. But they have also authored books on their own. I just finished reading Impact, by Douglas Preston, a gripping Science Fiction story.


I found the story very good and free from the small typos and mistakes that so often mar paperbacks.

But, unfortunately, Preston made a huge mistake that actually invalidated the whole story. I know, “suspension of disbelief” and all that, but this mistake also causes a completely unacceptable inconsistency within the story.

WARNING: Spoiler. In the rest of this article, I’m going to reveal some key elements of the plot and hint at how it ends.

The premise of the whole story is that an alien race, around 100 million years ago, placed an intelligent machine inside the Voltaire crater on Deimos, the smaller of the two moons orbiting Mars. Awakened by an exploration probe, the alien AI sends to Earth a sort-of asteroid entirely made of strange matter. It then sends another, bigger, asteroid, also entirely made of strange matter, to the Moon, almost destroying it.

The first asteroid reaches Earth on April 14. Its speed is measured at 48 km/s (page 10). When, days later, the second asteroid hits the Moon with devastating results, the US top military brass wants to nuke the machine and be done with it. But they are told that it would take at best nine months for a space mission to reach Mars, and, in any case, the next window of opportunity for a Mars launch would be almost two years off (page 442).

On page 439, the US president is told that “the Deimos Machine can’t fire unless Voltaire crater is oriented toward the Earth. And since it’s a deep crater, the orientation has to be fairly close. [...] It was aligned in April [...]. The next alignment was tonight. You saw what happened to the Moon”. When the president asks “When’s the next alignment?”, the reply is “Three days from now”.

Do you see the mistake?

No?

Think about it: Earth and the Moon were struck on the same nights when the crater was aligned, first in April and then less than a day before the meeting described on page 439. But how could that be? Strange matter or not, an asteroid travelling at 48 km/s takes at least three months to travel from Mars to Earth. Actually longer, when considering that the asteroid’s speed must have been highest when it was measured on Earth, because it was moving toward its perihelion.

How could an asteroid possibly reach Earth shortly after leaving Mars? Preston could have had the president ask that question. Then, a scientist could have said something like “The Deimos Machine must be able to operate some sort of teleportation mechanism. Perhaps it can open a wormhole and send the asteroids through it. After all, these aliens can travel between the stars. That’s probably why we didn’t detect the large asteroid that hit the Moon”.

But that would not have worked either, because on page 463 (the second-to-last of the novel) one of the main characters says “Last week, one of the satellites in place around Deimos by chance intercepted a powerful burst of radio noise from the artifact. Evidently a communication of sort”. In other words, the machine didn’t use wormholes or other fancy stuff to send a message to its constructors. It only used a burst of radio waves. And if the machine doesn’t have any “subspace-like” capability of sending information, it doesn’t make sense to hypothesise that it has it for an asteroid.

Then, we can only conclude that Preston just screwed up.

And obviously, without a “magic” quasi-instantaneous travel from Mars to Earth, the story becomes impossible. By the time the first asteroid hit Earth, there might have been dozens of them on their way. It would have been too late for one of the protagonists to stop the machine and save the planet.


Sunday, June 30, 2013

Authors' Mistakes #17 - Peter David


As far as I know, Peter David has written 95 novels, mostly SF, and won 10 awards. I know him from his Star Trek books, of which, between 1997 and 2002, I read 19. I am now reading After Earth, the novelisation of the recent film with Will Smith, and discovered in it an appalling (and certainly unexpected) mistake.


On page 101, he wrote: A parsec, she recalled, was a measure for the speed of light, how far it would travel over one hundred years.

The sentence is at best awkward. What does it mean “A parsec [...] was a measure for the speed of light” when in fact a parsec is a measure of distance, as David writes in the next sentence?

But the problem is that a parsec is only 3.26 light years, not 100!

If you draw two lines from a point in interstellar space, one passing through the sun and one passing through Earth, when the amplitude of the angle between the two lines is one second of arc, that point’s distance is by definition 1 parsec. The “par” in parsec stands for “parallax” and the “sec” for “second”.

Imagine making two observations of a star six months apart. During the six months, Earth will have moved through half of its orbit. As a result, you will have to point the telescope in two slightly different directions. If you know the radius of Earth’s orbit, you can use the angle between the two directions to calculate the star’s distance.

Here is how you do it.

Earth’s distance from the sun (i.e., the radius of Earth’s orbit) is approximately 150 million km = 1.5 x 10^8 km.

A second of arc is 1/3600 of a degree, and there are 360 degrees in a full circle, whose circumference is 2π times its radius R. This means that the length of a second of arc is 2π x R / 360 / 3600 = R x 4.85 x 10^-6.

If a star has a parallax angle of two seconds (not one second, because the two lines of view pass through opposite points of Earth’s orbit, rather than one through Earth and one through the sun), to calculate its distance in kilometres you only need to imagine a circle of radius D centred on the star and passing through the sun. Then, the diameter of Earth’s orbit is given by:

1.5 x 10^8 km x 2 = 2 x D x 4.85 x 10^-6

Solving for D gives 1.5 x 10^8 km / 4.85 x 10^-6 = ~3.1 x 10^13 km.

As the speed of light is 300,000 km/s = 3 x 10^5 km/s, 1 light year is 3 x 10^5 km/s x 3600 s/h x 24 h/d x 365 d/y = ~9.46 x 10^12 km (actually, light is 0.07% slower, but the year is 0.07% longer, so it works out just fine! :-).

Then, 1 pc = ~3.1 x 10^13 km / (9.46 x 10^12 km/ly) = 3.28 ly. Close enough, considering the approximations.
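The whole calculation fits in a few lines. Here is a sketch using the same rounded constants as above:

```python
import math

AU_KM = 1.5e8                          # Earth-sun distance, km (approx.)
ARCSEC_RAD = 2 * math.pi / 360 / 3600  # one second of arc, in radians

# Distance at which 1 AU subtends one second of arc:
parsec_km = AU_KM / ARCSEC_RAD
print(f"1 pc = {parsec_km:.3g} km")      # about 3.1e13 km

# A light year, from the rounded speed of light and a 365-day year:
light_year_km = 3e5 * 3600 * 24 * 365
print(f"1 ly = {light_year_km:.3g} km")  # about 9.46e12 km

print(f"1 pc = {parsec_km / light_year_km:.2f} ly")  # about 3.27 ly
```

The small difference from the 3.28 above comes from rounding 3.09 x 10^13 up to 3.1 x 10^13; the exact value is 3.26 ly.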

There is no star at 1 pc from Earth, but Proxima Centauri, the closest star to Earth, is 1.3 pc away.

Thursday, May 16, 2013

Authors' Mistakes #14 - Dean Crawford


If you care about Science (capital initial intentional) do not buy Apocalypse by Dean Crawford.


People who don’t understand science shouldn’t write about it. After reading 146 of the 553 pages of the novel, I gave up.

The first problem I encountered was on page 86. Crawford writes: “a young Air Force ensign”. He should have known that “ensign” is a rank exclusively used in the Navy. OK. It has nothing to do with science, but it was annoying nonetheless.

It is when Crawford starts writing about science that he really gets on my nerves. On page 95, he writes:

If an object starts moving at high velocity, then time begins to run more slowly compared to another object that remains stationary. The discrepancy was predicted by Einstein in his Theory of General Relativity.

The first sentence, although not entirely rigorous (and not written in the best English), is acceptable in a novel. But “General Relativity” is wrong, as it is “Special Relativity” that explains time dilation when objects move fast.

Then, on page 96, Crawford claims:

Mercury orbits very close to the sun and always seemed to appear slightly out of place. It turned out that the sun’s mass curved the light reflected from Mercury’s surface when seen from the earth, making it appear in a different place to where it actually was.

Wrong. Even ignoring the mixed-up tenses, Crawford’s statement is incorrect. The anomaly in Mercury’s orbit that Newtonian Physics failed to correctly predict is the perihelion precession (i.e., how fast the point of the orbit closest to the sun moves). This is a real effect, not something that can be explained away with curved light paths.

One page later, on page 97, Crawford makes another blunder. After explaining that the presence of a large gravitational field has a dilation effect on time similar to that caused by high speeds, he goes on to say:

Sergey Avdeyev [...] orbited the earth almost twelve thousand times over 750 days whilst aboard the Mir space station. At such velocity, and farther from the mass of the earth than those of us on the ground, the time dilation he experienced sent him 0.02 seconds into the future, because time passed slower for him than for the rest of us.

Wait a minute! If the cosmonaut was subjected to a lower gravitational force, the resulting effect was to reduce the time dilation caused by the earth, not to increase it. Therefore, “despite being farther” would have been correct, not “and farther”.
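To see that both the quoted 0.02 seconds and the sign of the gravitational correction work out, here is a rough back-of-envelope sketch. The circular orbit at about 390 km altitude is my assumption (roughly Mir's), not a figure from the novel:

```python
import math

C = 2.998e8        # speed of light, m/s
GM = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # mean Earth radius, m

r = R_EARTH + 390e3    # assumed orbital radius, m
v = math.sqrt(GM / r)  # circular orbital speed, about 7.7 km/s
t = 750 * 86400        # 750 days, in seconds

# Special-relativistic slowdown due to speed, minus the general-relativistic
# speedup due to weaker gravity aloft (which makes the orbiting clock FASTER):
velocity_term = v**2 / (2 * C**2) * t
gravity_term = GM * (1 / R_EARTH - 1 / r) / C**2 * t

print(f"velocity effect: -{velocity_term * 1e3:.1f} ms")
print(f"gravity effect:  +{gravity_term * 1e3:.1f} ms")
print(f"net:             -{(velocity_term - gravity_term) * 1e3:.1f} ms")
```

The net result is about -19 ms, consistent with the 0.02 seconds Crawford quotes, and the gravity term indeed reduces the dilation instead of adding to it.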

Incidentally, the author also shows his poor command of English by inserting a comma between “velocity” and “and farther”.

Crawford proves beyond any doubt that he has not understood Relativity when, on page 113, one of the characters explains what a scientist had thought:

his idea was to place some kind of camera aboard a spaceship and send it into orbit around the sun for long periods of time at a very high velocity. [...] The ship would then return to earth [...] the high velocities and close presence of the sun’s immense mass would allow the cameras [wasn’t it singular at the beginning of the paragraph?] to peek into earth’s future, just by a few minutes.

Baloney! If the ship’s time slows down, it means that it will fall behind earth-based clocks. That’s all.

What made me stop reading the novel was the explanation given by Crawford of a machine capable of filming the future (chapter 22). According to Crawford, you can peek into the future if you hold a camera very close to a black hole and point it towards a TV set located further away from the black hole. The camera will film future news shown on the TV set.

This is complete nonsense.

There are also other misconceptions, like the following one, expressed on page 156:

jets of steam hissed and enveloped the entire device in thick water vapor [...] A precautionary measure, to wash away any particles irradiated by the immense energy within the chamber.

“Irradiated by the energy”? Give me a break! And again, a misplaced comma (after “measure”).

In case you are wondering about the fact that at the beginning of this article I claimed to have read 146 pages while the last quotation refers to page 156, it is because I skipped chapter 21 in order to read the description of the “time machine”.

Crawford appended to the novel an Author’s Note where he claims: “all of the science within my novels is real, [his italics] but some of it is stretched to embrace the extreme events that are part and parcel of thriller fiction”. Clearly, he hasn’t simply stretched the science. He has broken it in a bad way. In the same Author’s Note, Crawford also states:

If one were able to stand alongside the event horizon of a sufficiently massive black hole, then time would indeed be dilated in the manner described.

He really hasn’t understood Relativity. And, what’s worse, his book got published by Simon & Schuster and sold well, otherwise it would not have been printed in Australia. How many thousands of people read it and were misled by Crawford’s bad science?

It makes me angry that ignorance, once more, has prevailed.

In any case, Crawford’s prose is also not satisfying. His writing is flat and banal with the pretence of being interesting or educational. He writes sentences like “The warbled tones of a despatch officer replied to his question across the radio waves.” (page 1). “Across the radio waves”? Please!

Monday, April 29, 2013

Authors' Mistakes #12 - Brian Greene


The Fabric of the Cosmos, by Brian Greene, is a very enjoyable book about the Physics of the universe.


The author, Professor of Physics and Mathematics at Columbia University, knows a lot about Physics and also knows how to explain it. The first 75 pages have been a pleasure to read and I am looking forward to the remaining 450.

But – dare I say? – I believe I have spotted a mistake in what he wrote.
On page 73, Greene says:

Things that are more massive and less distant exert a greater gravitational influence, but the gravitational field you feel represents the combined influence of the matter that’s out there.20

This makes sense, because the gravitational force, which is proportional to the amount of matter, is inversely proportional to the square of the distance. Therefore, while large and close bodies exert a lot of pull, the influence only goes down to zero at infinity (i.e., nowhere).

But then, in endnote 20 of that chapter, he explains:

One qualification here is that objects which are so distant that there hasn’t been enough time since the beginning of the universe for their light – or gravitational influence – to yet reach us have no impact on the gravity field.

WAIT A MINUTE! Where would such matter come from? All the matter of the universe comes from the Big Bang and got to where it is now by moving away from a single point at speeds lower than the speed of light (obviously). Then, light and gravitational influence have certainly had enough time to reach us. Or not?

To put it in a different way, if the radius of the universe is given by c times T, where c is the speed of light and T is the age of the universe, there cannot be matter outside that radius.

I find it hard to believe that Greene made such a mistake. And yet, I don’t see any fault in my reasoning. If you do, please explain it to me!

I am adding the following part on 2014-04-09.

I was wrong. But I believe that Greene was still not right.

Imagine that we live in a mono-dimensional but limited universe. You can visualise it as an expanding circle that started with radius zero at the time of the Big Bang. All points on the circumference move away from each other. We are on a point of the circumference and no point on the circle is in any way different from any of the others. Normally, to explain the expansion of the universe, the two-dimensional model is used, in which the universe is the surface of an expanding balloon. I prefer to use a circle because it is easier and we can understand what happens without unnecessary additional dimensions.

As we explore the universe with ever more powerful telescopes, we look at other points on the circumference that are further and further away from us. This means that, like in our real three-dimensional universe (if you believe that it is, indeed, real ;-), we see events that occurred deeper and deeper in the past. Note that the light moves on the circle, as that is the only dimension that universe has, exactly as in our real universe the light moves through three-dimensional space.

Now, as the universe expands, the radius of the circle grows. The light of a distant galaxy is red-shifted because of the expansion of the circumference, which moves that galaxy away from us. That is, the red-shift depends on how quickly the location of the galaxy and our location come apart, not directly on the fact that the radius grows.

If the radius has grown since the Big Bang with average speed ‘S’ (regardless of the units used to measure the variation of distance over time), its length is now ST (by the definition of average speed!), where T is the age of the universe (or 13.8 Gy). The length of the circumference is therefore 2πST. Now, how long does it take the light emitted from the farthest point (i.e., diametrically opposite to us or, more dramatically said, from the other end of the universe) to reach us? Obviously, πST/c.

Now the key question is: how does πST/c compare with T?

If πST/c > T or, more simply, S > c/π, there are parts of the universe that we cannot possibly see regardless of how good our telescopes are, because the light from those parts takes longer than the age of the universe to reach us.
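The toy model’s visibility condition is easy to play with numerically. Here is a minimal sketch (the function names are mine), in units where c = 1 and time is measured in Gy:

```python
import math

C = 1.0  # speed of light in our units

def farthest_light_travel_time(S, T):
    """Time needed by light from the diametrically opposite point of the
    circle to reach us: half the circumference, pi*S*T, divided by c."""
    return math.pi * S * T / C

def whole_universe_visible(S, T):
    """We can see the whole circle only if that light needs no more than
    the age of the universe, i.e. only if S <= c/pi."""
    return farthest_light_travel_time(S, T) <= T

print(whole_universe_visible(0.2, 13.8))  # True: 0.2 < 1/pi
print(whole_universe_visible(0.5, 13.8))  # False: 0.5 > 1/pi
```

The values 0.2 and 0.5 are arbitrary; the point is only that the answer flips when S crosses c/π.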

In fact, if there are such parts, we will never be able to see them, because the rate of expansion of the universe is increasing (for which Perlmutter, Schmidt, and Riess won the Nobel prize in Physics in 2011). Therefore, S (being the average rate of expansion) is also increasing, and will remain greater than c/π.

So, in fact, when Greene states that there exist objects which are so distant that there hasn’t been enough time since the beginning of the universe for their light – or gravitational influence – to yet reach us, he is correct (and I was wrong). But only if S > c/π, because if the inequality does not hold, we can see the whole universe and I was right. That said, even if S > c/π, Greene is still partially wrong because he says yet. As the universe expands faster and faster, if the light from certain objects now requires longer than T to reach us, it will always require longer than T.

Am I wrong in some other way? :-)

And what is the value of S, the average rate of expansion of the universe since the Big Bang? I have to think about that...

Monday, February 25, 2013

Creativity and Research in Academia

I have been reading Sue North’s PhD thesis titled Relations of Power and Competing Knowledges Within the Academy: Creative Writing as Research (University of Canberra, 2004). In the conclusion of Chapter 2, The Conflict of the Faculties, she says:

The doxa of creative work and the doxa of research arise from different epistemological underpinnings – creativity from the unexplainable force of the imagination, and research from the logical force of understanding. [my links]

In simple terms, she said that it is common knowledge that creativity is a manifestation of imagination, while research is a process based on studying, understanding, and logical thinking. By using the word doxa, she tells us that these beliefs are so widely accepted that they don’t even need to be expressed. In other words, everybody considers them to be true.

I agree with her: most people think that way. But I believe that they do so because they don’t have a clear understanding of how people tap their creativity and what it means to do research.

The concepts that artists pull their creations out of thin air and that researchers only exercise logic are both wrong.

Let’s look at artists first. They couldn’t create anything without studying the world around them, the work of other artists, and the tools they need to do their work. Ideas don’t just spring out of the mind of an artist fully clothed and armed, like Athena out of Zeus’s head.

As an example, consider what writing a novel involves.

The author needs to invent characters and design a plot for them, so that they can interact with each other. Some authors start with a plot and others start with the characters, but, in either case, they must ensure that those two elements are consistent, credible (and interesting). This can only come after years of observing how people interact and trying to understand what motivates them.

And then, authors cannot write their novels with any hope of success unless they know their craft: structure, voice, pace, dialogue, to name some aspects of it. This means that they need to learn techniques, read a lot, and write a lot.

Creative writing is the Cinderella of Academia. This state of affairs reflects the almost universal opinion that, because everybody can write, studying creative writing is a trivial activity pursued by people who want to have it easy at the University.

It is only when people actually try to write something worth reading that they realise how little they know and how much they need to work in order to get any recognition.

Furthermore, to write a novel an author needs logic, discipline, and rigour, otherwise his/her four hundred pages will be full of inconsistencies, loose threads, and untruths.

In order to create realistic characters placed in a realistic environment and doing realistic actions, an author needs to do a lot of research. Most (I would say all, but I don’t want to be so absolute) authors define their characters and their plots to a level of detail that remains below the surface of the finished product. What ends up into the novel is only the tip of the iceberg.

Authors also perform another activity typical of research: experimentation. This can be in the content, the dialogue, or the form. For example, Peter Carey, in his historical novel True History of the Kelly Gang, doesn’t use a single comma, while Alessandro Baricco, in his short novel Silk, uses line breaks to control the pace and convey meaning. And if this seems too literary and abstract, how much research do you think Frank Herbert had to do in order to create the universe in which his Dune stories take place?

I’m talking about creative writing because I had a couple of non-fiction books and a couple of Science Fiction stories published. Therefore, I can talk about it with some credibility. But I’m sure that equivalent concepts apply to other creative activities.

I hope I have convinced you that creative work couldn’t exist without logical thinking, knowledge, and research. If not, think again.

Now, let’s look at research. I shall go out on a limb here and say that without imagination any type of research would be impossible.

It is standard practice in academic papers to present the results of research in a logical fashion: you write about existing results, identify a gap, and explain how your results close it. There is more to it but, in essence, research papers are logical to the core. This is especially true in Physics, the prototypical scientific discipline.

But this is not how research actually works. Most neatly presented conclusions are in reality the result of hunches, leaps, backtracking, crises, and serendipitous events (i.e., strokes of luck). Research works somewhat like solving a jigsaw puzzle: you start from the edges and the pieces you can easily recognise. Then, you fill the gaps to complete the picture. Sometimes, you feel that a piece might be right in a certain spot and place it there, hoping to have it confirmed later. But both academic papers for the scientific community and magazine articles for the rest of us present the research process as if the researchers had started the puzzle from the top-left corner and systematically worked their way to the bottom-right piece.

The key point I’m trying to express is that logic cannot add knowledge. It can be used to extract information that for any reason is still hidden in the data and, sometimes, this leads to surprising and useful results, which in turn can trigger new avenues of research. But, ultimately, truly new discoveries occur when a researcher follows a hunch and jumps over a gap in the logic. This is what Edward de Bono calls Lateral Thinking. And what about the creativity that any experimental researcher needs in order to overcome the many technical (and non-technical) problems s/he encounters daily?

In other words, imagination plays an essential role in the progress of Science and Technology. The logical chain of thought is often reconstructed after the discovery has been made or a working solution found. When a scientist gets enamoured with an idea and invents an experiment to verify it, s/he will not necessarily tell you.

Think of Einstein and his special theory of relativity. He postulated the constancy of the speed of light. He certainly didn’t deduce it logically.

To conclude, successful creative endeavours require study, understanding, and logic, while research produces its best results thanks to insights, imagination, and dedication.

So, please, let’s stop perpetuating these stereotypes of wild artists and white-coated scientists!

Friday, December 28, 2012

Authors' Mistakes #8 - Graham Tattersall (addendum)

I’ve given up reading Geekspeak. It is too stupid.

In four sentences that appear towards the end of Chapter 11 (pages 103 and 104), Tattersall refers three times to the bomb dropped on Hiroshima as an atomic bomb, and three times to modern multi-megaton bombs as nuclear bombs. He must think that there is a difference between an atomic bomb and a nuclear bomb!

So, here is somebody with a PhD (probably in Math) who doesn’t know that the Hiroshima bomb was a nuclear bomb.

The front flap of the book jacket says: Dr. Graham Tattersall, a confirmed and superior geek, and, later: Math has a new champion, and the Geeks a new King.

BS, I’d say.

Authors' Mistakes #8 - Graham Tattersall

Geekspeak, by Graham Tattersall, is a humorous little hardcover book containing a collection of essays about how to calculate odd things.


Tattersall, in his attempts at making complex calculations easy, cuts too many corners. For the sake of simplicity, he says things that are not right.

For example, at the beginning of Chapter 8, when estimating the number of people who die every year in the UK, he states:

You can estimate the figure quite easily from the average lifespan in this country, which is roughly 75 years — and rising. Imagine for a moment that all births in Britain stopped today, that from now onwards people die off and aren’t replaced.
Each year more die, until, after the time of the average lifespan of 75 years, there will be very few of today’s 60 million people left. So, if ages are evenly spread [my observation: a dubious assumption, but let’s let it pass], an average of 60 million divided by 75 people will die every year. That’s about 800,000 per year.

He then comments that the actual figure is 500,000 and that the discrepancy can be explained by the fact that our lifespan is growing.

Perhaps. But there is a problem with his estimate due to his erroneous use of statistics: you cannot state that the ages of death are “evenly spread” between 0 and 75 while, at the same time, claiming that “very few” live beyond the average. What sort of average is it if very few people live longer?

Tattersall’s assumption that very few people live longer than average is not only conceptually flawed, but also factually wrong, because it turns out that many people happily survive the average age of death. I looked at the US statistics and discovered that almost 60% of people live longer than average.

As an aside, you might wonder how it is possible that more than half survive the average age of death. It seems unreasonable. The reason is that the distribution is highly skewed. I will explain it with an example that, although not completely realistic, proves that for steeply decreasing distributions the number of points above the average can exceed the number of points below the average: Assume that you have a population of 100 people and that 45 of them live 60 years, 35 live 70 years, and 20 live 75 years. The average age of death is (45 x 60 + 35 x 70 + 20 x 75) / 100 = 66.5, with 55 people above average and only 45 below average. Convinced now?
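A few lines of code confirm the arithmetic (using the 45/35/20 split that produces the 66.5 average):

```python
# toy population: 45 people die at 60, 35 at 70, 20 at 75
ages = [60] * 45 + [70] * 35 + [75] * 20

average = sum(ages) / len(ages)
above = sum(1 for age in ages if age > average)
below = sum(1 for age in ages if age < average)

print(average, above, below)  # 66.5 55 45
```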

Another example of his “pseudo-scientific” but confusingly imprecise presentations is in the small “Speak Geek” section at the end of Chapter 5. He states that A man who weighs 100kg at the North Pole would weigh only 99.65kg at the equator.

This is roughly correct (it’s actually close to 99.67kg). It is due to the fact that the surface of Earth, which spins at the rate of about 40,000km every 24 hours, is not an inertial system. We are kept in place by the presence of gravity. Otherwise, we would keep moving in a straight line along the tangent. From our local point of view, this appears to us as a force directed away from the axis of the Earth. This is the same apparent force that pushes us away from the centre of a car when we take a curve.
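Here is a back-of-the-envelope check of that number (a sketch that ignores the Earth’s oblateness and uses round values, which is why it lands near 99.65 rather than 99.67):

```python
import math

g = 9.81          # m/s^2, gravitational acceleration at the surface
R = 6.378e6       # m, equatorial radius of the Earth
day = 86164.1     # s, sidereal day (one full rotation of the Earth)

omega = 2 * math.pi / day        # angular speed of the rotation
a_c = omega ** 2 * R             # centrifugal acceleration at the equator
weight = 100 * (1 - a_c / g)     # scale reading for a 100kg man

print(f"{a_c:.4f} m/s^2 -> {weight:.2f} kg")  # 0.0339 m/s^2 -> 99.65 kg
```

The exact figure depends on which radius and which value of g you plug in, but the order of magnitude is unmistakable.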

So far so good. Tattersall calls the centrifugal force an “upward ‘flinging’ force”, and that is where I have a problem. After mentioning gravitation, he explains the centrifugal force as follows: The second force is upwards, and is caused by the rotation of the Earth constantly trying to fling your body into space like a stone on a string being swung round your head.

This is misleading in too many ways to be acceptable. First of all, Earth doesn’t try to fling us anywhere. Secondly, the slingshot works because we exert a pull on it off-centre. Therefore, to compare Earth to a slingshot is not right.

You could dismiss my objections by saying that he explains things in a colourful way and that I am just being my usual hair-splitting and boring self. But when he says that the centrifugal force is upward, he is plainly wrong. The centrifugal force is outward. It happens to be upward at the equator, but to state it in general is badly misleading.

Finally, I would like to report a horrible mistake in Chapter 7. He starts it with the following two sentences: Captain Picard has been sorting out a spot of bother on some planet or other, while the Enterprise orbits at a safe distance. A radio command is sent to the ship: ‘Beam me up, Scotty.’

How can he not know that Scotty was the chief engineer under Captain Kirk? What a shame! And he calls himself a geek? It is painful.

I usually report problems in a book after reading it. In this case, I couldn’t hold back after reading about a third of it. I will probably have to report more...

Thursday, December 20, 2012

Authors' Mistakes #6 - Michael Crichton & Richard Preston

I just read Micro, the novel left unfinished by Michael Crichton when he died and completed by Richard Preston.


Crichton is a great author and I enjoyed several of his novels, especially Timeline, Airframe, and Jurassic Park. Micro is also a very entertaining story, but this time, in my opinion, he went too far with his scientific (or not) speculations.

That an extremely high magnetic field can shrink objects is difficult to take, but what happens when objects are shrunk violates the basic laws of Physics in an unacceptable way.

According to Crichton and Preston, when people are compressed, they become very light and strong. This might make for a nice story, with people jumping around like fleas and falling from great heights without getting hurt, but it is not scientifically credible.

For one thing, where does the mass go? The principle of conservation of mass-energy has been proven correct countless times. Even if we accept that the space between the molecules and atoms (and perhaps subatomic particles?) is reduced, why should the mass of a person be reduced as well? It doesn’t make any sense.

Then, there is the issue of body temperature and thermoregulation. Crichton and Preston mention that shrunken humans have some problems with maintaining their body temperature, but it is not an issue that one can dismiss easily.

If you reduce the linear dimensions of something by a factor of 1000 while maintaining its shape, its volume is reduced to one billionth of the original and its surface to one millionth. This means that the surface per unit of volume would grow by a factor of 1000. There is no way the human body could function under those conditions. It would lose all its heat and die.
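The square-cube argument in numbers (a trivial sketch):

```python
scale = 1e-3                      # linear dimensions shrink by a factor of 1000

volume_factor = scale ** 3        # volume shrinks to one billionth
surface_factor = scale ** 2       # surface shrinks to one millionth

# surface area per unit of volume grows by a factor of 1000
ratio_growth = surface_factor / volume_factor
print(ratio_growth)
```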

Finally, the authors don’t even try to explain how the shrinking process can be reversed. To shrink people to 1/1000 of their original size, they apply a strong magnetic field three times. Each time, the people shrink by a factor of ten. Then, without any explanation, when they apply a magnetic field yet again, the people return to their original sizes. This goes against logic. What is it? Third time’s a charm? Give me a break...

Tuesday, December 18, 2012

Authors' Mistakes #5 - Academic textbook on research methods

I have been reading Practical Research Methods for Media and Cultural Studies: Making People Count, by Máire Messenger Davies & Nick Mosdell, Edinburgh University Press 2006, ISBN 978-0-7486-2185-9.


On page 62, to explain methods for random sampling, the authors describe how to make a sample of 50 students from a population of 150.

When they explain the stratified method for random sampling, they say: your population list may be divided into two lists of seventy-five males and seventy-five females and you sample each list randomly until you have twenty-five of each.

The statement that out of a total population of 150 students the genders are equally split is in general not correct. You might think that it doesn’t really matter that the two lists don’t have exactly the same length, and that the method remains valid. But this is not the case, because if there are, say, 90 males and 60 females, a sample built with 25 males and 25 females would obviously not represent the population.
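To keep the sample representative when the strata have different sizes, the allocation should be proportional to the stratum sizes. A quick sketch (the function is mine), using the hypothetical 90/60 split from above:

```python
def proportional_allocation(strata, sample_size):
    """Allocate a sample across strata in proportion to stratum sizes.
    (Rounds to the nearest integer; a production version would also
    repair rounding errors so the parts sum to sample_size.)"""
    total = sum(strata.values())
    return {name: round(sample_size * size / total)
            for name, size in strata.items()}

print(proportional_allocation({"male": 90, "female": 60}, 50))
# {'male': 30, 'female': 20}
```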

Now, rather than studying a sample that represents the whole population, you might like to investigate differences between male and female students, regardless of how many of each gender are present in the population. Then, it would make sense to do the split and pick equal numbers of students from the two lists.

But the way they put it, they are definitely wrong. And they keep doing it. Here is how the text continues: You then subdivide the two lists into age groups to ensure you have sufficient numbers of, say under- and over-thirties (assuming that age is relevant to your research question). As you can see, if you are going to subdivide your sample into particular demographic subgroups, your subgroups will become smaller and smaller the more categories you include, until the sample size for these subgroups cannot be seen as reliably representative.

So far so good, but now comes the blunder: For instance twenty-five females, split equally into under- and over- thirties will give you only twelve or thirteen people in each group.

How can you split 25 students equally on the basis of age (if you define the discriminating age in advance)? It’s plainly wrong.

And they go deeper and deeper in their nonsense: If you want to look at four age categories, you will get only six or so people in each age and gender subgroup.

You might think that with a couple of “on average” added in the crucial places, everything would make sense. But that is not the case, because their explanation would still imply that the age distribution of students is flat, which it is not.

It’s sad to see such mistakes in academic textbooks...

Tuesday, November 20, 2012

Misunderstood Science #2 - Another question of probability

In this article, I want to describe a problem that illustrates how to correctly estimate probability in a way that will surprise the non-statistician.

Here is the problem: An American friend of yours has two children. You know that one of them is a girl, but cannot remember the gender of the other child. What is the probability that they are both girls?

Many people would equate the lack of information concerning the gender of the other child with equal probability of the two possible genders, and answer 50%.

Their reasoning would be completely wrong but, as it turns out, their answer would be correct, at least in practical terms. If you read and understood my previous article on probabilities http://giuliozambon.blogspot.com.au/2012/11/misunderstood-science-question-of.html, you might know why. But let’s proceed in order.

For the genders of two children, there are four possibilities: MM, MF, FM, and FF, which represent the sample space of the problem. If we assume that boys and girls are equally probable, the four possibilities are also equally probable, each with a probability of 25%. As you know that one of the children is a girl, you can exclude the MM case. As a result, you are left with three possibilities and can conclude that the probability of both children being girls is 1/3, or approximately 33.3%. And obviously, the probability that your friend’s other child is a boy is 66.7%.
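The counting argument can be checked mechanically:

```python
from itertools import product

# the four equally likely gender pairs: MM, MF, FM, FF
families = list(product("MF", repeat=2))

at_least_one_girl = [f for f in families if "F" in f]   # excludes MM
both_girls = [f for f in at_least_one_girl if f == ("F", "F")]

print(len(both_girls), "out of", len(at_least_one_girl))  # 1 out of 3
```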

But then, why did I say that the 50-50 answer is correct from a practical point of view? There are two reasons:
1. No parent gives the same name to their two daughters.
2. The frequencies (and hence, the inferred probabilities) of given names are very low.

Let’s start by taking into consideration that two daughters in the same family always have different names. We do so by splitting the ‘F’ of the above possibilities into ‘x’ and ‘f’, where ‘x’ indicates the girls with a particular first name, and ‘f’ the other girls, who have any other name. This results in a sample space consisting of MM, Mf, Mx, fM, ff, fx, xM, xf, and xx.

‘x’ can be any name we want, including the name of the daughter we know to belong to your friend’s family (even if we don’t know that name). Then, after discarding MM as we did before, we can also discard the possibilities that don’t contain ‘x’. This leaves us with Mx, fx, xM, xf, and xx. Now, as parents never give two daughters the same name, we can also discard xx, and remain with the four possibilities Mx, xM, fx, and xf.

If we assume that boys and girls are on average equally probable and that the genders of children of the same family are independent of each other, we can calculate the probabilities associated with the four possibilities:
PMx = PxM = PM * Px
Pfx = Pxf = Pf * Px

We can use the frequency ‘y’ with which the name ‘x’ occurs among girls as an estimate of its probability, and rewrite the two expressions as follows:
PMx = PxM = PM * PF * y = 0.25 * y
Pfx = Pxf = PF * (1 – y) * PF * y = 0.25 * (y – y²)

As you can see, if y² is much less than y (i.e., if y is much less than 1, as stated in our condition 2), all four possibilities have, for all practical purposes, the same probability. Then, the probability that the children are both girls is indeed 50%.
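Numerically, the convergence to 50% is easy to see. Here is a small sketch (the function name is mine) that computes the probability from the four possibilities Mx, xM, fx, and xf for a few values of y:

```python
def p_two_girls(y):
    """P(both girls), given that one child is a girl whose name has
    frequency y, using the probabilities of Mx, xM, fx, and xf."""
    p_mx = p_xm = 0.25 * y
    p_fx = p_xf = 0.25 * (y - y * y)
    return (p_fx + p_xf) / (p_mx + p_xm + p_fx + p_xf)

for y in (0.1, 0.01, 0.001):
    print(y, p_two_girls(y))
```

For y = 0.1 the probability is still only about 47.4%, but already at y = 0.01 it reaches 49.7%, and it keeps approaching 50% as y shrinks.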

But is it true that all names have a frequency much less than 1? If you look at the web site of the US Social Security Administration, you will find the page http://www.ssa.gov/oact/babynames/limits.html from which you can download the number of children born in any particular year and given any particular name (but only if that name was given to at least five children).

Let’s say that your friend’s daughter was born in 2011. Then, you quickly find out that of the 33,723 names listed, out of a total of 3,623,043 girls, the most frequent girl’s name was Sophia, which was given 21,695 times. If your friend gave his daughter the name Sophia, then y = 21,695 / 3,623,043 = 0.0060, and the resulting probability of two girls is around 49.85%.

Even 49.85% is not exactly 50%, but consider that the average occurrence of any name is 3,623,043 / 33,723 ≈ 107, which provides y = 0.00003. Then, the probability of two girls, when we don’t know the name of the daughter your friend certainly has, becomes 49.999%. Or perhaps you find out that the name of your friend’s daughter is Hilde, which in 2011 occurred only 5 times out of 3,623,043. In that case, the probability of his having two daughters is almost exactly 50%.

All in all, we can conclude that 50% is, for all practical purposes, correct, even if the reasoning of many people to reach that value is wrong.

Thursday, November 15, 2012

How to calculate the twin paradox

The Web is full of pages about Special Relativity and how it is responsible for slowing clocks and creating the twin paradox. But what if a star ship accelerates during the first half of its journey and slows down during the second half?

WAIT A MINUTE! Shouldn’t we switch to general relativity when dealing with accelerated systems? Not necessarily. If you accelerate and decelerate along a straight trajectory that joins two star systems and ignore the curvature of space caused by other objects, you are fine with special relativity.

This article tells you how to calculate the time spent by a subluminal (i.e., no warp drives!) star ship constantly accelerating half of the way towards its destination and then constantly decelerating during the second half of its voyage. Its purpose is to support Science Fiction writers who need to write about interstellar travel. I adapted the formulae from an article from the University of California Riverside and, for fun, I re-obtained them from the standard Lorentz transformations for length and time. Initially, I thought I would also explain how it is done, but it would have been a bit too complicated for most people. If you are curious, I found a paper from the University of Leipzig and another article from UCR to be useful.

First of all, let’s define some terminology:
  • ‘a’ is the acceleration of the ship measured on the ship itself. Technically called the proper acceleration of the ship, which is the acceleration felt by the passengers. That is, what an accelerometer placed on the ship will measure.

  • ‘D’ is the distance between the point of departure and point of arrival, measured when the ship is moving at a speed much lower than the speed of light and with its engines off. Basically, you can take it as the distance we would measure from Earth.

  • ‘T’ is the time needed by the ship to make its journey, as measured on Earth. Earth orbits the Sun at 30km/s and the Sun moves at 370 km/s with respect to the cosmic microwave background. But we can ignore these speeds, because they only represent some 0.1% of the speed of light. On Earth, we are also subjected to its gravity and both Earth and the Sun move on curved trajectories, but these accelerations can also be ignored for our purposes. I just read that special-relativity effects slow down the clocks on GPS satellites (orbiting at 20,000km above sea level and travelling at one orbit per 12 hours, or 3.83km/s) by 7μs/day, while general-relativity effects (Earth’s gravitational force is much weaker up there) speed them up by 45μs/day.

  • ‘t’ is the time needed by the ship to make its journey, as measured on the ship.

Here we go. Let’s start with the time measured on Earth.  This is given by:

T = 2 * sqrt[(D/(2c))² + D/a]

To make our life easier, we will measure time in y (years), distances in ly (light years: the distance light covers in one year), and speeds as fractions of c (the speed of light, ~300,000km/s). In these units, g (the gravitational acceleration at Earth’s surface, 9.81 m/s²) turns out to be 1.03 ly/y². With this choice of units, c disappears from the above formula, because c = 1.

For example, let’s suppose that we want to reach Proxima Centauri, the nearest star to our solar system (4.24 ly) and that our ship can sustain the acceleration of 0.1g. For the people left back on Earth, the journey will take:

T = 2 * sqrt[(4.24/2)² + 4.24/(0.1 * 1.03)] = 13.51y

With an acceleration of 1g (ten times higher), still from the point of view of Earth-bound people, the journey would take 5.87y (you only need to remove the 0.1 from the above expression).

The time measured on the ship is given by:

t = (2c/a) * arcsinh[a*T/(2c)]

With c = 1, the formula becomes:

t = 2 * arcsinh[a*T/2] / a

arcsinh is the inverse function of the hyperbolic sine. You’ll probably find it in Excel (haven’t checked). I have it in the calculator application on my Mac when I set it to scientific mode.

So, how much older do the passengers of our ship become when they travel to Proxima at 0.1g and 1g? You only need to plug a and T into the formula to obtain 12.61y and 3.55y.

Not a big deal, is it? In case you are wondering, the top speed, when the ship is halfway to Proxima and switches from 1g of acceleration to 1g of deceleration, is given by:

v = (a*T/2) / sqrt[1 + (a*T/(2c))²] = (1.03 * 5.87 / 2) / sqrt[1 + (1.03 * 5.87 / 2)²] = 0.95c
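The three formulae can be bundled into a short script (a sketch in the units used here, with c = 1 and g = 1.03 ly/y²; the function name is mine):

```python
import math

G = 1.03  # 1g expressed in ly/y^2 (and c = 1 in these units)

def journey(D, a):
    """Constant proper acceleration `a` for the first half of the
    distance `D`, constant deceleration for the second half.
    Returns (T, t, v): Earth time, ship time, top speed in units of c."""
    T = 2 * math.sqrt((D / 2) ** 2 + D / a)            # time on Earth
    t = 2 * math.asinh(a * T / 2) / a                  # time on the ship
    v = (a * T / 2) / math.sqrt(1 + (a * T / 2) ** 2)  # top speed
    return T, t, v

T, t, v = journey(4.24, G)  # Proxima Centauri at 1g
print(f"T = {T:.2f}y, t = {t:.2f}y, v = {v:.2f}c")
```

Change the first argument to reproduce the other destinations discussed below.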

If you go to Tau Ceti, a star similar to ours that is 11.9 ly away, you get, for 1g acceleration:

T = 13.70y
t = 5.15y
v = 0.9901c

Now the differences become more significant. Still, you would have expected a more dramatic difference, wouldn’t you? I did.

OK. Let’s look at the planet HD 40307g. It is the latest Earth-like planet discovered. It might have a gravity twice as strong as Earth’s, but it orbits a star slightly cooler than ours with a 200-day period. It also seems that it rotates on its axis (rather than being tidally locked), which would imply a day-and-night cycle. It could have liquid water and be able to sustain life. Its distance from us is 42 ly.

T = 43.90y
t = 7.40y
v = 0.9990c

Well, here the twin paradox is definitely dramatic. After a round trip, the twin on Earth would be 2 * (43.9 – 7.4) = 73y older.

Now, what type of propulsion could possibly accelerate a ship at 1g for more than seven years? You tell me!