Sunday, June 23, 2013

Feynman's heir - Dr. Murray Salby on the atmospheric physics of the anthropogenic global warming hypothesis


I've no way to describe the following presentation, other than to call it absolutely brilliant.

Take an hour and watch it.  Salby definitively wrecks the last 20+ years of "consensus" climate science, and he does it with math.

And he closes with a direct quote from Richard Feynman, which ought by rights to serve as the professional epitaph of all global warming alarmists; all of their self-serving, scientifically illiterate political dupes; and all those members of the permanent bureaucracy who have spent the last several decades as the intellectual equivalent of the Inquisition, stamping out dissent in order to protect the political orthodoxy of so-called "anthropogenic" global warming:

"If it disagrees with observations - it's wrong.  That's all there is to it."


Monday, May 6, 2013


Hello all,

I've decided to make one of my fantasy novels, The Sea Dragon, free for the whole summer!

That's right - if you visit my author site at Smashwords, you can download it for free until 6 September 2013.

Just head over to the website and enter this coupon code:


I hope you enjoy reading it as much as I enjoyed writing it.

- Don

Monday, March 4, 2013

SMASHWORDS Read an e-book week!

Hello all,

Smashwords is hosting Read-an-e-book week, and I've signed up.

From now until 9 March 2013, all of my books will be 50% off at my Smashwords author site.

If you feel like a little escapist fantasy (or some hard-core strategic analysis of NATO's crisis decision-making organs and how they functioned in the Kosovo campaign), head on over and pick up my books for half price.

And thanks!

- Don

Saturday, December 15, 2012

9 February 2012 – Where’s the Yorktown?

The most costly of all follies is to believe passionately in the palpably not true.  - H.L. Mencken


The other day I was reading an article from the July 2011 edition of the Journal of Military History - "Some Myths of World War II", by Gerhard Weinberg - when one of the passages leapt out at me.  I'll cite it in its entirety for you in a moment, but I wanted to explain first what it was that had caught my eye.

As those of you who read a line or two into these missives before pressing the delete key have no doubt noticed, I'm a strong proponent of evidence as the core and foundation of science.  Evidence, I've argued in the past, plays the key role in every step in a rigorous scientific endeavour.  The first step in science - observation - involves noticing some sort of data, recording it, and wondering what caused it.  The second step, hypothesization, requires coming up with a thesis that explains and is consistent with one's initial observations.  The third step, experimentation, is an organized search for more evidence - especially evidence that tends to undermine your hypothesis; and the fourth, synthesis, requires the scientist to adapt his hypothesis so that it explains and is consistent with not only the original observed evidence, but also all of the evidence garnered through research and experimentation.  This last criterion is key to the scientific method: if one's hypothesis is not consistent with observed data, it must be changed; and if it cannot be changed to account for observations, it must be discarded.  Science is like evolution in that regard; only the fittest hypotheses survive the remorseless winnowing process, and progress requires enormous amounts of time, and enormous amounts of death.  The graveyards of science are littered with extinct theories.

But there's a fifth stage to science, one that tends to be overlooked because it's technically not part of the "holy quaternity" I've outlined above.  The fifth stage is communicating one's conclusions - not only to one's fellow seekers-of-fact, but also to those on whose behalf one's efforts are engaged.  This stage requires telling the whole story of the endeavour, from start to finish, including all of the errors and pitfalls.  Publicizing one's misapprehensions and mistakes is a scientific duty, for two reasons: first, because it improves the efficiency of science by helping one's colleagues avoid the errors of observation and analysis that you've already made (in army terms, "There's no substitute for recce"); but more importantly, because total disclosure of data and methods, and frank, transparent discussion of errors of fact and judgement, are what give science its reliability and rigour.  Transparency and disclosure are the keystone of the scientific method, and are the reason that science works - and the reason, I might add, that science has produced steady progress since the Enlightenment, and has come to be trusted as the only reliable means of understanding the natural world.

The thing is, disclosure of data and methods, and frank discussion of errors of fact and judgement, might be strengths in the scientific field, but in other fields of endeavour, they are often interpreted as weaknesses.  In a competitive business environment, for example, disclosure of data and methods makes it easier for competitors to appropriate or undermine your ideas.  In such a cut-throat atmosphere, the individual who conceals his activities, who refuses to discuss with colleagues (whom he likely views as competitors) what he is doing, and how, and why, will enjoy a comparative advantage vis-à-vis those who are more open about their data, methods and conclusions.  As for discussion of errors of fact and judgement, in a non-scientific atmosphere, the admission that one has made errors in experimental design or in data collection, or the confession that one may have misinterpreted observations or the results of an experiment, may be taken as a sign of incompetence or lack of ability, rather than as an indication of the type of honest transparency that is the hallmark of good science.

Acknowledgement of error, of course, is also key to correcting an erring course of action - and this is true in every area of activity in life, not merely in science.  The first step, as they say, is admitting that you have a problem - and this leads me back to the citation from the article that prompted this line of thought in the first place.  In the article I mentioned above, Weinberg - a distinguished scholar who has been writing about the Second World War for the better part of 70 years - discusses a variety of "myths" about the well-known leaders of World War II.  When he comes to Admiral Isoroku Yamamoto, one of his more trenchant questions pertains to why the famous naval commander, in preparing for the pre-emptive attack on Pearl Harbour, was "so absolutely insistent on a project with defects that were so readily foreseeable."  Why, for example, would you attack a fleet in a shallow, mud-bottomed harbour, where damaged vessels could be so easily raised and repaired (18 of the 21 ships "severely damaged" at Pearl Harbour eventually returned to service)?  Weinberg suggests that it may have been just as well for Yamamoto's peace of mind that he was dead by the time of the battle of Surigao Strait in October 1944, "when a substantial portion of the Japanese Navy was demolished by six American battleships of which two had allegedly been sunk and three had been badly damaged" three years earlier at Pearl Harbour.  Weinberg speculates that Yamamoto may have been so "personally invested" in his tactical plan for the Pearl Harbour attack that he was "simply unable and unwilling to analyze the matter objectively":

There is an intriguing aspect of the last paper exercise of the plan that he conducted in September 1941, about a month before the naval staff in Tokyo finally agreed to his demand that his plan replace theirs.  In that exercise, it was determined that among other American ships sunk at their moorings would be the aircraft carriers including the Yorktown.  Not a single officer in the room had the moral courage to say, “But your Excellency, how can we sink the ‘Yorktown’ in Pearl Harbor when we know that it is with the American fleet in the Atlantic?”  None of these officers lacked physical courage: they were all prepared to die for the Emperor, and many of them did.  But none had the backbone to challenge a commander whose mind was so firmly made up that none dared prize it open with a touch of reality.[Note A]

Where do we lay the blame for this error of fact?  On Yamamoto, for coming up with a scenario whose fundamental assumptions were known to be wrong?  On his subordinates, who accepted those assumptions, knowing them to be wrong?  Remember, this was not a matter of a difference of opinion between experts; it wasn't a question of differing interpretations of equivocal results.  Nor was this a notional, peace-time exercise; this was an exercise that took place in the work-up stage of what was intended to be the very definition of a critical strategic operation - a knock-out blow that would curtail the ability of the US to interfere with Japan's plans for the Pacific.  In such circumstances, one would think that accuracy of data would be at a premium, and that fundamental errors of fact would be corrected as soon as they were noticed.  But this didn't happen.  This is a very telling anecdote, and Weinberg distils from it a devastatingly important observation: that men who had no fear of death, and who would (and did) fight to their last breath, nonetheless somehow lacked the moral courage to disagree with their superior on a matter of indisputable fact: that the exercise scenario did not reflect the real world.  Where the USS Yorktown actually was (it was conducting neutrality patrols between Bermuda and Newfoundland throughout the summer and autumn of 1941) was in fact very important - and what is more, Japanese intelligence had that information.  Yet no one stepped up to correct the scenario.  No one challenged the planning assumptions when actual data showed those assumptions to be wrong. 

Yorktown's location should have mattered to Yamamoto.  When the Japanese Navy struck Pearl Harbour on 7 December 1941, USS Yorktown was in port in Norfolk, Virginia.  She left Norfolk on 16 December, and was in San Diego by 30 December.  She was later involved in the early 1942 raids against the Marshall and Gilbert Islands.  Her air wing accounted for numerous Japanese warships and landing craft at the Battle of the Coral Sea.
She was also at Midway, where her air wing destroyed the Japanese carriers Soryu and Hiryu, the latter while flying from USS Enterprise after Yorktown was disabled. 
In less than six months, that one ship accounted for 25% of Japan's carrier strength. 
Where she was in the fall of 1941 mattered, because none of these things would have happened if Yorktown had been badly damaged or sunk at Pearl Harbour.

Why did Yamamoto's exercise plan assume that Yorktown would be among the ships tied up in Battleship Row, when he knew it was on the other side of the world?  Why did no one challenge the scenario?  It's easy, as many historians have done (and not without justification) to write off staff subservience to a commander as the inevitable consequence of centuries of deification of the Emperor, the warrior philosophy of bushido, and decades of Imperial Japanese militarism.  But how can we simultaneously accept that such an organizational entity is at once both a dangerously innovative and adaptive enemy, and yet so rigidly hierarchical that the thought processes of the senior leadership are utterly impervious even to objective fact?  The two qualities, it seems to me, are mutually exclusive.  One cannot be both intellectually flexible and intellectually hidebound.

Yamamoto's refusal to incorporate known facts into his exercise plan, and his subordinates' refusal to insist on their incorporation, resulted in flawed planning assumptions.  Over time, the impact of these flawed assumptions came back to bite them in their collective posteriors, in some cases personally.  But they bit the Empire, too.  In her 1984 treatise on folly, Barbara Tuchman spends most of her time castigating governmental elites for long-term, dogged pursuit of policy contrary to the interests of the polity being governed; but she also investigates the sources of folly.  One of these, upon which she elaborates with characteristic rhetorical verve, is the refusal of those in positions of authority to incorporate new information, including changes in objective facts, into their plans:

Wooden-headedness, the source of self-deception, is a factor that plays a remarkably large role in government.  It consists in assessing a situation in terms of preconceived fixed notions while ignoring or rejecting any contrary signs.  It is acting according to wish while not allowing oneself to be deflected by the facts.  It is epitomized in a historian’s statement about Philip II of Spain, the surpassing wooden-head of all sovereigns: ‘No experience of the failure of his policy could shake his belief in its essential excellence’. [Note B]

Or to put it more simply, as economist John Maynard Keynes is reputed to have said when he was criticized for changing his position on monetary policy during the Great Depression, "When the facts change, I change my mind.  What do you do, sir?"

I could go on about this at great length, citing innumerable examples (as I expect we all could) but I thought that just one would do - surprisingly, from the field of climate science.  In its 4th Assessment Report, issued in 2007, the IPCC claimed that Himalayan glaciers were receding faster than glaciers anywhere else in the world, and that they were likely to "disappear altogether by 2035 if not sooner."  In 2010, the IPCC researchers who wrote that part of the report confessed that they had based it on a news story in the New Scientist, published in 1999 and derived from an interview with an Indian researcher - who, upon being contacted, stated that his glacial melt estimates were based on speculation, rather than formal research.  Now, according to new, peer-reviewed research based on the results of satellite interferometry - i.e., measured data rather than speculation - the actual measured ice loss from glaciers worldwide is "much less" than has previously been estimated. 

As for the Himalayas - well, it turns out they haven't lost any ice mass at all for the past ten years.(Note C)  As one of the researchers put it, the ice loss from high mountain Asia was found to be "not significantly different from zero." 

Amongst other things, this might explain why satellite measurements show that sea levels, far from rising (as the IPCC said they would), have been stable for a decade, and now appear to be declining.  After all, the amount of Aitch-Two-Oh on Earth is finite, so if the glaciers aren't melting, that would explain why the sea levels aren't rising, non?

Remember, according to the IPCC AR4, under the most optimistic scenario for GHG reductions (i.e., massive reductions in GHG emissions, which certainly haven't happened), sea level would increase by 18-38 cm by the year 2099, which equates to an absolute minimum increase of 1.8 mm per year (see the AR4 SPM, pp 7-8, Figure SPM.5 and Table SPM.1).  According to the IPCC's model projections, over the past decade sea levels should have risen by at least 1.8 cm (on the above chart, that's half the Y-axis, which is 35 mm from top to bottom.  In other words, the right hand end of the curve should be well above the top of the chart).  But according to the most sensitive instruments we've yet devised, sea levels haven't risen at all over the past decade, and for the past three years have been steadily falling.  This isn't a model output; it's not a theory.  Like the non-melting of Himalayan glacier ice, it's observed data.
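
For the arithmetically inclined, here's the back-of-envelope check behind those numbers as a quick Python sketch.  The inputs are the figures quoted above, not pulled independently from the AR4 tables:

```python
# Minimum sea-level rise rate implied by AR4's most optimistic scenario,
# using the figures quoted above (18 cm low-end rise by 2099).
rise_low_cm = 18           # AR4 low-end projection, in centimetres
years = 2099 - 1999        # roughly a century of projection

rate_mm_per_year = rise_low_cm * 10 / years
print(f"Minimum implied rate: {rate_mm_per_year:.1f} mm/year")
print(f"Expected rise over a decade: {rate_mm_per_year * 10:.0f} mm")
```

That decadal figure is the 1.8 cm referred to above.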

Why point this out?  Well, our current suite of planning assumptions states that, amongst other things, Himalayan glaciers (along with glaciers all over the world) are melting, and that as a consequence, sea levels are rising.  "Rising sea levels and melting glaciers are expected to increase the possibility of land loss and, owing to saline incursions and contamination, reduce access to fresh water resources, which will in turn affect agricultural productivity." (Note D)  Those assumptions were based on secondary source analysis - basically, the IPCC reports, which were themselves largely based on secondary source analysis (and, as shown above, occasionally on sources with no basis in empirical data) - but when verified against primary research and measured data, they are demonstrably wrong.  In essence, stating that defence planning and operations are likely to be impacted by "rising sea levels" or "melting glaciers" is precisely as accurate as Yamamoto planning to sink the USS Yorktown at Pearl Harbour when everyone knew that it was cruising the Atlantic.

A big part of folly consists of ignoring facts that don't conform to one's preconceptions.  Actual planning or policy decisions may be above our pay grade as mere scientists - but what isn't above our pay grade is our moral and professional obligation to point out the facts, and to remind those who pay us that when facts conflict with theories or assumptions, it's the theories or assumptions that have to change. 

So when we see people basing important decisions on inflammatory rhetoric from news reports – for example, a CBC piece hyping an already-hyped “scientific” study arguing that sea level rises have been “underestimated”...

Some scientists at an international symposium in Vancouver warn most estimates for a rise in the sea level are too conservative and several B.C. communities will be vulnerable to flooding unless drastic action is taken.

The gathering of the American Association for the Advancement of Science heard Sunday the sea level could rise by as little as 30 centimetres or as much as one metre in the next century.

But SFU geology professor John Clague, who studies the effect of the rising sea on the B.C. coast, says a rise of about one metre is more likely.

...it behooves us, as scientists, to look at the data.  And what do the data say in this case?  Well, they say that over the past century, the sea level at Vancouver hasn't risen at all.

The average sea level at Vancouver for the past century has been 7100 mm.  If, as the above-cited report claims, Vancouver's sea level had risen by 30 cm, then the average at the right-hand end of the chart should be 7400 mm, close to the top of that chart.  But it's not; it's still 7100 mm. The trend is flat. Absolutely flat.
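
If you want to run that kind of check yourself, the standard approach is a least-squares fit to the tide-gauge record and a look at the slope.  Here's a minimal sketch - note that the numbers below are made-up placeholder data for illustration, not the actual Vancouver record:

```python
# A minimal sketch of the trend test described above: fit a least-squares
# line to annual mean sea levels and read off the slope in mm/year.
# The values here are hypothetical placeholders, not real tide-gauge data.
import numpy as np

years = np.arange(1912, 2012)
rng = np.random.default_rng(0)
# A flat record around 7100 mm with some year-to-year noise (hypothetical):
levels = 7100 + rng.normal(0, 20, size=years.size)

slope_mm_per_year, intercept = np.polyfit(years, levels, 1)
print(f"Fitted trend: {slope_mm_per_year:+.2f} mm/year")
```

A genuinely rising sea would show up as a slope comparable to the IPCC's projected millimetres per year; a flat record gives a slope indistinguishable from zero.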

So as it turns out, as far as one of the supposed "key indicators" of anthropogenic climate change is concerned, nothing is happening. If sea levels are the USS Yorktown, the Yorktown isn't at Pearl Harbour at all. She's nowhere near Hawaii. She's not even in the Pacific; she's on the other side of the world, off cruising the Atlantic. But nobody has the moral courage to say so. All of the planning assumptions in use today are firmly fixated on the idea that the Yorktown is tied up at Pearl.

Well, assume away, folks. But no matter how many dive bombers you send to Battleship Row, you are simply not going to find and sink the Yorktown, because the facts don't give a tinker's damn about your assumptions. She is just...not...there.

Of course, finding out the facts – the observed data – is only the first part of our duty as scientists. The next and even more important part is...what do we do with them?  Unlike Yamamoto's juniors, whose ethical code condemned them to obedience, silence and eventually death, our ethical code requires us to sound the tocsin - to speak up when decisions are being made on a basis of assumptions that are provably flawed. And more than that; we shouldn't just be reporting contrarian evidence when we stumble across it, we should be actively looking for it.  A scientific theory can never be conclusively proved; it can only be disproved, and it only takes a single inexplicable fact to disprove it.  Competent scientists, therefore, hunt for facts to disprove their theories - because identifying and reconciling potential disproofs is the only legitimate way to strengthen an hypothesis.  We have to look for things that tend to undermine the assumptions that are being used as the foundation for so much time, effort, and expenditure, because that's the only way to test whether the assumptions are still valid. If we don't do that - if we simply accept the assumptions that we are handed instead of examining them to see whether they correspond to known data, and challenging them if they don't - then we're not doing the one thing that we're being paid to do that makes us different from the vast swaths of non-scientists that surround us. 

Playing the stoic samurai when important, enduring, expensive and potentially life-threatening decisions are being based on demonstrably flawed assumptions is fine if your career plan involves slamming your modified Zero into the side of a battleship and calling it a day.  But if you'd prefer to see your team come out on top, sometimes you've got to call BS.  That should be easier for scientists than it is for most folks, because it's not just what we're paid to do; it's our professional and ethical obligation to speak out for the results of rigorous analysis based on empirical fact.  
So when you think about it, "Where's the Yorktown?" is in many ways the defining question that we've all got to ask ourselves: is she where our assumptions, our doctrine, last year's exercise plan, the conventional wisdom, the pseudo-scientific flavour of the month, or "the boss" says she ought to be? 

Or is she where, based on observed data, we know her to be?
How we answer that question when it's posed to us determines whether we are scientists...or something else.



Post Scriptum
Today, the USS Yorktown lies in 3 miles of water off Midway Island.  She was rediscovered by Robert Ballard on 19 May 1998, 56 years after she was lost.

For the record, the average sea level at Midway Atoll has increased by about 5 cm since 1947 (65 years). This amounts to 0.74 mm per year, which is less than half the alleged minimum rate of sea level increase expected due to "climate change", according to the IPCC. And even that tiny increase is due not to "anthropogenic climate change", but rather to isostatic adjustment as the weight of the coral atoll depresses the Earth's crust, and - according to this recent study - to tidal surges associated with storminess that correlates with the increasing Pacific Decadal Oscillation over the past sixty years.
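
The arithmetic behind that rate, for anyone who wants to check it (the 0.74 mm/year figure implies a rise of about 48 mm, so that's what the sketch uses):

```python
# Midway sea-level rate: ~5 cm of rise over the 65 years since 1947.
rise_mm = 48    # "about 5 cm", taken as ~48 mm to match the quoted rate
years = 65

rate = rise_mm / years
print(f"Observed rate at Midway: {rate:.2f} mm/year")
print(f"Fraction of the 1.8 mm/year IPCC minimum: {rate / 1.8:.0%}")
```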

It's all about the evidence, friends. You either have it or you don't. And if you have it, you either try to explain it - like a real scientist - or you ignore it. Ignoring the evidence didn't turn out well at all for Yamamoto, the Imperial Japanese Navy, or in the long run, Imperial Japan.

Photos courtesy the US Navy History and Heritage Command.

A) Gerhard L. Weinberg, “Some Myths of World War II – The 2011 George C. Marshall Lecture in Military History”, Journal of Military History 75 (July 2011), 716.

B) Barbara Tuchman, The March of Folly from Troy to Vietnam (New York: Ballantine Books, 1984), 7.


D) The Future Security Environment 2008-2030 - Part 1: Current and Emerging Trends, 36.

Friday, December 14, 2012

Apollo 17 - 40 years, and still waiting


40 years ago today, Gene Cernan became the last human to walk on another world.

It had only been 11 years since Alan Shepard had flown into space for a grand total of fifteen minutes, atop a glorified bottle rocket. Cernan and Harrison Schmitt spent three days on the Moon, while crewmate Ron Evans kept watch in lunar orbit.

To quote Tom Hanks (as Apollo 13 commander Jim Lovell), "It's not a miracle.  We just decided to go."

Man's reach must exceed his grasp, else what's a heaven for?

Monday, December 10, 2012

Bad Gas


Last year, when the EPA announced its new fuel efficiency standards for automobiles, I penned this little analysis. I thought I'd post it now, seeing as how Canada, courtesy the irrepressible bureaucrats at Environment Canada, has decided to follow Obama down the rabbit hole.



Just a short note today to signal Wednesday's announcement by the White House of new EPA regulations that will require automobiles, by 2025, to achieve a fuel efficiency standard of 54.5 miles per gallon (Note A).

…yeah, okay then.  For the record, the auto industry, and the internal combustion engine that is its principal component, is over 100 years old.  Automakers have been attempting to improve fuel efficiency for pretty much the whole of that period.  How far, you ask, have they gotten?  Well, according to Ford, the top fuel efficiency of the Model-T - the Flivver, the infamous Tin Lizzie, mass production of which began in September of 1908, a little over 104 years ago - was 25 MPG.  The highway fuel efficiency rating of a 2012 4WD Ford Fusion is…25 MPG.  The latter is naturally a little more comfortable, what with air conditioning, a CD player, and cushioned as opposed to wooden seats, but the fuel efficiency is pretty much the same.

Think I'm joking?  Let's go to the data.  In 1978, a 6-cylinder Jeep CJ got 18.1 MPG on the highway.  In 2011, a Jeep Patriot 2WD, after 33 years of gas crunches, climbing gas prices, and EPA efficiency targets, got 29 MPG on the highway.  Not interested in SUVs?  In 1978, the top fuel efficiency for a production car was 35.1 MPG on the highway, achieved by the Chevy Chevette (it beat the famously fulminatory Pinto by half a MPG).  The Chevette, for those of you who never got to see Star Wars in a theatre, was a two-door hatchback weighing in at less than a ton.  A comparable vehicle today?  How about a Honda Civic?  It gets 36 MPG.

I expect you see the pattern.  Let's make it graphical.  Here's US Government data on the MPG characteristics of the current suite of 2012 production vehicles on offer.  Along the Y-axis we have highway fuel efficiency in MPG; along the bottom, vehicular class, as follows: 1=2-seaters, 2=minicompact, 3=subcompact, 4=compact, 5=midsize cars, 6=large cars, 7=small station wagons, 8=midsize station wagons, 9=large station wagons, 10=small 2WD pickups, 11=small 4WD pickups, 12=standard 2WD pickups, 13=standard 4WD pickups, 14=cargo vans, 15=passenger vans, 17=2WD special purpose vehicles, 20=2WD minivans, 21=4WD minivans, 22=2WD SUVs, and 23=4WD SUVs.

2015 is three years away.  Which cars meet the EPA's mandated fuel efficiency targets for that year?  Well, using composite city-highway mileage, there are 8 that do (they don't all show up because of overlap on the graph).  Here they are:  The Toyota Prius, the Prius wagon, the Honda Civic Hybrid, the Lexus CT 200h, the Ford Fusion Hybrid, the Lincoln MKZ hybrid, the Chevy Volt, and the Toyota Scion iQ.

Guess what they all have in common?  That's right.  They're all hybrids.  Notice something else?  They're all small.  Even the one that nominally rates as a wagon - the Prius v - isn't exactly a battleship.  In other words, it's impossible for today's automakers to make a car that meets the EPA's 2015 fuel efficiency standard without making it both small and a hybrid.

What if we look just UNDER the EPA standard?  Well, the Volkswagen Passat gets 35 MPG combined (it gets 43 MPG on the highway, which is better than every hybrid vehicle except the Prius and the Civic hybrid).  Moreover, it's the same size as the Prius (bigger than the Volt or the Civic).  So what's the difference?  It's in the MSRP, amigos.


2012 Volkswagen Passat Sedan - base price $19,995

2012 Chevy Volt - base price $41,545


With that kind of price differential, who'd buy a Volt?  Well, would it change your mind if you knew that Dalton McGuinty would give you back $10,000 if you did? (Note B)  That still leaves you paying a $12,000 premium for a smaller vehicle.  At a difference in fuel efficiency of only 5.4% (combined; the Passat is actually 7% more fuel-efficient than the smaller Volt in highway driving), it's going to take a long time to make up $12K.
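
Here's a rough payback sketch.  The annual driving distance and gas price are my own illustrative assumptions (not from any official source), and collapsing the Volt into a single combined-MPG number glosses over its plug-in electric mode - so treat this as an order-of-magnitude estimate only:

```python
# Rough payback estimate for the post-rebate price premium discussed above.
# Driving distance and fuel price are illustrative assumptions.
premium = 12_000            # price gap after the $10,000 rebate, in dollars
km_per_year = 20_000        # assumed annual driving distance
price_per_litre = 1.30      # assumed gasoline price, in dollars

mpg_passat = 35.0                  # combined MPG, from the post
mpg_volt = mpg_passat * 1.054      # 5.4% better combined, per the post

def litres_per_year(mpg):
    km_per_litre = mpg * 1.609 / 3.785   # convert MPG to km/L
    return km_per_year / km_per_litre

annual_saving = (litres_per_year(mpg_passat)
                 - litres_per_year(mpg_volt)) * price_per_litre
print(f"Annual fuel saving: ${annual_saving:.0f}")
print(f"Years to recoup the premium: {premium / annual_saving:.0f}")
```

Even under generous assumptions, a 5.4% efficiency gap on a $12,000 premium puts the break-even point decades out.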

Now, bearing in mind that the fuel efficiency of the Model-T 100 years ago was 25 MPG, take a look at how many production vehicles currently meet the EPA fuel efficiency target for 2025.  The answer is "none".  The only one that even comes close is the Prius, and it's still 10% shy of the gold standard.  In fact, the only vehicles that presently meet that target are all-electric models like the Tesla - vehicles that don't burn any fuel at all.

NOW do you get the picture?  The purpose of the EPA's 2025 target is to eliminate the internal combustion engine.

Here's my question: just how likely do you think that is?

A) []
B) []


Just to drive the point home (so to speak), the average increase in fuel efficiency for cars over the past century has been zero. The Ford Fusion gets the same MPG as a Model T. But Obama's EPA - and now Environment Canada - expect auto manufacturers to achieve a more-than-100% increase in fuel efficiency by 2025. That's thirteen years away. The laws of physics haven't changed. The only possible conclusion - the ONLY conclusion - is that this is an ideologically-driven attempt to regulate the internal combustion engine out of existence.
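
For the record, here's the size of the jump the 2025 standard actually demands, using the 25 MPG Model T/Fusion figure from above:

```python
# Required percentage improvement to get from today's 25 MPG
# (the Model T / Ford Fusion figure quoted above) to the 54.5 MPG target.
current_mpg = 25.0
target_mpg = 54.5

increase = (target_mpg / current_mpg - 1) * 100
print(f"Required improvement: {increase:.0f}%")
```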

And why? As an auto-da-fe; an act of faith to propitiate the Gods of Carbon Dioxide.  As a little reminder, there has been no statistically significant warming for 16 years despite a 10% increase in atmospheric carbon dioxide concentrations. There is no justification in science or reason for the anthropogenic global warming thesis - and thus no justification in science or reason for costly, pointless government regulations aimed at outlawing one of the fundamental technologies that drive the western world.

A few more words on the Canadian take on this little endeavour. Here's how the government plans to sell the standards change to Canadians.

"These new regulations improve fuel efficiency so that by 2025 new cars will consume 50% less fuel and emit 50% less GHGs than a similar 2008 model, leading to significant savings at the pump," said Environment Minister Peter Kent. "At today's gas prices, a Canadian driving a model year 2025 vehicle would pay, on average, around $900 less per year compared to driving today's new vehicles."

Awesome. So at a savings of $900 per annum, it'll only take the average Canadian family thirteen and a third years to pay off the extra $12,000 that their Volt will cost them. Assuming, of course, that they didn't finance the difference, and assuming that the Ontario government can afford to continue subsidizing every Volt that's sold to the tune of $10,000 a pop.

Ontarians buy 45,000-50,000 new passenger cars per month. That's 540,000 - 600,000 new cars per year. If everyone buys Volts - and once these new regulations are in force, that's all that anyone in Ontario will be allowed to buy - then the Ontario government is going to be on the hook for $10,000 x 550,000 = $5,500,000,000 in hybrid car subsidies every year.

Ontario's budget deficit in 2012 was already $15.2 billion. These subsidies would bump that up by a third.  All to support sales of a car that can go at most 80 km on a 10-hour charge, provided it's not too cold.
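
The arithmetic, spelled out (all inputs are the figures quoted in the paragraphs above):

```python
# Ontario hybrid-subsidy arithmetic, using the figures quoted above.
cars_per_year = 550_000    # mid-range of 540,000-600,000 new cars per year
rebate = 10_000            # per-vehicle incentive, in dollars
deficit = 15.2e9           # Ontario's 2012 budget deficit, in dollars

subsidy_bill = cars_per_year * rebate
print(f"Annual subsidy bill: ${subsidy_bill / 1e9:.1f} billion")
print(f"Increase over the 2012 deficit: {subsidy_bill / deficit:.0%}")
```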

Can't politicians do arithmetic?

Thursday, December 6, 2012

Doha, dear (A repeat from December 2011)


In honour of the UN's annual climate alarmapalooza at Doha last week, I thought I'd reprise a little piece I wrote a year ago, after the 17th Climate Kerfuffle at Durban, South Africa - because since then, the only thing that's changed is that there is more empirical evidence than ever that the Sun controls terrestrial climate, and still no empirical evidence whatsoever that carbon dioxide - let alone the 5% of atmospheric CO2 attributable to human activities - plays any role whatsoever in twisting the planetary thermostat.


The UN's annual climate conference (17th Conference of Parties, or COP) at Durban wrapped up this past weekend.  I'll spare you all the gory details; basically, delegates kicked the can down the road, agreeing to hammer out a new binding framework agreement by 2015 with provisions for greenhouse gas reductions that would kick in around 2020.  Negotiators painted this as a victory for diplomacy, climate skeptics as a victory for skepticism, and environmental crusaders as a guarantee of apocalypse.

I'm not going to rant and rail about this; I'm simply going to offer 3000 words on it in the form of three pictures.  Charts, actually.  Together, these explain the outcome at Durban.

The first one explains the problem with climate science:

The AGW thesis asserts that temperature responses will scale linearly with forcings, the most important of which, according to the IPCC, is anthropogenic carbon dioxide emissions.  However, anthropogenic carbon dioxide emissions account for less than 5% of atmospheric carbon dioxide - and as this chart shows, according to measured data, there is no correlation over the course of the past decade between atmospheric carbon dioxide concentration and average global temperature.  All of the arguments underlying the Kyoto Protocol, and all conceivable successor agreements, are based on linking anthropogenic CO2 to temperature - and yet temperature does not appear to respond to CO2 at all.  That's the first problem.

Here's the second problem:

The data are from British Petroleum.  That's a chart showing the CO2 emissions of the world's top 14 emitters - every country that currently puts out more than 400 million tonnes of CO2 per year.  Only three of them matter: the US (dark blue), whose emissions are declining; China (scarlet), whose emissions are skyrocketing; and India (beige), whose emissions are climbing slowly but accelerating.  Every other major emitter is lower than these three, and is showing stagnation or decline.  Accordingly, any emissions control or reduction agreement that requires reductions from the US or any other western state, but does not require significant (i.e. massively disproportionate) reductions from China and India, is utterly pointless - as is talk of any "carbon debt" owed to the "Third World".

Incidentally, activists at Durban made much of Canada's alleged "delinquency" on the carbon file.  Take a look at the data and explain to me why Canada (a delightful fuchsia) is a bigger problem for the environment than, say, China.  Or India.  Or the US.  Or Japan.  Or Germany.  Or for that matter South Africa, whose emissions are at the same level as ours.  And don't be fooled by arguments about "per capita" emissions; that matters to us, but Gaia doesn't care how many people live in your country; she only cares how much CO2 you pump out.

Speaking of "carbon debts", here's the third and last chart:

This is data from Japan's Ibuki satellite, which measures carbon dioxide flux.  What it shows is the daily rate of change in CO2 flux over various parts of the planet, in the four different seasons.  As we all know, most of the world's human-produced CO2 is produced by northern hemisphere countries, because that's where most of the industry is (the only state in the above list that isn't in the northern hemisphere is South Africa).  And we see from this data that the CO2 flux changes massively from season to season.  In high summer, the northern hemisphere is a massive carbon sink.  In autumn, with the exception of the northeastern US states and parts of Siberia, the northern hemisphere is a net CO2 producer.  In the depths of winter, the northern hemisphere - except for the Middle East, the Subcontinent, and Central Asia - is a net CO2 producer; and in the spring, the northern hemisphere, except for Europe and Northeastern Canada, is a net CO2 producer.  Looks pretty simple, doesn't it?  After all, we burn more fuel to stay warm in the autumn, winter, and spring, don't we?

Well, it's not that simple. Take a look at China.  It's the only country that doesn't change colour.  Throughout the year, China is a net CO2 producer.  The oceanic bands are interesting, too; in the winter, when the northern hemisphere is producing a lot of CO2, the North Atlantic is absorbing CO2 like crazy.  This is because cold water can hold more dissolved gas than warm water.  It's also worth noting that the temperate southern hemisphere oceanic bands seem to be absorbing a lot of CO2.

Why does this matter?  Well, take another look at Canada.  We're supposed to be climate criminals - and yet it's the seasons, not human activity, that are the key determinant of our status as a net carbon emitter or consumer.  It's tempting to blame the change on the use of heating fuel, but that explanation just doesn't hold water; if you look at fuel consumption patterns for the US, fuel oil consumption peaks in the winter, when gasoline consumption troughs, and vice-versa - except that US consumers use twice as many barrels of gasoline as they do of fuel oil at any given time.  In other words, you would expect CO2 emissions to be higher in the summer time, when gasoline consumption is high, and fuel oil consumption is low; but summer is when CO2 flux over North America is at its lowest. 

The answer lies in the forests, and in photosynthesis.  When it's summer, the northern hemisphere - the boreal and temperate forest regions - becomes biologically more active and consumes more CO2 than it produces.  In the winter, photosynthetic activity declines, and CO2 emissions climb.  CO2 concentrations, in other words, seem to respond less to fuel consumption patterns than to seasonal variation leading to change in the metabolic rates of plants.

Bottom line is this: as these satellite observations illustrate, we understand an awful lot less about where CO2 comes from and where it goes than we purport to understand. Maybe the caliphs of carbon at Durban [and Doha! -ed.] ought to consider taxing states that don't grow enough trees per capita.

Yeah - can you imagine anyone from Qatar agreeing to that?



Tuesday, December 4, 2012

3 February 2012 – That’s no moon…


You may recall that shortly before Christmas I sent around a message in which I discussed the design, developmental work and testing that had been done on the Vought SLAM - the nuclear ramjet-powered, H-bomb sowing flying leviathan that was one of many unbelievable but terrifyingly realistic weapons systems dreamed up by atomic eggheads in the 1950s and 1960s.  Not surprisingly, this little trot down memory lane sparked a good many comments, most of them concerning the sheer lunacy of creating something that carried a belly-full of nuclear weapons, irradiated anything it flew over, and had a virtually unlimited range.  In one subsequent conversation, however, the point came up that, with such maniacal inventions cluttering up our collective history, there didn't seem to be much point in unleashing speculation in an attempt to posit the sorts of innovations ("disruptive technologies", if you like) that might pop up in the future.  This is not to suggest that speculation isn't fun, just that there isn't much point in it - particularly when there's no way to predict where technology will go, and especially when we're so woefully ignorant about our own past, and haven't figured out how to deal with things that we ourselves invented half a century ago, but just somehow didn't get around to putting into production.  We don't have to go to the history of the space race for such examples; we only need to look into our own archives.

For example, we're all familiar with HEAT rounds - they've been around since WWII, and are fairly simple in concept.

The British PIAT - Projector, Infantry, Anti-Tank - relied on a HEAT warhead to (occasionally) penetrate enemy armour.  A HEAT warhead consists of an explosive charge with a conical well in the centre, lined with metal (usually copper).  The charge is initiated by a base fuze.  In the case of the PIAT warhead (below), the projectile is fired at a target; the extended probe on the nose fuze ("transit plug") transmits the shock of impact to the fuze at the base of the HE charge.  When the HE charge detonates, the shock wave compresses the copper cone into a jet of molten metal travelling at the speed of the explosion - roughly 7,000 m/s in the case of a conventional TNT or Composition B fill.  The liquid metal jet penetrates the armour of the target and does corresponding damage to the interior of the vehicle, and its crew.  HEAT rounds are very effective, which is probably why they continue to constitute part of the basic load (along with kinetic penetration munitions, like APFSDS) of main battle tanks and armoured fighting vehicles even today.  They also continue to make fantastic infantry AT weapons; virtually all current light, medium and heavy AT missiles and rockets, from the venerable RPG-7 to the modern TOW-2, use HEAT warheads.
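Speaking of math being awesome: the jet's effectiveness can be roughed out with the classic hydrodynamic penetration approximation, in which penetration depth scales with jet length times the square root of the jet-to-target density ratio. Here's a hedged sketch - the jet length and densities below are illustrative assumptions, and real jets stretch and break up, so treat this as an idealized upper bound, not an engineering model:

```python
import math

# Hydrodynamic (Bernoulli) approximation for shaped-charge penetration:
#   depth ≈ jet_length * sqrt(rho_jet / rho_target)
# Densities are textbook values; the 30 cm jet length is a made-up
# illustration, not data for any real warhead.
RHO_COPPER = 8960.0  # kg/m^3 - jet material
RHO_STEEL = 7850.0   # kg/m^3 - steel armour, approximately

def penetration_depth(jet_length_m: float,
                      rho_jet: float = RHO_COPPER,
                      rho_target: float = RHO_STEEL) -> float:
    """Ideal hydrodynamic penetration depth in metres."""
    return jet_length_m * math.sqrt(rho_jet / rho_target)

# A notional 30 cm copper jet against steel:
print(round(penetration_depth(0.30), 3))  # 0.321 m
```

Note what the formula implies: because copper is denser than steel, the jet penetrates slightly more than its own length - which is why disrupting or dispersing the jet (the layered-armour approach) is such an effective countermeasure.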

Defending against HEAT rounds requires different strategies.  First, you can keep the jet away from your armour plate.  That means hanging something on your vehicle to make the incoming round detonate further away.  Second, you can keep the jet from forming; one way of doing so is to use explosive reactive armour panels, which detonate when the HEAT round strikes them, destroying the round as the jet is forming.  Third, you can thicken up your armour (bearing in mind the requirement that the vehicle still has to be able to move and carry stuff).  And fourth, you can try to disrupt the jet and prevent it from penetrating all the way through to the interior of the vehicle (which led to layered armour, with various materials sandwiched between plates to disperse the jet horizontally).
Research in the 1980s and later on took the HEAT concept somewhat further, into explosively formed projectiles (EFPs, also known as self-forging fragment projectiles).  During my first visit to Suffield as a staff officer back in the early 90's, I was shown test fragments and videos from trials on a new type of experimental munition: a scaled-up version of an EFP.  By thickening the conical well liner in the explosive charge, or by changing to a different, tougher metal than copper (e.g., iron), the charge, when detonated, would - instead of forming a liquid metal jet - compress the metal cone into a slug moving at very high speed.  The slug would not be affected by stand-off detonation mechanisms or explosive-reactive armour panels, and layering armour to disperse a metal jet horizontally wouldn't be much help. 
Moreover, you could make the slug big.  Really, really big.

THIS big.  That's from a test at Suffield back in the 90's.  I recall handling something like this during a visit.  It was more than a foot long and weighed about 30 pounds.  Imagine that thing coming at you at several thousand metres per second.  Creating them, by the way, is an exercise in precision machining backed up by mathematics.  Here's an image from a DRES paper from 1995 (by one of our own colleagues - see note A) on modelling EFPs:

Note the similarities - and the caption which states that the mathematical models were confirmed by experimentation.  That hunk of metal started out looking like a wok about an inch thick, and after being whapped with a couple dozen kilos of HE, ended up looking like the lawn dart from Hell.  Math is awesome.
There's no point in going into too much more detail on EFPs, because that isn't what I really wanted to talk about in this message anyway.  I simply wanted to emphasize the fact that this technology is now old - so old that the Iraqi insurgents, al Qaeda, and other jihadist adversaries have adapted self-forging fragment technology to off-road mines and IEDs, and we're still having a heck of a time dealing with it.  What I'm getting at is that we don't need to invent science-fictiony "future" threats like "tunable weapons" and "gray goo nanobots" and "hyper-empowered individuals" if we're already facing things invented decades ago, but that have got us completely boggled.
Which takes me to today's topic - the Death Star.  Or at least the Soviet equivalent, Polyus. 

A few years back I penned a tech note looking at the arms control implications of space testing missions, specifically the October 2009 LCROSS experiment in which NASA slammed a rocket body into the Moon as part of its search for water on the lunar surface. The paper attracted its fair share of mocking laughter due to the title, which I wrote in jest ("Bombing the Moon"), but anyone who'd taken a moment to read the thing - it wasn't long - would have realized that I was trying to point out the implications of arms control treaties, agreements and regimes for otherwise legitimate space exploration and testing exercises, and vice versa. More knowledgeable individuals with a higher security clearance who read that note would have recognized that I was trying to discuss in synecdoche a much more profound incident with significant legislative implications.

Of course, these days most folks don't seem to go in for specialized knowledge, and those who do often seem to lack the security clearance (or the simple interest) to delve deeper into important, paradigm-altering problems that actually impact us on a daily basis. People styling themselves "scientists" seem to prefer to fiddle with models rather than data and evidence, blathering on in bland, meaningless generalities devoid of any linkage to the real world rather than grappling with current problems. I guess that's easier and safer. Whether it's anything more than a complete and utter waste of time and taxpayer money, on the other hand...that's for other folks to decide.

But I digress. In the course of that tech note, I discussed the arms control prohibitions against space-based weapons, briefly mentioning the 1987 launch of an 80-tonne orbital object by the USSR.  This vehicle, it has been suggested, was to have been the forerunner of a series of orbital battle platforms intended to neutralize the US Strategic Defence Initiative systems (which of course were never deployed). 

Polyus failed to achieve orbit and ended up in the Pacific Ocean, and the Soviets never tried again - but the point is, they tried once.  Polyus wasn't some postulated "disruptive technology" or theorized "future threat". It was very real. 

And yes, I know the above picture has "MIR" on the side of the big black thing; according to the official article on Polyus from the Buran website, MIR space station modules were used in its construction.  Here's a pic of the vehicle on the launch pad at Baikonur in 1987; the "Polyus" name is clearly visible on the side (you can sort of see it in the colour pic above, too):

For the sake of reference, the Polyus vehicle in the image above is 40 m long, about 4 m in diameter, and weighed 80 tons.  The Space Shuttle Orbiter is 37 m long, and weighs about 70 tons empty (its gross liftoff weight is about 109 tons).  So this was no mere firecracker.

How real was all this?  Well, real enough that Mikhail Gorbachev showed up at Baikonur on 11 May 1987 to see the thing shortly before it was launched.

According to one news report, one of the purposes of Gorby's visit was to confirm that Polyus was not carrying any weapons.  Did it, or didn't it?  The actual story is a little hard to get a grip on; there are numerous pictures available from different archives, and various articles published by project personnel over the years tell different stories.  Schematics abound on the innerwebz:

 If your Russian is as good as mine, you won't have gotten any of that.  Here's an alleged translation, according to an article penned by one Ed Grondine:

There isn't much in the way of empirical support for Grondine's assertions.  Most of what is available in the public domain about Polyus comes from official websites (for example, Buran), which don't mention self-defence armaments, much less "nuclear space mines".  A lot of the funkier stuff comes from a 2005 article by Konstantin Lantratov, a former press officer in the Russian space industry, entitled "Star Wars That Didn't Happen".  According to the Buran website, for example, the Polyus vehicle carried ten separate scientific experiments - the first of which was testing the USSR's ability to orbit the super-heavy Polyus itself.  Interestingly, the site contains dozens of photos of the spacecraft in the assembly stages; it appears to have been cobbled together out of spare parts:

The service block looked like a "Salyut" slightly modified for this task and was made up from parts of the ships "Cosmos-929, -1267, -1443, -1668" and from modules of MIR-2 station. In this block took place the management systems and on-board displacement, the telemetric control, the radiocommunication, the heating system, the antennas and finally the scientific installations. All the apparatuses wich not supporting the vacuum were installed in the hermetic section. The part of the engines made up of 4 propulsion engines, 20 auxiliary engines for stabilization and the orientation, 16 precision engines, as well as tanks and pneumo-hydraulics conduits. Lastly, the production of electricity was made by solar panels which were spread when Polyus was into working orbit. (Note B)

The size, design and components of the vehicle - not to mention the secrecy with which it was fabricated and launched (which was not at all uncommon during the Cold War, remember) - would naturally spark all manner of conspiracy theories.  According to Buran, the vehicle contained large quantities (420 kg) of xenon and krypton in 42 cylinders of 32 L capacity, with an injector to squirt the gas into the upper atmosphere to "generate ionized signals with long waves".  Grondine argues that the purpose of this was to produce light by fluorescence, in order to signify that a container (possibly holding a nuclear space mine?) had been launched without generating radio energy, which could be tracked.  According to other sources (e.g., the always infallible Wikipedia), the gases were intended to be used to test, with the appearance of innocence, the venting apparatus for a zero-torque exhaust system for a 1 megawatt carbon dioxide laser intended to damage Strategic Defence Initiative satellites.
Grondine also commented, as many others did, on the "optically black shroud" covering the whole thing.  Painting a space object black is one way to make it more difficult to see via reflected light - although the point of doing so when you've got huge solar panels sticking out of the sides of the thing escapes me.  Also, painting it black would tend to make it hot, as it would absorb rather than reflect solar radiation; and unless the thing incorporated stealth technology, it would still be easily visible by radar, which is how SpaceCom tracks large orbital objects anyway.
The fate of Polyus was in any event not a happy one.  It was launched on 15 May 1987, two days after Gorby's visit to Baikonur ended.  The Buran website has a comical description of why the GenSec missed the launch:

The first launch of Energia and Polyus was so important for the direction of the party that the General Secretary of the Central Committee of the Communist Party itself, Mikhaïl Sergeevich Gorbatchev, went. However, it is well-known that any apparatus, so simple is it, have a strong probability of breaking down during a demonstration or in the presence of VIPs, this is why the Management committee had decided (on May 8) to delay the departure on May 15, under pretext of technical problems, knowing that M.S. Gorbatchev could not remain because it had a voyage to the head office of UNO at New York.(Note B)

Their precautions turned out to be well-founded.  Because the Energiya had been designed with hang-points for the Buran space shuttle system, Polyus had to use the same connection mechanisms.  This led to it being mounted backwards, i.e. with the main thruster engines facing forward, resulting in a complicated mission profile.  In order to achieve orbit, after about 8 minutes into the flight program, at an altitude of about 110 km, the Polyus would jettison its engine shroud, separate from the Energiya booster, and execute a 180 degree turn using its thrusters.  Once this was complete, about 15 minutes into the flight and at an altitude of about 155 km, it would fire its main engines periodically to level the craft, and eventually achieve a stable orbit at 280 km altitude, by about 30 minutes after launch.
That's not what happened.  Only one of the positioning thrusters functioned, and the Polyus, instead of making a 180 degree rotation, made a full 360, leaving the main engines pointing forward.  Instead of accelerating the craft into orbit, the engines decelerated it, and Polyus deorbited into the Pacific Ocean, reportedly landing in water that was several kilometres deep.  According to open sources, the spacecraft was never retrieved.
So, Polyus was real.  The Soviets really built it, and they really launched it.  Did they arm it? Was it supposed to be the first real space battle station?  Would it have worked?  A 1-megawatt laser isn't much in atmosphere, where blooming and attenuation quickly destroy beam coherence; but in space, it might be fairly effective over a reasonably long range.  Could it also have carried "nuclear space mines", presumably for use against US orbital assets?  I think a more important question is, could it have carried nuclear warheads as part of a fractional orbital bombardment system, or FOBS?  That was one of the big worries of the 1960s, and it was one of the key reasons that the US and USSR negotiated the 1967 Outer Space Treaty, which prohibited placing "nuclear weapons or other weapons of mass destruction" either in orbit or on celestial bodies.(Note C - and here we are back at the "Bombing the Moon" technical note again. Funny how this arms control nonsense keeps coming back to haunt us. Almost like it was relevant or something.) 
Would the Soviets have broken the OST? Well, when you can't figure out why somebody's doing what they're doing, or whether they're likely to be doing something they shouldn't, you've got two choices: pull a guess out of your nether regions (the preferred option for "analysts" who don't know anything about anything and think that history is "stuff that's in books"); or use actual evidence. In such cases, the only evidence we have to go on is historical precedent - i.e., what have the suspects done in the past, and why.  Would the USSR have abrogated the 1967 OST by placing nuclear weapons in orbit?  Well, they signed the 1972 Biological and Toxin Weapons Convention, which prohibited producing biological weapons...and then went on to build the biggest biological weapons complex in the world, churning out weaponized anthrax, smallpox, and a host of other pathogens literally by the metric tonne.  By the late 1970s, the USSR was consuming 400,000 fresh eggs per week simply to incubate the weaponized India-1 strain of Variola Major, and had developed refrigerated, heat-dissipating ICBM warheads specifically designed to keep viral and bacterial agents alive during re-entry.
So you could say that, when it comes to the former USSR and its adherence to non-proliferation, arms control and disarmament conventions, there are some legitimate trust issues.
Soviet-era fermenters in Building 221 at Stepnogorsk, Kazak SSR.  Fool me once, shame on you.  Fool me twice...

I guess the final take-away from this is that when it comes to trying to figure out what a potential enemy might be able to do in the near future, one of the best guides is knowing what they've done to you in the near past.  If nothing else, the existence of things like the Vought SLAM and the Polyus Space Battle Station should give us a smidgeon of perspective on some of the prerequisites and challenges involved in creating massive and potentially threatening items of military hardware.  In other words, if we want to figure out whether somebody might put an orbital battle station in Low Earth Orbit and use it to dazzle or destroy our satellites or FOB a nuke onto one of our cities, the first thing we should do is make a list of folks who (a) can build space stations, (b) have a heavy-lift rocket capability, and (c) don't like us.  The intersection in that Venn diagram is where we ought to start looking. 
And if the intersection is empty, maybe we shouldn't waste our time making up non-existent things to fill it.
Anyway, if anyone wants to read Lantratov's article and feels like slogging through 28 pages of "Google-translated" grammar, just let me know.  He gives all the details about cannons, targets, gas generators, and mentions that the black finish on the vehicle was to help maintain working temperature by absorbing solar energy.  It's a cornucopia of awesome, and by the time you're finished reading it you'll be muttering "Commence primary ignition!" under your breath.
Cheers - and may the farce be with you!


A) The paper is available from the DRDC online archive.
C)  The OST also prohibits laying claim to celestial terrain. 

Thursday, November 29, 2012

Silviu the Thief - Now Available!

Hello all,

My new book, Silviu the Thief - the first book in the Hero's Knot series - is now available for sale at Smashwords and Amazon.

As always, you can also find my books via my website,

I had a lot of fun cranking this one out during National Novel Writing Month, and I'm looking forward to following the adventures of Raven/Silviu through at least two more books.

I hope you enjoy it!


- Don

26 January 2012 – Sunshine, staple crops and rainbows


Those of us living in the Ottawa area got seriously ripped off on Tuesday.  It was cloudy, and we missed the best light show in a decade.

Late on Sunday 22 January, the Sun erupted with an M8.7 class solar flare.  The resulting coronal mass ejection (CME), a burst of highly energetic protons, left the Sun's surface at about 2,000 km per second, and impacted the Earth at a little after 1000 hrs EST on Tuesday.  The Space Weather Prediction Centre at the National Oceanic and Atmospheric Administration in the US rated the storm as an S3 (the scale goes up to S5), making it the strongest space weather event since 2005, possibly 2003.  The resulting geomagnetic storm was rated G1 (minor).

Thanks to our atmosphere, solar storms can't harm humans, but when charged particles hit the Earth's magnetosphere, they can play hob with radio communications, and at high energy levels can damage satellites.  They also produce bursts of charged particles in the upper atmosphere, leading to tremendous auroral displays.  NASA had predicted that the Aurora Borealis could be visible at latitudes as low as Maine.  The following image was taken in Sweden:

See what I mean?  Awesome.  Elsewhere, the aurora was reported to have looked like a rainbow in the night sky.  Too bad we missed it here.

Who cares, though, right?  I mean, it's pretty, sure, but where's the relevance to what we do?  Well, this kind of solar activity is strategically very relevant, in a couple of ways.  First, powerful geomagnetic storms can impact humans - not through ionizing radiation, but through their immediate electromagnetic effects.  Geomagnetic perturbations can induce powerful currents on Earth.  Here's a plot from a Swedish lab showing the arrival of the CME:

Massive geomagnetic events can wreak havoc with power grids because, pursuant to Faraday's Law, a time-varying magnetic field induces a current in a conductor - and the surface of the world is positively covered with conductors.  A strong enough magnetic perturbation can create enough current in high-tension lines to overload switches and cause circuit breakers to blow.  Take a look at the magnetometer declination on that chart; last Tuesday's event induced a perturbation about 8 times the average maximum perturbation experienced over the hours preceding the event.  Frankly, the effects shouldn't be surprising; we're talking about billions of tonnes of charged particles hitting the planet at millions of kilometres per hour.  The kinetic energy alone of such an impact is staggering.  It's enough to bend the planet's magnetosphere out of shape.
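You can put rough numbers on the induction argument with nothing more than Faraday's law. The values below - the rate of change of the field and the effective loop geometry of a transmission line and its earth return - are purely illustrative assumptions on my part (real geomagnetically-induced-current models work from ground conductivity and a geoelectric field, not a simple loop), but they show why grid operators worry:

```python
# Crude Faraday's-law estimate of the EMF a geomagnetic disturbance can
# drive around a transmission-line / earth-return loop:
#   EMF = dPhi/dt ≈ (dB/dt) * loop_area
# All numbers are illustrative assumptions, not measured values.
DBDT = 500e-9          # T/s - a severe storm can reach hundreds of nT/s
LINE_LENGTH_M = 100e3  # a 100 km high-tension line
LOOP_WIDTH_M = 10e3    # assumed effective width of the line/earth loop

area_m2 = LINE_LENGTH_M * LOOP_WIDTH_M
emf_volts = DBDT * area_m2
print(f"{emf_volts:.0f} V")  # 500 V
```

Hundreds of volts of quasi-DC drive around a loop whose resistance is a handful of ohms means hundreds of amps of unwanted current - more than enough to saturate transformer cores and trip breakers.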

There weren't any reports of power grid failures on Tuesday, but remember, this was only a G1 storm.  A CME that hit the Earth on 13 March 1989 knocked out Quebec's power grid for nine hours.  That one temporarily disabled a number of Earth-orbiting satellites, and folks in Texas were able to see the northern lights.  Another big geomagnetic storm in August of that same year crippled microchips, and shut down the Toronto Stock Exchange.

Those were bad, but the Carrington Event was something else entirely.  Back on 1 September 1859, British astronomers Richard Carrington and Richard Hodgson independently recorded a solar superflare on the surface of the Sun.  The resulting CME took only 18 hours to travel to the Earth, giving the mass of charged particles a velocity of ca. 2300 km/second, somewhat faster than last Tuesday's bump.  When the CME hit, it caused the largest geomagnetic storm in recorded history.  According to contemporary accounts, people in New York City were able to read the newspaper at night by the light of the Aurora Borealis, and the glow woke gold miners labouring in the Rocky Mountains.  The more immediate impact of the Carrington Event, though, was the fact that the intense perturbations of the Earth's magnetic field induced enormous currents in the telegraph wires that had only recently been installed in Europe and across North America.  Telegraph systems failed all over both continents; according to contemporary accounts, sparks flew from the telegraph towers, telegraph operators received painful shocks, and telegraph paper caught fire.
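The ~2,300 km/s figure follows directly from the 18-hour transit time; here's the one-line check, taking the mean Sun-Earth distance as about 149.6 million km:

```python
# Average Sun-Earth transit speed of the Carrington CME, from the
# 18-hour transit time quoted above.
AU_KM = 149.6e6        # mean Sun-Earth distance, km
TRANSIT_HOURS = 18.0

speed_km_s = AU_KM / (TRANSIT_HOURS * 3600)
print(round(speed_km_s))  # 2309 - i.e., ca. 2,300 km/s
```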

Geomagnetic storms like the Carrington Event are powerful enough to leave traces in the terrestrial geology.  Ice core samples can contain layers of nitrates that show evidence of high-energy proton bombardment.  Based on such samples, it's estimated that massive solar eruptions like the Carrington Event occur, on average, every five hundred years or so.  Electromagnetic catastrophists (I've derided them before as "the Pulser Crowd") point to the Carrington Event, and lesser impacts like the 1989 solar storm, as an example of the sort of thing that could "bring down" Western civilization by crippling power grids (and they also tend to posit that a deliberate EMP attacker could accomplish the same thing via the high-altitude detonation of a high-yield thermonuclear weapon over the continental US - as if the US strategic weapons system weren't the one thing in the entire country that was hardened against EMP from top to bottom).  It hasn't happened yet - and the protestations of the Pulsers notwithstanding, the grid is a lot more robust than the old telegraph lines used to be, if only because we understand electromagnetism a lot better now than they did back in the days of beaver hats, mercury nostrums, and stock collars.

That said, this sort of thing does tend to make one look at our ever-benevolent star in a new light (no pun intended).  The second reason this sort of thing is strategically relevant is because recent studies and observed data seem to be confirming some disturbing solar activity trends identified a few years ago - and the potential consequences aren't pleasant.

As I've mentioned before, one of the perplexing claims in the IPCC's list of assumptions upon which all of the general circulation ("climate") models are built is that the aggregate impact of "natural forcings" (a term that the IPCC uses to lump together both volcanic aerosols and all solar forcings) is negative - i.e., that the sum total of volcanic and solar activity is to cool the Earth, rather than warm it.  The IPCC also argues that these natural forcings “are both very small compared to the differences in radiative forcing estimated to have resulted from human activities.” [Note A] The assumption that solar activity is insufficient to overcome the periodic cooling impact of large volcanic eruptions (which, let's face it, aren't all that common - the last one to actually have a measurable impact on global temperatures was the eruption of Mt. Pinatubo on 15 June 1991) is, to say the least, "unproven."  The IPCC has also dismissed the Svensmark hypothesis (the argument that the warming effect of seemingly minor increases in solar activity is magnified by the increase in solar wind, which interrupts galactic cosmic radiation and prevents GCRs from nucleating low-level clouds, thereby reducing Earth's albedo, and vice-versa - you'll recall this from previous COPs/TPIs), which, unlike the hypothesis that human CO2 emissions are the Earth's thermostat, is actually supported by empirical evidence.

This is probably why the IPCC's climate models have utterly failed.  Temperature trends are below the lowest IPCC estimates for temperature response to CO2 emissions - below, in fact, the estimated temperature response that NASA's rogue climatologist, James Hansen, predicted would occur even if there were no increase in CO2 emissions after 2000. 
(Dotted and solid black lines - predicted temperature trends according to Hansen's emissions scenarios.  Blue dots - measured temperatures.  Red line - smoothed measured temperature trend.)

The lowest of the black dotted lines is what Hansen predicted Earth's temperature change would be if global CO2 emissions were frozen at 2000 levels.  Obviously, that hasn't happened; in fact, global CO2 concentrations have increased by 6.25%.  Temperatures haven't increased at all.  In other words, while CO2 emissions have continued to skyrocket and atmospheric CO2 concentrations have continued to increase, actual measured temperatures have levelled off, and the trend is declining.  Don't take my word for it; the data speak for themselves. 
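For scale, here's a quick back-of-envelope sketch of what that 6.25% figure implies, assuming a year-2000 baseline of roughly 369 ppm (an approximate Mauna Loa annual mean - my illustrative assumption, not a figure from the sources above):

```python
# Back-of-envelope: what a 6.25% rise implies for atmospheric CO2
# concentration, given an ASSUMED year-2000 baseline of ~369 ppm
# (approximate Mauna Loa annual mean; illustrative only).
baseline_ppm = 369.0
increase = 0.0625  # the 6.25% increase cited above

implied_ppm = baseline_ppm * (1 + increase)
print(round(implied_ppm))  # ~392 ppm
```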

I particularly like the next graph, which shows the last 10 years of average global temperature trends as measured by the most reliable instruments available to mankind:

The satellite temperature measurement data are maintained by the University of Alabama in Huntsville, and the short puce line at the left of the graph shows 2012 temperatures to date.  That's right - according to satellite measurements, this is the coldest winter in at least a decade.  Have you heard anything about that from the mainstream media?  Have you heard anything about it from NASA?  Probably not; Hansen just released another statement shrieking that 2011 was the "11th-warmest" year on record.  This was after modifying the GISS temperature dataset - again - to make the past colder, and the present hotter.  Seriously, where apart from climate science is it considered acceptable to change the past to conform to your theories about the future?  Well, apart from communist dictatorships, I mean.

Why aren't the data following the GCM predictions?  Well, let's look back a few years.  In a paper I wrote back then, I took a look at what solar activity trends had to suggest about the likelihood of continued, uninterrupted global warming.  Here's an excerpt.  Bear with me here, and please excuse the dated charts; the paper, after all, was written in January 2009, and updated in April 2009, using sources and data available to that point:

Solar physicists have begun to speculate that the observed, and extremely slow, start to solar cycle 24 may portend an unusually long, weak solar cycle.  According to NASA, in 2008 the Sun experienced its “blankest year of the space age” – 266 spotless days out of 366, or 73%, a low not seen since 1913.  David Hathaway, a solar physicist at NASA’s Marshall Space Flight Center, noted that sunspot counts were at a 50-year low, meaning that “we’re experiencing a deep minimum of the solar cycle.”[1]  At time of writing, the figure for 2009 was 78 spotless days out of 90, or 87%, and the Goddard Space Flight Centre was calling it a “very deep solar minimum” – “the quietest Sun we’ve seen in almost a century.”[2]
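The spotless-day percentages are simple ratios of NASA's counts; for anyone who wants to check the arithmetic:

```python
# Spotless-day fractions, using the NASA counts quoted above.
spotless_2008, days_2008 = 266, 366  # 2008 was a leap year
spotless_2009, days_2009 = 78, 90    # 2009 to date, as of 1 April 2009

print(round(100 * spotless_2008 / days_2008))  # 73 (percent)
print(round(100 * spotless_2009 / days_2009))  # 87 (percent)
```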

This very low solar activity corresponds with “a 50-year record low in solar wind pressure” discovered by the Ulysses spacecraft.[3]  The fact that we are simultaneously experiencing both extremely low solar wind pressure and sustained global cooling, incidentally, may be considered prima facie circumstantial corroboration of Svensmark’s cosmic-ray cloud nucleation thesis.

Figure 17 - Solar cycle lengths 1750-2007; and the 2 longest cycles of the past 300 years [4]

Measured between minima, the average length of a solar cycle is almost exactly 11 years.  The length of the current solar cycle (Solar Cycle 23, whose mathematical minimum occurred in May 1996) was, as of 1 April 2009, a little over 12.9 years.[5]  This is already well over the mean, and at time of writing, the minimum was continuing to deepen, with no indication that the next cycle has begun.[6]  Only one solar cycle in the past three centuries has exceeded that length - solar cycle 4, which lasted 13.66 years, from 1784 to 1798 (see figure 17).  This was the last cycle before the Dalton Minimum, a period of lower-than-average global temperatures that lasted from approximately 1790 to 1830. The Dalton Minimum was the last prolonged “cold spell” of the Little Ice Age, from which temperatures have since been recovering (and which, as noted above, the IPCC and the proponents of the AGW thesis invariably take as the start-point for their temperature graphs, in a clear demonstration of the end-point fallacy in statistical methodology).[7]  On the basis of observations of past solar activity, some solar physicists are predicting that the coming solar cycle is likely to be weaker than normal, and could result in a period of cooling similar to the Dalton Minimum.[8]
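The 12.9-year figure is simply the elapsed time from the May 1996 minimum to the date of writing; a minimal check using the dates given above:

```python
from datetime import date

# Elapsed length of Solar Cycle 23, from its May 1996 mathematical
# minimum to the date of writing (1 April 2009).  Day-of-month is
# nominal; the minimum is dated only to the month.
minimum = date(1996, 5, 1)
as_of = date(2009, 4, 1)

years = (as_of - minimum).days / 365.25
print(round(years, 1))  # 12.9
```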

If we were to experience a similar solar minimum today – which is not unlikely, given that, as noted above, we are emerging from an 80-plus-year Solar Grand Maximum, during which the Sun was more active than at any time in the past 11,000 years – the net result could be a global temperature decline on the order of 1.5 degrees over the space of two solar cycles, i.e. a little over two decades.[9]  According to Archibald, during the Dalton Minimum, temperatures in central England dropped by more than a degree over a 20-year period, for a cooling rate of more than 5ºC per century, while one location in Germany – Oberlach – recorded a decline of 2ºC during the same period (a cooling rate of 10ºC per century).[10]  Archibald predicts a decline of 1.5ºC over the course of two solar cycles (roughly 22 years), for a cooling rate of 6.8ºC per century.  This would be cooling at a rate more than ten times faster than the warming that has been observed since the mid-1800s.  “At this rate,” Monckton notes wryly, “by mid-century, we shall be roasting in a new ice age.”[11]
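The per-century cooling rates above are straight linear extrapolations of decadal-scale changes; the arithmetic, for the record:

```python
def rate_per_century(delta_deg_c, years):
    """Linearly extrapolate a temperature change to a 100-year rate."""
    return delta_deg_c / years * 100

# Figures as cited from Archibald above.
print(round(rate_per_century(1.5, 22), 1))  # 6.8 - two solar cycles
print(round(rate_per_century(1.0, 20), 1))  # 5.0 - central England, Dalton Minimum
print(round(rate_per_century(2.0, 20), 1))  # 10.0 - Oberlach, Germany
```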

Well, since predictive analysis ought to be subject to review, what do things look like today, three years after that paper was written?  Let's turn to NASA's David Hathaway, who - as a solar physicist - continues to track and refine predictions for the depth and duration of the next solar cycle.  Back in 2006, Hathaway, looking at the Sun's internal "conveyor belt", predicted that the next solar cycle - #24, the one we're currently in - would be stronger than cycle #23, and that #25 would be weaker.
(Source: NASA, David Hathaway, "Solar Cycle 25 peaking around 2022 could be one of the weakest in centuries", 10 May 2006 [Note B])

Hathaway's prediction was based on the conveyor belt model (look it up if you're interested).  In 2010, two solar physicists, Matthew Penn and William Livingston of the National Solar Observatory, developed a physical model based instead on the measured magnetic fields of sunspots.  Based on their model, which seems to have greater predictive validity, they argued that the sunspot number would not be low (75 or so) as Hathaway predicted, but exceptionally low - less than a tenth of that, for a total peak sunspot number of around 7, the lowest in observed history.

Independent of the normal solar cycle, a decrease in the sunspot magnetic field strength has been observed using the Zeeman-split 1564.8nm Fe I spectral line at the NSO Kitt Peak McMath-Pierce telescope. Corresponding changes in sunspot brightness and the strength of molecular absorption lines were also seen. This trend was seen to continue in observations of the first sunspots of the new solar Cycle 24, and extrapolating a linear fit to this trend would lead to only half the number of spots in Cycle 24 compared to Cycle 23, and imply virtually no sunspots in Cycle 25. [Note C]

The authors predict that umbral magnetic field strength will drop below 1500 Gauss between 2017 and 2022.  Below 1500 Gauss, no sunspots will appear.  This is what the predictive chart from their paper looks like:

For the record, that's not a happy prediction.  A maximum sunspot number of 7 is virtually unheard-of in the historical record.  Using the Penn-Livingston model and NASA's SSN data, David Archibald, another solar expert, has projected sunspot activity over cycle 25...and this is what it looks like [Note D]:

First, note that measured data have already invalidated Hathaway's 2006 prediction about solar cycle 24: it is proving to be considerably weaker than cycle 23.  If Penn and Livingston are correct, however, cycle 25 could be less than 1/5th the strength of cycles 5 and 6, which in the first quarter of the 19th Century marked the depths of the Dalton Minimum.  During this period, as noted above, temperatures plummeted, leading to widespread crop failures, famine, disease, and the delightful sociological and meteorological conditions that entertained Napoleon during the retreat from Moscow, and that Charles Dickens spent most of his career writing about.  An SSN of less than 10, in fact, would be considerably lower than the Dalton Minimum; it would put the world into conditions not seen since the Maunder Minimum in the 17th Century, which was even worse.
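For what it's worth, the mechanics of the Penn-Livingston extrapolation - fit a straight line to measured umbral field strengths, then solve for the year it crosses the 1500 Gauss threshold - can be sketched in a few lines.  The data points below are illustrative placeholders, not the actual NSO Kitt Peak measurements:

```python
# Sketch of a Penn-Livingston-style extrapolation: fit a least-squares
# line to yearly mean umbral magnetic field strengths, then solve for
# the year the line crosses 1500 Gauss, below which sunspots no longer
# form.  NOTE: these (year, Gauss) pairs are illustrative placeholders,
# NOT the NSO data from the paper.
obs = [(1998, 2450), (2002, 2250), (2006, 2050), (2010, 1850)]

n = len(obs)
mean_x = sum(x for x, _ in obs) / n
mean_y = sum(y for _, y in obs) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in obs)
         / sum((x - mean_x) ** 2 for x, _ in obs))  # Gauss per year

# Year at which the fitted line reaches the 1500 Gauss threshold:
year_1500 = mean_x + (1500 - mean_y) / slope
print(round(year_1500))  # with these placeholder points: 2017
```

With a steady decline of 50 Gauss per year, the threshold falls in 2017 - the early end of the 2017-2022 window the authors give; a shallower slope pushes the crossing later.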

How significant could this be?  Well, look at the above chart.  The last time there was an appreciable dip in solar activity - cycle 20 (October 1964 to June 1976) - the smoothed SSN curve peaked at 110, more than ten times as strong as cycle 25 is expected to be...and the entire world went through a "global cooling" scare.  In 1975, Newsweek published an article entitled “The Cooling World”, claiming, amongst other things, that “[t]he evidence in support of these predictions [of global cooling] has now begun to accumulate so massively that meteorologists are hard-pressed to keep up with it.”  The article's author had some grim advice for politicians struggling to come to grips with the impending cooling: “The longer the planners delay, the more difficult will they find it to cope with climatic change once the results become grim reality.” [Note E]

Sound familiar? 

That was 36 years ago.  Now here we are in solar cycle 24, which is looking to be a good 20% weaker than solar cycle 20, which prompted all the cooling panic.  Temperatures are once more measurably declining.  Models of solar activity (models based on observed data, remember, not hypothetical projections of mathematically-derived nonsense) project that the modern solar grand maximum that has driven Earth's climate for the last 80+ years is almost certainly over.  This means that, based on historical experience, the coming solar cycles will probably be weak, and the Earth's climate is probably going to cool measurably.  "Global warming" is done like dinner.  To anyone with the guts to look at the data and the brains to understand it, there is no correlation whatsoever between average global temperature and atmospheric carbon dioxide concentrations (let alone the tiny proportion of atmospheric CO2 resulting from human activities).  The only questions left are (a) whether we are on our way to a Dalton-type Minimum, where temperatures drop a degree or two, or a much worse Maunder-type Minimum, where temperatures drop more than two degrees; and (b) whether we have the intelligence and common sense as a species to pull our heads out of our nether regions, look at the data, stop obsessing about things that are demonstrably not happening, and start preparing for the Big Chill. 

Based on observed data, I'm guessing we're definitely in for colder weather - but I'm doubtful that we have the mental capacity to recognize that fact and do something about it.  Like, for example, develop the energy resources that we're going to need if we're going to survive a multidecadal cold spell. 

Oh, and why is this relevant?  Well, because we live in Canada.  A new solar minimum - the Eddy Minimum, as some are beginning to call it - would severely impact Canada's agricultural capacity.  As David Archibald pointed out in a lecture last year [Note F], Canada's grain-growing belt is currently at a geographic maximum, thanks to the present warm period (the shaded area in the image below).  However, during the last cooling event (the one capped off by Solar Cycle 20, above), the grain belt shrank to the dotted line in the image.  A decline in average temperature of 1 degree - which would be consistent with a Dalton Minimum-type decline in temperatures - would shrink the grain belt to the solid black line; and a 2-degree drop in temperature would push that black line south, to the Canada-US border. 

In other words, if we were to experience a drop in temperatures similar to that experienced during the Maunder Minimum - which, if Solar Cycle 25 is as weak as Penn and Livingston suggest it might be, is a possibility - then it might not be possible to grow grain in Canada. 

This could be a problem for those of us who enjoy the simple things in life, like food.  And an economy.

So what we should be asking ourselves is this: What's a bigger threat to Canada's national security? 

- A projected temperature increase of 4 degrees C over the next century that, according to all observed data, simply isn't happening - but that, even if it were, wouldn't improve our agricultural capacity one jot, because that shaded area is a soil-geomorphic limit, and sunshine doesn't turn muskeg or tundra into fertile earth no matter how warm it gets; [Note G]

- A decline in temperatures of 1-2 degrees C over the next few decades that, according to all observed and historical data, could very well be on the way - and which, if it happens, might make it impossible to grow wheat in the Great White North?

You decide.  I'll be looking for farmland in Niagara.




27 January 2012 – Update to ‘Sunshine’, etc.


It figures that the same day I sent out a TPI featuring a 6-month-old slide, the author of the slide would publish an update.[Note A]  I thought I'd send it along, as the refinements to his arguments emphasize why we should be taking a closer look at the question of what might really be happening to the climate, and why.

David Archibald, who produced that grain-belt map I cited above, has refined his projection based on new high-resolution surface temperature studies from Norway, and on new solar activity data, including data on the cyclical "rush to the poles" of sunspot clusters, of which our understanding has improved considerably over the past five years.  Here's how one paper puts it:

“Cycle 24 began its migration at a rate 40% slower than the previous two solar cycles, thus indicating the possibility of a peculiar cycle. However, the onset of the “Rush to the Poles” of polar crown prominences and their associated coronal emission, which has been a precursor to solar maximum in recent cycles (cf. Altrock 2003), has just been identified in the northern hemisphere. Peculiarly, this “rush” is leisurely, at only 50% of the rate in the previous two cycles.”

So what?  Well, it means that cycle 24 is likely to be a lot longer than normal.  And what are the consequences of that, you ask?

If Solar Cycle 24 is progressing at 60% of the rate of the previous two cycles, which averaged ten years long, then it is likely to be 16.6 years long.  This is supported by examining Altrock’s green corona diagram from mid-2011 above.  In the previous three cycles, solar minimum occurred when the bounding line of major activity (blue) intersects 10° latitude (red).  For Solar Cycle 24, that occurs in 2026, making it 17 years long.

The first solar cycle of the Maunder Minimum was 18 years long.  That's the last time the world saw solar cycles as long as the coming cycles are projected to be.

For humanity, that is going to be something quite significant, because it will make Solar Cycle 24 four years longer than Solar Cycle 23.  With a temperature – solar cycle length relationship for the North-eastern US of 0.7°C per year of solar cycle length, temperatures over Solar Cycle 25 starting in 2026 will be 2.8°C colder than over Solar Cycle 24, which in turn is going to be 2.1°C colder than Solar Cycle 23.

The total temperature shift will be 4.9°C for the major agricultural belt that stretches from New England to the Rockies straddling the US – Canadian border.  At the latitude of the US-Canadian border, a 1.0°C change in temperature shifts growing conditions 140 km – in this case, towards the Gulf of Mexico. The centre of the Corn Belt, now in Iowa, will move to Kansas.

Emphasis added.
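Archibald's numbers chain together from that 0.7°C-per-year-of-cycle-length relationship; here's the arithmetic, using the figures exactly as quoted:

```python
# Archibald's chain of arithmetic, using the figures quoted above.
deg_per_cycle_year = 0.7  # C of cooling per extra year of cycle length
km_per_degree = 140.0     # growing-condition shift per 1.0 C at the border

drop_sc25 = deg_per_cycle_year * 4  # cycle 24 runs four years longer than 23
drop_sc24 = 2.1                     # cooling over cycle 24 vs. 23, as quoted
total_drop = drop_sc25 + drop_sc24

print(round(total_drop, 1))               # 4.9 C total
print(round(total_drop * km_per_degree))  # ~686 km shift toward the Gulf
```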

Now remember, that's the center of the corn belt.  What about the northern fringes of it?  Here's Archibald's updated grain belt map.  Note the newly-added last line:

"All over."  That's 25-30 years from now - or as some folks around this building like to call it, "Horizon Three".  And here we are, still talking about melting sea ice, thawing permafrost, and building deep-water ports in the Arctic.  Maybe we should be talking about building greenhouses in Lethbridge and teaching beaver trapping in high school.

Just something to think about the next time somebody starts rattling on about how the science is settled and global warming is now inevitable.  We'd better hope they're right, because the alternative ain't pretty.

Cheers...I guess.




Notes (From original post)

A)    4th AR WG1, Chapter 2, 137.

E)    Peter Gwynne, “The Cooling World”, Newsweek, 28 April 1975, page 64. 

G)    H/T to Neil for the "sunshine doesn't turn muskeg into black earth" line.

[1] NASA, “Spotless Sun: Blankest Year of the Space Age”, NASA Press Release, 30 September 2008 [].
[2] “Deep Solar Minimum”, 1 April 2009, [ 01apr_deepsolarminimum.htm].
[3] NASA, “Spotless Sun: Blankest Year of the Space Age”, ibid.  The Sun is also going through a 55-year low in radio emissions.
[4] Data obtained from the National Geophysical Data Centre of the NOAA Satellite and Information Service [].
[5] At time of writing the minimum for Solar Cycle 24 had not yet been established.  The longer Solar Cycle 23 continues, the more likely a prolonged period of cooling becomes.  For those wishing to perform their own calculations, all of the data on sunspot numbers (and much more) are available at the website of the National Geophysical Data Centre of the NOAA Satellite and Information Service [].

[6] Based on current science, the date for the solar minimum ending a prior cycle is generally determined from sunspot counts and is generally agreed by scientists post-facto, once the subsequent cycle is under way.  However, in addition to very low smoothed sunspot numbers, solar minima are also defined in terms of peaks in cosmic rays (neutrons) striking the Earth (because the Sun’s magnetic field, which shields the Earth from cosmic rays, is weakest during the solar minimum; for more information on this point, see chapter 5).  Because the neutron counts, at time of writing, were still increasing, it is unlikely that the solar minimum separating solar cycles 23 and 24 has yet been reached.  See Anthony Watts, “Cosmic Ray Flux and Neutron monitors suggest we may not have hit solar minimum yet”, 15 March 2009 [].  For anyone interested in charting the neutron flux data for themselves, these can be obtained from the website of the University of Delaware Bartol Research Institute Neutron Monitor Program [].

[7] Jeff Id, “Sunspot Lapse Exceeds 95% of Normal”, posting of 15 January 2009 [].  Id’s data, and the data on which this chart is based, are drawn from official NASA figures.
[8] C. de Jager and S. Duhau, “Forecasting the parameters of sunspot cycle 24 and beyond”, Journal of Atmospheric and Solar-Terrestrial Physics 71 (2009), 239-245 [].
[9] Archibald (2006), 29-35.
[10] Archibald, ibid., 31.
[11] Christopher Monckton, “Great is Truth, and mighty above all things”; valedictory address to the International Conference on Climate Change, 10 March 2009, 3 [ Great_ Is_Truth_and_Mighty_Above_All_Things.html].