All the news that fits
17-Mar-20
As Easy As Riding A Bike [ 3-Oct-18 11:08am ]

The gyratory system around Victoria station in Westminster has been a genuinely horrible place to cycle for as long as I can remember. Getting to and from the station, or cycling past it, involves dealing with multiple lanes of one-way motor traffic, zooming off towards Park Lane, or thundering south towards Vauxhall Bridge.

The gyratory makes absolutely no concessions to cycling. If, for instance, you want to get from the station to the safety of Cycle Superhighway 3 – central London’s flagship cycle route – you have to make your way around two sides of a terrifying triangle, holding a position in the right hand lane of traffic heading north onto Grosvenor Place, before taking primary position on the left hand side as you skirt the edge of Buckingham Palace.

Cycling from Victoria to CS3

Cycle Superhighway 5 should have arrived in this area from Vauxhall Bridge, and should – quite sensibly – have connected up with Superhighway 3 in the vicinity of Buckingham Palace. However, it seems to have stalled right on the boundary of (guess who!) Westminster City Council, leaving anyone attempting to get between the two to negotiate a mile or so of unpleasant roads without any mitigation for cycling whatsoever.

Right in the middle of the Victoria gyratory stands the new Nova development. An incidental detail is that one of the buildings here won 2017’s Carbuncle Cup for the UK’s ugliest building, but I doubt that anyone cycling past has any time to assess its aesthetic qualities, given that they are busily trying to stay alive. Like Superhighway 5, this development should have represented an opportunity to make the roads around Victoria a bit less lethal for anyone attempting to cycle here. There’s even a detailed 60-page Transport for London strategy document dating from 2014, the Victoria Vision Cycling Strategy (link opens a download automatically), which explicitly sets out the key challenges and requirements in the Victoria area, in the context of the then-Mayor’s Vision for Cycling.

However, while there have been some improvements in the area around the Nova development – in particular, widened footways, better public realm, and a surface-level crossing that has replaced a subway – it is unfortunate that, despite this golden opportunity to make some serious changes, cycling has been almost completely ignored as the roads have been rebuilt.

One of the biggest issues is that the gyratory around the Nova development has been retained. The new buildings still sit in the middle of what is effectively a giant multi-lane roundabout. The problem of trying to negotiate these roads without being diverted around hostile one-way systems remains, to say nothing of the total lack of protected space for cycling.

Buckingham Palace Road, 2017. New buildings, new footway, new trees, new road surface – the same three lanes of one-way motor traffic.

Cycling towards the camera here remains impossible. And when the bus lane is occupied, cycling away from the camera is – while possible – an unpleasant and potentially dangerous experience.

Buckingham Palace Road, summer 2018.

4-metre-wide bus lanes aren’t so great for cycling when they’re full of buses.

Much the same is true on Victoria Street, lying between Victoria station and the Nova development. Again, we have 2-3 lanes of one-way motor traffic thundering through here, exactly as before.

Victoria Street, looking east. The Nova development is on the left.

And again, this arrangement makes no concession for anyone trying to cycle east (away from the camera).

Worst of all, it introduces a significant collision risk at the junction itself, where I am standing to take the photograph. On the approach to the junction, a wide bus stand narrows down significantly, leaving perhaps a metre of width between the kerb and stationary vehicles as a ‘channel’ through which people can cycle to reach an inviting advanced stop line (ASL). The area in question is indicated with the arrow, below.

Approaching the junction on Victoria Street.

That ASL looks very inviting, but getting there could be very risky indeed. There’s absolutely no guarantee that any large vehicle progressing through the junction will remain a safe distance from the kerb. Three separate examples below, taken within the space of a few minutes.


Anyone cycling up to the lights – forced into a tight merge by the narrowing of the road, and tempted to advance by a cycle lane leading to an ASL – could very easily find themselves squeezed between a lorry, or a bus, and the kerb. If any of these vehicles is turning left, like the National Express coach in the photograph below, the consequences could be lethal.

Someone has already had a very narrow escape here, taken to hospital in a critical condition after going under a left-turning lorry at precisely this location.

From Get West London. The lorry is in nearly exactly the same position as the National Express coach in the previous photograph.

This is dreadful design, and it’s shocking that new road layouts like this are appearing right in the centre of our capital city, where there was a blank slate to do so much better.

It may not be apparent from these photographs, but the footway on this corner is now very large indeed – nearly twenty metres wide, at the apex.

This is obviously a very good thing, in its own right. A left-turn slip lane for motor traffic has been removed and replaced with this footway, making the junction far more attractive for anyone walking here. But it seems extraordinary that, simultaneously, so little thought has been given to the safety of people attempting to cycle through here. They are almost literally being thrown under the bus. At a location where the building-to-building width is 30 metres, it is simply unacceptable to squeeze people cycling into a tiny space where they are already ending up under the wheels of HGVs.

Blink and you’ll miss it. The tiny, narrow and dangerous concession to cycling in this enormous space.

How can things be going so wrong with brand-new road layouts? How can we be rebuilding roads with 2-3 lanes of one-way motor traffic, without any apparent thought for cycling?

The distinct, unavoidable impression created from the new roads around Victoria is that it seems sufficient to treat cycling as a mere afterthought once the road layouts and widened pavements have been planned. Once the kerb lines have been defined, all that’s left to do is to add a painted bicycle symbol in a box just behind the stop line, and perhaps a tokenistic line at the side of the road, where there isn’t any parking, or a bus lane. Even if that might make a dangerous junction even more dangerous.

That’s just not good enough. These roads could and should have been rebuilt with protected cycleways, allowing people to travel to and from the Vauxhall area and central London in safety, or from west London towards central London. Instead they are still being put in danger, and cycling here will continue to remain the sole preserve of the fit and brave.

UPDATE

Alex Ingram points out that, in addition to the Transport Initiatives/PJA 2014 report for Transport for London on cycling in Victoria, the Victoria BID also produced a report on public realm in the area in 2015. It has this to say –

Cycling in Victoria can feel dangerous and intimidating. High volumes of traffic on the Inner Ring Road and the associated Victoria gyratory have a significant negative impact on cycling through the area. One-way streets in general are a hindrance to the desire lines of cyclists and create longer and more difficult journeys.

… Cycle routes and safety are key considerations when upgrading streets and spaces … In April 2013 a cyclist was fatally injured during the morning rush hour at the junction between Victoria Street and Palace Street. A number of minor injuries have also occurred at junctions along Victoria Street and more serious cycling injuries around the Buckingham Palace Road-Lower Grosvenor Place junction. Here the fast-flowing multiple lanes of one-way traffic include many coaches and large service vehicles which create significant hazards. Major arteries such as Vauxhall Bridge Road and Grosvenor Place are also accident hotspots.

Doubtless many of you will have seen this video of a ‘near miss’ on the A38 in Bromsgrove, in which a child narrowly escapes serious injury, thanks to the quick reactions of a driver – a fireman, Robert Allen.

I wasn't sure whether to post this but if it stops a child from being killed on the road it's worth it! Today a child rode out in front of me, across the dual track, without looking! Thankful I was driving under the speed limit & reacted quickly! #neverchanceit @BromStandard pic.twitter.com/NuDgFqcdDj

— Robert Allen (@HWfireRAllen) August 5, 2018

I wasn’t the only one to notice that the way this incident was framed – both on social media, and in the media more generally – focussed entirely on human actions. On the one hand, the quick thinking, forward planning and skill of the driver, and on the other, the mistakes and foolishness of the children.

Framed in this context, the only way to prevent near misses (or even serious injuries and fatalities from occurring in future) is to ensure that all drivers are as quick-thinking and careful as this one, and also to ensure that children don’t behave impulsively, and don’t make mistakes and misjudgements.

But unfortunately both of those things are actually very difficult, if not impossible, to achieve. Children, especially younger children, have serious problems judging the speeds of approaching vehicles, due to their difficulty in perceiving visual looming (HT AndyR_UK). On top of that they will inevitably be impulsive, fail to concentrate, or become distracted. Equally, drivers won’t be paying attention 100% of the time. They will also get distracted. They are fallible. They will not all be as cautious and as quick to react as Robert Allen. Because they are human beings, not robots.

So the only realistic way of preventing these kinds of incidents from happening in the future is to design the danger out of the crossing. We can’t rely on human beings not to do stupid things, or not to make mistakes, because that’s not who we are. The only rational response is to minimise the chances of collisions occurring in the first place, and to minimise the severity of those collisions if they do happen. The alternative – attempting to get children to behave properly in the context of this type of crossing – is nothing more than applying the flimsiest of sticking plasters to a gaping wound.

If we look at the location, it’s a little bit ambiguous whether the posted speed limit is 60mph or 70mph, because the crossing is at exactly the point where two lanes (60mph limit) become a four lane dual carriageway (70mph limit).

But whether it’s 60mph or 70mph doesn’t really matter – as Ranty Highwayman observes, either way, these are still very high speeds for children to be processing, especially where drivers will be distracted by the process of merging back down to one lane in the oncoming direction, or focused on accelerating up to 70mph as they move from one lane into two in the facing direction.

On top of that, we have the pedestrian barriers – presumably installed with the intention of stopping people from cycling straight out into the road – acting to steer anyone cycling up to the crossing into a position parallel to the road, where any oncoming motor traffic will be directly behind them. 

Rather than naturally facing that oncoming traffic, children (or anyone else cycling here) will have to look right back over their shoulder to process it. Frankly the entire layout is a recipe for casualties, which the ‘Sign Make It Better’ warning does nothing to fix (not least because it’s only about 50 feet from the actual crossing – not a great deal of help when it comes to alerting drivers to the potential danger).
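To put that 50 feet in perspective, here’s a back-of-the-envelope calculation – nothing but standard unit conversions, not figures from any official source – showing how little time a sign at that distance actually buys a driver travelling at these speeds.

```python
# How much warning does a sign 50 feet before the crossing actually give?
FEET_TO_M = 0.3048
MPH_TO_MS = 0.44704

sign_distance_m = 50 * FEET_TO_M  # ~15.2 m from sign to crossing

for mph in (60, 70):
    speed_ms = mph * MPH_TO_MS
    seconds = sign_distance_m / speed_ms
    print(f"{mph} mph: {seconds:.2f} s from sign to crossing")
```

Half a second or so, either way – comfortably less than typical driver reaction times – so by the time the sign could prompt any response at all, the vehicle is already at the crossing.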

I’m not sure when this road was built, or when it was thought that this was an appropriate type of crossing for a road of this nature – but it’s far from unique.

Here's another A38 ped crossing just outside Lichfield The gap in the barrier is where people are supposed to cross pic.twitter.com/3l9jtAt2sK

— Andy (@Bluecrossbar) August 7, 2018

There are several similarly lethal crossings of 70mph dual carriageways in West Sussex, usually the result of existing routes or lanes being severed by the construction of new roads and bypasses, with absolutely no consideration given to the safe passage of people walking and cycling across them. I can think of at least three on Horsham’s northern bypass, which was built in the late 1980s. Below is just one of them.

Location

There’s housing behind the trees on the left hand side of this location, and a railway station a couple of minutes’ walk down the lane to the right. It’s not only the danger that is infuriating – it’s the fact that people could be walking and cycling, easily, to and from these locations, but have these horrendous barriers put in their way. The road simply shouldn’t have been built like this – it should have had underpasses integrated into it during construction, to allow people to cross it freely, and in safety.

The Bromsgrove example is perhaps even more pressing, however, as the road is a clearer example of severance – with housing on both sides of the road. If the A38 were a Highways England road, then under the IAN 195/16 standard (which I’ve covered here) a grade-separated crossing would be a mandatory requirement for a 60mph limit. If that’s not possible, then the speed limit should be lowered, the motor traffic lanes should be narrowed significantly, and the crossings should involve clear sightlines, with only one lane crossed at a time. Something like this, which I saw on a distributor road in the city of Zwolle.

There are no signals; this is just a simple priority crossing, with cycles having to give way to motor traffic. However, only one lane has to be crossed at a time, motor traffic speeds are much lower, and the visibility is excellent, for all parties. This really isn’t rocket science.

If the council wish to retain a 60mph limit, or four lanes of motor traffic, then that obviously means that human beings should be separated entirely from the road, to insulate them from the increased danger that attempts to cross such a road at-grade would involve. An underpass is the obvious answer in that traffic context.

That would plainly be an expensive undertaking, but really there’s no other safe way of addressing the severance posed by a multiple lane road with such a high speed limit.

Obviously I don’t know the local situation, but I suspect it may be much more appropriate to ‘downgrade’ the road to an urban distributor, with single lanes in each direction, separated by a median, and with a much lower speed limit. That would allow the ‘Zwolle’ type of crossing to be employed, and safely. But there are many other places where that is impossible, or at least undesirable. The Horsham example is one location where the traffic volumes and road context – an explicit bypass – should really necessitate grade separation.

To a large extent, we’re reaping the harvest of decades of road-building and planning with little or no thought for the safety and convenience of anyone who wasn’t in a motor vehicle – people cycling and walking along these roads, or attempting to cross them. It’s going to be difficult and costly to undo that damage. Perhaps it’s not surprising, therefore, that we all reach for the superficially easy option of attempting to change human behaviour, rather than changing the system, when we’re confronted with incidents like the one in Bromsgrove.

There’s recently been some silly-season noise in our newspaper of record, The Times, about making the use of bells compulsory.

Frothing gibberish on Page 3 of The Times today. Remember when this paper took cycling safety seriously? pic.twitter.com/iemsfa2seJ

— Mark Treasure (@AsEasyAsRiding) July 14, 2018

This story seems to have been based entirely on five written questions from the MP Julian Lewis.

Re @thetimes article about bicycle bells, it is based on 5 parliamentary questions posed by Julian Lewis (MP for New Forest, Conservative) to @transportgovuk .
All 5 receive 1 answer from Transport Minister @Jesse_Norman https://t.co/ijCdWJMTIL pic.twitter.com/Qap6brq38F

— always last (@lastnotlost) July 14, 2018

The non-committal response from the Minister has been spun into a story; the Minister in turn dismissed it.

But let’s (charitably) take this seriously, just for a moment. What does ‘mandatory use of bells’ actually mean? Am I supposed to ding every time a pedestrian heaves into view? In a town or a city, my bell would be ringing relentlessly. Let’s also bear in mind that plenty of people will object, often quite aggressively, to the ringing of a bell, interpreting it as akin to the honking of a car horn. A basic starting principle – before any of this nonsense ever gets anywhere near legislation – would have to involve getting some basic agreement and consensus about what people actually want and expect, when it comes to a form of audible cycling warning (or even whether they want people making a noise at all). If you can’t get the general public to agree, which I would imagine is more than likely, then there’s no point even embarking on legislation in the first place.

In any case, the general issue of bells, warnings and ‘silent rogue cyclists’ is symptomatic of basic design failure. I’ve probably cycled at least 500 miles in the Netherlands over the last five or six years. Not a huge amount, but enough to get a good flavour of the country. In all that distance – in cities, in towns, through villages, across the countryside – I can’t honestly remember ever having to ring my bell to warn someone walking that I was approaching. Not once.

A large part of that is probably down to the fact that people walking in the Netherlands are – understandably – fully aware that they will encounter someone cycling quite frequently. In general, it’s unwise to assume that, just because you can’t hear anything approaching, nothing is approaching – and this is especially true in the Netherlands. Being aware of cycling is just an ordinary part of day-to-day life, because everyone cycles themselves, and because they will also encounter cycling extremely frequently.

However, I suspect my lack of bell use is also due to the fact I rarely ever come into conflict with pedestrians, because of the way cycling is designed for in the Netherlands. Unlike in Britain, where walking and cycling are all too frequently bodged together on the same inadequate paths, cycling is treated as a serious mode of transport, with its own space, distinct from walking.

No need for bells here, to warn people you are approaching

I don’t need to ring my bell to tell someone walking I am coming up behind them because we’re not having to share the same (inadequate) space. There are of course many situations in the Netherlands where walking and cycling are not given separate space – a typical example below.

However, these will almost always be situations where the numbers of people cycling, and of people walking in particular, will be relatively low. In practice, these paths function as miniature roads, marked with centre lines, and used by low amounts of low-impact traffic. Pedestrians treat them as such, walking at the sides, and the dynamics of path use are obvious and well-understood. If demand for these paths increases, such that people walking and cycling begin to get in each other’s way, separation of the two modes becomes a necessity. It is all blissfully rational.

Contrast that with Britain, where ‘cycle routes’ will often be nothing more than putting up blue signs to allow cycling on existing – often quite busy – footways.

It isn’t hard to see why people walking will not be expecting cycling in these kinds of environments. It looks like a footway; it feels like a footway. It is a footway. So users who are cycling then have to decide how best to approach someone from behind.

  • Do they ring their bell?
  • Do they try to make a noise with their bike?
  • Do they call out? And if so, what do they say?
  • Do they try to glide past without any noise at all?

Bear in mind that there is absolutely no consensus on which of these techniques is preferable to people walking. Some people hate bells, because they think it implies they are being told to get out of the way. Some people don’t like noises, or calls, and apparently prefer the clarity of a bell, and what it signifies. Some other people might be deaf.

As you cycle up behind someone, there is obviously no way of knowing how that particular person will react, and what they will prefer. It is entirely guesswork.

My own technique is usually to approach, slow down a bit, and hope that the person gradually becomes aware of my presence. If they don’t, then I usually say ‘excuse me’. My bell is reserved for occasions when someone is stepping into the road without looking, or similar situations where I can foresee a potential collision occurring.

A short snippet below, on a path I use on a daily basis.

I can see that the two girls are aware that I am approaching, but I slow down in any case, until I am sure. The woman is not aware, so again I slow down, and have to use a verbal ‘excuse me’ to let her know I am there.

This probably isn’t perfect. Maybe there is a better way. But really, I don’t think there even is a ‘perfect’ way of dealing with these kinds of minor conflicts. They are all flawed. You are going to startle someone; you are going to do the wrong thing without even realising it; you are going to annoy someone. It’s unavoidable.

But the solution to this problem is not MOAR BELLS or MANDATORY USE OF MOAR BELL. The basic issue here is crap cycling and walking environments. Every single location where people are being expected to use bells (or some other form of audible warning) will be one where cycling is not expected; where someone cycling is having to share the same space as someone walking; where there is not enough width for the two modes to peacefully co-exist. Bells are not the solution to this problem. Better design is.

The path in my video above is only a couple of metres wide, and has to accommodate cycling and walking in the same space. That’s just a straightforward recipe for conflict. If you think the answer to that conflict is bell legislation then you don’t care about cycling, and frankly you don’t really care about walking either. I don’t want to be cycling at little more than walking pace, having to ring my bell every few metres. I doubt people walking want to be having to deal with that either. I certainly don’t when I am on foot.

Let’s stop dribbling on about bells and instead ensure that our walking and cycling environments work for both modes, with clarity about where people should be and about expected behaviour, and with comfortable space for everyone.

01-Mar-20
HotWhopper [ 1-Mar-20 6:51am ]
Every three or four years Anthony Watts (who owns a conspiracy blog called Watts Up With That, or WUWT) claims steam pipes in vast empty spaces in remote and largely unpopulated areas of Russia are what's causing global warming. This year he's at it again.

I don't need to write much about this; you can see it in pictures. In fact, Anthony himself put up a photo of steam pipes in a small town called Oymyakon (population ~500), one of the coldest permanently inhabited places on earth.

The map below shows where the tiny settlement of Oymyakon is located on the GISTEMP January temperature map. It's not in the middle of where the highest temperature anomalies were recorded.



Figure 1 | Temperature anomalies for January 2020 from the 1951-1980 mean, showing the location of Oymyakon and the steampipes that conspiracy nutters blame for global warming. Data source: NASA GISTEMP


In fact, most of the huge hot area over Russia and parts of Europe has very low population density, as you can see by comparing the two maps below. (I've lined up the maps, but it's a bit rough.)


Figure 2 | Maps showing mean surface temperature anomalies for January 2020 from the 1951-1980 mean; and population density. Data sources: GISS NASA and SEDAC, NASA

Anthony Watts decided there's a Russian conspiracy at NOAA, and wrote:
In a report generating substantial media attention this month, the National Oceanic and Atmospheric Administration (NOAA) claimed January 2020 was the hottest January on record. In reality, the claim relies on substantial speculation, dubious reporting methods, and a large, very suspicious, extremely warm reported heat patch covering most of Russia.




How (and why) does Russia keep moving its steampipes?
Anthony doesn't explain how or why Russia moves its steampipes around the world each January - from sparsely populated regions of Russia to North America and further, then back again. I'll let you try to figure out for yourself how and why they do this.


In January 2019 Russia turned its steampipes down a bit and shifted the Russian steampipes they'd installed in the USA and Canada a bit west. (Compare January 2020 with January 2019 in the maps below.)


Figure 3 | Maps showing mean surface temperature anomalies for January 2020 and January 2019 from the 1951-1980 mean. Data source: GISS NASA

In 2018, Russia turned off quite a few of its steampipes, leaving residents to suffer the freezing cold. There were quite a few more Russian steampipes in North America in 2018.


Figure 4 | Maps showing mean surface temperature anomalies for January 2020 and January 2018 from the 1951-1980 mean. Data source: GISS NASA

In 2017 Russia had cut down its steampipe operations, but it didn't turn them off altogether. That was probably so it could vastly expand its steampipes in North America.


Figure 5 | Maps showing mean surface temperature anomalies for January 2020 and January 2017 from the 1951-1980 mean. Data source: GISS NASA

In 2016, when there was a big El Niño and the temperature anomaly was just a smidgen below that of this January (with no El Niño), Russia got rid of most of its steampipes, moving them from Russia to North America.


Figure 6 | Maps showing mean surface temperature anomalies for January 2020 and January 2016 from the 1951-1980 mean. Data source: GISS NASA

The WUWT waste heat conspiracy
Anthony finished his article with this, and no, he didn't tag it as satire:

It appears that the "warmest ever" January might simply have been influenced by Russian temperature data warmed up by waste heat. Maybe the U.S. House of Representatives will start an inquiry into Russian collusion to interfere with global temperature data and climate change legislation - but don't hold your breath.
That reminds me of the "waste heat from the little warm pockets of humanity" he found in a tent in a remote, isolated part of Antarctica.


29-Feb-20
As Easy As Riding A Bike [ 29-Feb-20 4:49pm ]
The power of e-bikes [ 29-Feb-20 4:49pm ]

On a sunny September day last year, I headed out from the city centre of Utrecht to take a look at the town of Bilthoven, about five miles away. Despite being a fairly small settlement (Bilthoven itself only has a population of around 20,000 people, although it closely adjoins the larger town of De Bilt), the area around the station has been extensively redeveloped, with the road diverted, and a new underpass built that only allows walking and cycling.

This is the railway station for a town of just over 20,000 people. pic.twitter.com/g1P0lG9vEi

— Mark Treasure (@AsEasyAsRiding) September 19, 2019

I had arranged to meet my partner back in Utrecht city centre at midday (she wasn’t quite as interested as me in pedalling around ten miles to go and look at some cycle paths and new development!), and I was running a bit late, stopping off to take videos of roundabouts and cycle paths on the way.

Cycle path running parallel to main road, on the approach to Bilthoven.
The path is actually built in the woods, some distance from the road. It has its own street lighting. It then crosses a bridge, before skirting a roundabout (with priority). Just brilliant. pic.twitter.com/tlOPg244DU

— Mark Treasure (@AsEasyAsRiding) September 20, 2019

Fortunately, on my route back, I stopped at a red light in De Bilt behind a lady on an e-bike, with distinctive bright green panniers.

I say ‘fortunately’ because over my years cycling in the Netherlands, I’ve found people on e-bikes are a real bonus when you are cycling along on a human-powered bike. You can tuck into their slipstream and roll along at a fairly steady 15mph, a speed which would take some effort to sustain on a heavy utility bike if you are cycling on your own. The couple in the photograph below, pedalling along effortlessly on their e-bikes, were an absolute godsend on a baking hot day when I was struggling towards Delft along a dead straight and seemingly unending road.

There’s no need to feel guilty about drafting either, because the person on the e-bike is expending very little effort while acting as your personal windbreak.

On my way back to Utrecht, it turned out that I managed to get a very pleasant tow all the way back to the city centre. Here we are, just leaving De Bilt, on the quiet road that runs through the town centre (the main through-road is behind the hedge to the left).

Then along the service road that runs parallel to the main road. (Note the induction loops built into the asphalt, that almost always ensure you have a green signal as you approach these minor junctions).

And then under the famous ‘Bear pit’ junction on the outskirts of the city centre. Now on Biltstraat, heading into the centre of Utrecht, on a protected cycleway. The road here is only one-way for private motor traffic, with a two-way bus lane off to the left.

Soon after this, the lady gets away from me a bit because… she jumped a red light – a pretty minor violation given the traffic context, but I didn’t want to risk it myself. She didn’t gain much advantage from her light jump, however, because soon enough I caught her up again, right into the city centre, along the busiest cycle path in the Netherlands (if not in the world) –

Then through the large underpass that takes you straight under the sixteen platforms of Utrecht railway station –

Before our journeys separate, and she peels off to join the cycle parking at the station itself – presumably to catch a train.

We travelled 6km together, nearly four miles, and according to my GPS track we did it in a little over 14 minutes, right from the eastern side of De Bilt to the railway station in the city centre of Utrecht.

Perhaps unsurprisingly, this works out at an average speed of a fraction over fifteen miles an hour, or 25kph, the legal limit for e-bikes. We only had to stop twice – at the red light I stopped for, and she didn’t, and then to cross Vredenburg.

Given the point at which we met, she had almost certainly come from even further away, either from Bilthoven, where I started, or from Zeist. These are both around five miles away, but at e-bike speeds, only twenty minutes or so from the centre of Utrecht. Indeed, 5 miles can be covered in exactly 20 minutes on an e-bike, which means that a huge area is within a negligible cycling time of both the city centre and the train station – pretty much the whole Utrecht agglomeration (well over half a million people), as well as other outlying towns and villages.
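The arithmetic here is easy to check. A quick sketch below – 14.3 minutes is my reading of "a little over 14 minutes", and the mile/kilometre conversion factor is the standard one; nothing else is assumed:

```python
KM_PER_MILE = 1.609

# The GPS track: 6 km in "a little over 14 minutes" (14.3 assumed here).
speed_kph = 6.0 / (14.3 / 60)               # average speed in km/h
speed_mph = speed_kph / KM_PER_MILE

print(f"{speed_kph:.1f} km/h = {speed_mph:.1f} mph")  # ~25.2 km/h, ~15.6 mph

# A 5-mile trip at the 25 km/h e-bike assistance limit:
trip_minutes = 5 * KM_PER_MILE / 25 * 60
print(f"{trip_minutes:.1f} minutes")                  # ~19.3 minutes
```

So a journey ridden almost entirely at the e-bike assistance limit, with only two stops – and anywhere within five miles sits inside a roughly twenty-minute ride.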

A five mile radius circle, centred on Utrecht train station. At e-bike speeds, it would take just twenty minutes to cover this distance.

That’s pretty remarkable when you think about it – that all these people are (potentially) less than twenty minutes away from the centre of the city, and able to get there in comfort and safety, with minimal inconvenience, just by pedalling, and without any need for a car. When you consider that these trips can then be made in combination with the Netherlands’ excellent rail network, vast swathes of the country are quickly within reach to people even in apparently ‘remote’ areas. Hypothetically, the lady I was following could have come from a remote village miles away from Utrecht, caught a train to Rotterdam, and have gone from door-to-door in less than an hour.

Naturally, these benefits accrue to people on ‘ordinary’ bikes too – people like me – although 5 miles is more likely to take 30 minutes or more to cover under our own steam. That’s an entirely manageable distance, but e-bikes will obviously reduce the effort required. They have the potential to seriously start eating into those longer car trips that are not quite so appealing by bike.

In the UK we can only dream of having the kind of comprehensive, high-quality cycling networks that surround places like Utrecht, and that enable the types of journeys described here. Where I live, we have absolutely no meaningful cycle network to speak of, and a negligible rail network. Even our former railway lines – which could serve as excellent, flat connections between towns and villages, either by bike or e-bike – have been allowed to disintegrate into bumpy, muddy bogs that are unattractive and unpleasant to use.

A section of the former Horsham-Shoreham railway line, north of Partridge Green

Meanwhile our roads are increasingly clogged with dangerous, polluting and environmentally damaging motor vehicles, in the vast majority of cases making short trips that could be converted to other modes.

There is no reason why we can’t match the Netherlands. E-bikes erode the argument that it is too hilly, or too difficult, to cycle, and in combination with high quality cycle networks we could have people effortlessly travelling the kinds of distances described in this post.

22-Feb-20

As with many British towns in the wake of the 1963 Traffic in Towns report, Horsham responded to the coming age of the motor car with a mixture of enlightenment and destructiveness. In doing so, it largely reflected the nature of the Report itself, which presciently diagnosed the enormous problems mass motoring would present, but offered damaging remedies that essentially accommodated ever-expanding demand for driving right in the heart of our towns, alongside a more benign banishment of it from limited areas within them.

In Horsham, that destructiveness involved the construction, in several stages, of a four-lane inner ring road that now encircles most of the town centre, and the construction of several large multi-storey car parks to accommodate increasing numbers of private cars.

The red line indicates the approximate route of the four lane inner ring road, over the previous street pattern. Original map here.

Although that ring road was (and remains) a blight on the town, the area within it has fared rather better, with a fairly deliberate policy of either complete removal of motor traffic, or minimising its levels. Through-traffic is discouraged by means of a 20mph zone (one of the first in the country) combined with a winding, circuitous route through the town centre, while many other streets have been either fully pedestrianised, or part-pedestrianised.

While these changes within the ring road are largely to be applauded, the enlightened planners and councillors who implemented them sadly neglected to consider cycling in any way, shape or form. One of the biggest issues is that the one-way flow through the centre, while successful at keeping motor traffic on the inner ring road in an east-to-west and north-to-south direction, also completely excludes cycling. I’ve previously written about this specific issue here.

Another longstanding problem for cycling lies to the western edge of the town centre. Here the former main north-south road across the town (shown in green and blue in the overhead view below) has been bypassed to the west by the four lane inner ring road (in red), leaving short sections of road with a pedestrianised area in the middle (highlighted in green), that still allows cycling in a north-south direction, but in a very half-hearted and ambiguous way. In other words, it’s not at all clear that it’s legal to cycle there.

This is actually a fairly important area for cycle journeys, because as well as potentially allowing you to cycle in a north-south direction avoiding the unpleasant, fast and busy four lane inner ring road (which naturally makes no concessions to cycling at all), it should also allow journeys in an east-west direction – particularly, people coming from the north and the west to enter the town centre. All these potential routes are shown on the overhead view below.

The red lines indicate entry and exit points for cycling. To the left is the large inner ring road.

The real difficulty lies at the southern end, where a new bus station was built around twenty years ago. It lies in the middle of the red ring, above. The building itself is attractive, but once again there was absolutely no consideration of cycling when it was planned (are you sensing a pattern here?).

The area where buses arrive and depart is buses-only – so the area ringed in green, below, is a no-go area for cycling.

That means all the movements through this area have to pass through the gap between this green area and the building on the corner, which is at present a pedestrian crossing, connecting the pedestrianised area with the bus station. This is a very awkward fit for cycling.

The video below shows me cycling along the line of the red arrow. This is at a particularly quiet time of day, early in the morning, so it is free of the potential conflict with people walking to and from the bus station.

It’s not even clear to me how legal this is. I take the option of crossing into the bus station and then moving across the solid stop line (the lights will only change for buses, so jumping the lights is unavoidable). The alternative is to cycle onto the pedestrian crossing, but that doesn’t seem particularly appealing either.

Short of rebuilding the bus station and starting all over again from scratch, to my mind there are no obvious fixes here to formalise cycling through this area. Perhaps a short-term bodge is simply to convert the pedestrian crossing into a toucan that is at least legal to cycle onto, but then you are left with the inelegant solution of cycling off it to join the road where the heads of the red arrows are located. Furthermore, this toucan crossing would not help with cycling in the opposite direction, where people have to cycle (the wrong way!) into the bus station entrance from a signalised road junction, and then somehow ‘merge’ onto a toucan crossing which may well have people walking on it.

To demonstrate, here is another video of me on this desire line, cycling from the east, then heading north, along the line of the upper red arrow. Currently I take the approach of cycling onto the footway before the red light, to avoid conflicts with the pedestrian crossing. Although cycling in the pedestrianised area is legal, it probably isn’t on this bit of footway. But I’m not sure what else to do.

For pure north-south cycling journeys, the most obvious option is some kind of route running down the western edge of the bus station. There is a new-ish hedge that could potentially be sacrificed, and some parking bays that are occasionally used by service vehicles from the bus companies.

Here is a family walking south down the footway along the western edge of the bus station, with the hedge and the parking bay to their left.

This would solve these purely north-south journeys. However, it wouldn’t do anything to address most of the journeys across the area, which will involve some east-west component, and therefore the difficulties shown in my videos.

Indeed, the junctions around the bus station are an almost perfect case-study in how people cycling are turned into lawbreakers (or at least flexible rule-benders) because nobody has given any thought to how people would actually cycle through the area.

From the east, the ‘least worst’ option is to cycle on a short bit of footway (which may or may not be legal), and from the north the ‘least worst’ option is either to cycle onto a pedestrian crossing, or to cycle through a red light designed only for buses.

It’s a mess. And without a total redevelopment of the area, I’m not sure how it can be substantially improved. But any thoughts on how it might be done would be welcome! This area is important, as it is right in the town centre, and making it possible to cycle across it in at least a legal manner is a problem that needs to be solved.

19-Feb-20
HotWhopper [ 19-Feb-20 4:25am ]
An interesting if ominous paper was recently published in Nature Climate Change. It came out just before Christmas, at the height of the holiday season here in Australia while fires were raging. For some weeks I've been meaning to write about it. That moment has finally arrived.

The authors of the Nature Climate Change paper, Andrew D. King, Todd P. Lane, Benjamin J. Henley and Josephine R. Brown (from The University of Melbourne) tell us that it's up to us to a large degree (excuse the word play). We know that already, and we also know that recent history and current weather-related events in Australia, the UK, Africa and elsewhere demonstrate we've not yet been willing to take enough action.

However the authors weren't writing about our reluctance to do enough to save ourselves. They were in effect exploring what will happen if we can slow down global warming compared to if we let it continue to warm as quickly as it is. It probably won't surprise HotWhopper readers that the rate of warming makes quite a difference.

You may have seen the excellent series of articles in the Washington Post late last year: 2°C: Beyond the Limit: Dangerous new hot zones are spreading around the world. These articles were describing the impact of global warming in different parts of the world, where warming has already exceeded 2°C. On average, the world has warmed maybe a bit more than 1.1 °C above pre-industrial temperatures; however, some places are warming faster than others. This includes some ocean areas as well as land areas. Most of us live on land, so what happens on land is of particular interest.

As you may know, when the world is heating up, the land heats up faster than the ocean. This is because very large bodies of water have to absorb a huge amount of energy (heat) for their temperature to rise much, whereas it doesn't take much energy to warm up the land surface. Specific heat capacity is a measure of a substance's capacity to absorb energy relative to how hot it gets. It is defined as the amount of heat needed to raise the temperature of 1 gram of a substance by 1 degree Celsius (°C). Water needs a lot of heat to raise its temperature. Land doesn't need nearly as much.
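As a back-of-the-envelope illustration (the specific heat values here are standard textbook figures, not from the paper: liquid water is about 4.18 J/g°C, dry rock or soil roughly 0.8 J/g°C):

```python
# Specific heat capacity in joules per gram per degree Celsius.
c_water = 4.18   # liquid water (textbook value)
c_rock  = 0.80   # typical dry rock/soil (assumed representative value)

mass_g = 1000.0  # compare 1 kg of each

def heat_needed(c, mass_g, delta_t):
    """Energy in joules to raise mass_g grams by delta_t degrees C."""
    return c * mass_g * delta_t

q_water = heat_needed(c_water, mass_g, 1.0)  # 4180 J to warm water by 1 C
q_rock  = heat_needed(c_rock, mass_g, 1.0)   #  800 J to warm rock by 1 C

# The same energy input warms rock roughly five times as much as water:
print(q_water / q_rock)
```

Gram for gram, the same dose of heat pushes land temperature up several times faster than water temperature, which is one reason the land runs ahead of the ocean while the planet is warming.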


The faster it warms the hotter we get
The authors of King19 decided to look at the different effects around the world when the climate is warming compared to when it's more or less stable. It's clear that the rate of warming makes a big difference. Their results indicate that as it warms it gets hotter on land compared to how hot it would be on land at the same global average temperature when the climate is in equilibrium. In other words, the land gets hotter the faster it warms. (I'm putting words into the authors' mouths here. It's fair IMO.)

A difference between 1.5 °C warmer at equilibrium and 2 °C warmer at equilibrium would certainly be noticed. However, the difference between how hot it gets during the transition and how hot it would be at equilibrium is something we'd notice a whole lot more.

As described in the abstract:
...more than 90% of the world's population experiencing a warmer local climate under transient global warming than equilibrium global warming. Relative to differences between the 1.5 °C and 2 °C global warming limits, the differences between transient and quasi-equilibrium states are substantial. For many land regions, the probability of very warm seasons is at least two times greater in a transient climate than in a quasi-equilibrium equivalent. In developing regions, there are sizable differences between transient and quasi-equilibrium climates that underline the importance of explicitly framing projections.

Transient vs quasi-equilibrium
The transient state refers to the state while change is happening, while the world is getting hotter. The quasi-equilibrium state is, as you can probably work out, a state where the climate is unchanging, is more or less steady.

Say the world is on average 2 °C hotter than in pre-industrial times. Now if it's come from 1 °C hotter (like now) and just hit 2 °C hotter on its way to 3 °C hotter, it's in a transient state. If it's been sitting at around 2 °C above and not varying much over time (no major forcings, just some internal variability) it's in a state of equilibrium. [The term "quasi-equilibrium" is used because that state is based on CMIP5 models for 2300 (mid-range greenhouse gas emissions - ECP4.5). The climate of the 2300s is not in full equilibrium, but getting close, hence "quasi-equilibrium".]

The land gets hotter faster until we reach equilibrium, then it cools a bit
What does this all mean? It means much of where we live will get a lot hotter until we stop warming the planet, then the land will cool down a bit as it reaches equilibrium with the ocean. That is, even while global surface temperature is steady, after warming stops, the hotter areas of land will cool a bit and the cooler parts of the oceans will keep warming a bit until equilibrium is reached.

The lead author, Andrew King, put it simply in an email, saying:
"For a given level of global warming, in a transient climate most land areas are warmer and experience more heatwaves than in an equivalent equilibrium climate with the same global temperature. So, if we were to hold the global temperature constant then most land areas would cool over time." 

Land vs oceans 
Consider what happens at 1.5 °C and 2 °C while the world is heating up, compared to the situation if the global surface temperature is more or less steady. While the world is heating up, land warms quickly and is out of sync with the oceans. In the transient 1.5 °C world, the continental land regions are warmer than they would be in a quasi-equilibrium 2 °C world. It's different for the slow-warming oceans: in ocean areas, the transient 2 °C world is cooler than the quasi-equilibrium 1.5 °C world.

The impact is illustrated below. The maps show the difference in temperature between transient warming and how warm it would be in a stable climate. Some of the oceans are cooler while most of the land is warmer in the transient climate compared to a climate at equilibrium.


Figure 1 | Transient minus quasi-equilibrium difference. In a transient climate, where the world is rapidly warming, land areas are warmer and ocean areas cooler than in a stabilised climate. These two figures show the pattern of temperature difference between a transient scenario relative to an equilibrium climate for both the June-August period (l) and December-February period (r). Data source: Article by Dr Andrew King in Pursuit (The University of Melbourne)

More heat waves and hot seasons as the world warms
The research also indicates that there'll be a lot more severe heat events while the world is heating up than there will be after some sort of steady state temperature is reached. Where a hot season may be a one-in-ten year event in a quasi-equilibrium climate, in a transient climate this could be a one-in-five year event.

From the paper (my para break):
This is particularly true in boreal summer where, for almost all land regions of the world, the likelihood of a hot season is significantly higher in a transient climate compared with the equivalent quasi-equilibrium state. Regions with at least a doubling in the probability of hot summers in a transient climate compared with a quasi-equilibrium world of equivalent global warming encompass major cities including New York City, Istanbul, Baghdad, Seoul and Tokyo.

Although this is a global-scale analysis, one could infer that for these densely populated locations, the heat-related impacts of human-induced climate change would be lessened in a stabilized climate compared with a rapidly warming climate at the same level of global warming.

It's the coming generations who'll be hardest hit
Think what that means while global heating continues. If it keeps warming as it has, then the land will warm up faster, there will be more and worse heat waves, more severe fires, worse droughts etc. After the new climate approaches equilibrium, things should settle down a bit. The biggest upheaval will be during the transition. If society can survive that, generations far into the future will have a fighting chance.

This is an important piece of work because policy and planning people need to think about what's going to happen in their region over coming decades. If they merely focus on the likely scenario should the global temperature stabilise at 1.5 °C, 2 °C, 3 °C and hotter, they'll not be prepared for what happens along the way. Like the Australian Government this year, countries could be woefully unprepared for the changes to come during the transition to a new climate equilibrium.

The only comment I'll add is that the researchers discussed 1.5 °C and 2 °C. The world has not yet taken sufficient action to limit warming to 2 °C. We are still heading for 3 °C and hotter over the coming decades. It's time to take action and slow things down, to limit the number and severity of disasters worse than the ones we're now experiencing.


References and further reading
King, Andrew D., Todd P. Lane, Benjamin J. Henley, and Josephine R. Brown. "Global and regional impacts differ between transient and equilibrium warmer worlds." Nature Climate Change 10, no. 1 (2020): 42-47. https://doi.org/10.1038/s41558-019-0658-7

2°C: BEYOND THE LIMIT: Dangerous new hot zones are spreading around the world - a series of articles at The Washington Post, September to December 2019

17-Feb-20
The Early Days of a Better Nation [ 17-Feb-20 11:56am ]
The Sage of Freuchie [ 17-Feb-20 11:56am ]
Tom Nairn: 'Painting Nationalism Red'?
Neal Ascherson
Democratic Left Scotland, n.d. (2018)


This is an odd pamphlet which is well worth getting. Some day it'll be a collector's item. It's well-produced on glossy paper, with a striking cover and, inside, a fine reproduction of the portrait whose gift and sitter the pamphlet celebrates. In these pages three big names meet: the author Neal Ascherson, the subject Tom Nairn, and the painter, Sandy Moffat.

It has already been reviewed, briefly and enthusiastically by Davie Laing, and lengthily and discursively by Rory Scothorne. There's no need for me to review it here, inevitable quibbles though I may have - I can only recommend it, as a small piece of history, and a useful summary of an argument that is still influencing that history.

The title, apt as the pun on 'painting' no doubt was for the occasion, does less than justice to the content: a concise intellectual biography of Nairn by the journalist who did a great deal to make his ideas part of common sense. Ascherson saw Scotland in an international context provided by his own wide-ranging life; Nairn's intellectual formation was likewise cosmopolitan; and for both Scotland was key to dismantling the 'archaic' structures of the British state.


The pamphlet can be obtained by sending a cheque for £4 (10% discount for orders of 10 or more) to:

Democratic Left Scotland,
9 MacAulay Street, Dundee DD3 6JT

If the archaic structures of 'cheque' and 'post' are too constraining, you can always enquire of the publisher by telephony and the interwebs:

Telephone 07826 488492
Email stuartfairweather [at] ymail [dot] com
13-Feb-20
HotWhopper [ 13-Feb-20 8:27pm ]
It's been brought to my attention that there's another set of projections, or rather guesses, about global surface temperature floating about, this time from Judith Curry.

I don't have time to go into her "arguments" in detail. Suffice to say she seems to be hanging on to the failed "stadium wave" theory and has maybe tossed in a few other ideas as well such as the Atlantic Multidecadal Oscillation flavoured with a smidgen of "it's the sun".

Judith has put up three options for the temperature change over the next 30 years: warmest +0.7C, moderate +0.11C and coldest -0.5C.

What I will do is what she hasn't (for reasons that seem obvious to me). I'll put up some charts showing her guesses. I can't tell from her post what she's used as a baseline, so I've taken it as the average global surface temperature for 2019. I've also made the assumption her predicted change relates to the last year of the prediction. That is, her prediction of 0.5 cooling is that in 2050 the average global surface temperature will be 0.5C colder than it was last year.

I've simplified the predictions by assuming a steady change from 2019 to the final temperature predicted, based on the above.
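The simplification above can be sketched in a few lines of code (the 2019 baseline anomaly used here is a placeholder value for illustration, not the actual GISTEMP figure, and the decade boundaries are my assumption):

```python
# A straight-line series from an assumed 2019 baseline to each of the
# predicted changes by 2050, then the decadal means used for the charts.
baseline_2019 = 0.98  # assumed 2019 anomaly in deg C, illustration only
predictions = {"warmest": 0.7, "moderate": 0.11, "coldest": -0.5}

years = list(range(2019, 2051))  # 2019 .. 2050

def linear_series(start, change):
    """Evenly spaced values from start to start + change over the years."""
    n = len(years)
    return [start + change * i / (n - 1) for i in range(n)]

def decadal_means(values):
    """Mean anomaly for the 2020s, 2030s and 2040s decadal bins."""
    out = {}
    for first in (2021, 2031, 2041):
        decade = [v for y, v in zip(years, values) if first <= y < first + 10]
        out[f"{first}-{first + 9}"] = sum(decade) / len(decade)
    return out

coldest = linear_series(baseline_2019, predictions["coldest"])
print(decadal_means(coldest))  # steadily decreasing decade means
```

Because each decade's value is an average over the straight line, even a "cooling" prediction produces a first decade that can sit above the previous actual decade's mean, which is the effect noted below the charts.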

The charts are below. Not up to my usual standard with captions and labels, as I've not got time for that. Each chart shows the actual mean global surface temperature to 2019 based on NASA GISTEMP and is in Celsius.

First, the annual temperatures, with Judith's predictions. This chart also shows the linear trend line from mid-1970s to 2019 i.e. from the most recent change in trend to the present (0.19 C per decade).

Next her "warmest" prediction, as a decadal chart:


The "moderate" prediction as a decadal chart:


And the "coldest" prediction as a decadal chart:


If you're wondering why the next decade in all of them is warmer than the previous actual, it's because of averaging over the decades and the fact I've assumed a steady change with Judith's predictions.

Feel free to add your two bobs worth in the comments.

12-Feb-20
The Early Days of a Better Nation [ 12-Feb-20 5:45pm ]
Writers of a Better Nation? [ 12-Feb-20 5:45pm ]
The Literary Politics of Scottish Devolution: Voice, Class, and Nation
Scott Hames
Edinburgh University Press, 2020

How well I remember Scotland in the 1980s! Scunnered by the failure of even a majority vote to establish a Scottish Assembly, snookered by the Cunningham Amendment, gubbed in the first round of the World Cup, gutted and filleted by Thatcherism, disillusioned by repeatedly voting Labour and getting Tory ... only the writers and artists remained standing, to produce a body of self-confident work that firmly established the nation on the global cultural map. Together with dedicated political and civic activists they in due course lifted its spirits to the heights of gaining its own Parliament. They accomplished a devolution - or independence -- of the mind and heart, well in advance of its political achievement.

I remember it like this, of course, because I wasn't there. I was in London, reading all about it in the columns of Neal Ascherson and the volumes of Tom Nairn. Now and again I'd browse a journal or pamphlet from the Scottish literary or political edge. Scotland from afar seemed to have a more democratic, more socialist and more egalitarian spirit than England - particularly the South-East of England - and this consciousness showed through in the culture it had inherited as much as in the culture it now produced. And since I moved back in the early 1990s, the same story has become received wisdom, not least among writers and artists.

According to Scott Hames's new book, the real story is a bit more complicated than that. So much more complicated, indeed, that it's hard to summarise. If he's missed a magazine, a literary feud, a Commission or a Report, it's not for want of looking. The discussion is sometimes dry, the narrative always engaging. The savagery of the spats he disinters from the archival peat-bog is eye-opening. Hames contrasts 'The Dream' of literary nationalism with 'The Grind' of political procedure (characterised in this case more by friction than motion). Two features stand out, all the more because in retrospect they're often overlooked. The first is how radical an aim devolution seemed, and how bright it shone in the literary imagination. The second is how conservative - how conserving -- a manoeuvre its implementation was, driven far more by the need of the British state and the Labour Party to 'manage national feeling' than by the SNP, whose votes were read as fever-chart symptom rather than political challenge.

In focusing on the cultural and the political, Hames avowedly and explicitly omits the economic and the social. This is fair enough in its own terms, but it's liable to leave the reader's inner vulgar Marxist - if they, like me, have one - sputtering. The oversight, if we can call it that, is overcompensated in the novel whose analysis gets a chapter to itself: James Robertson's And the Land Lay Still. It's the most ambitious Scottish realist novel for decades, grand in scale and scope and an immersive read. Ranging from the late 1940s to the early 21st Century, the novel interweaves family sagas and stories of personal individuation with political and parapolitical history to tell one overarching epic: the growth of national consciousness.

And therein lies one problem with it. It's as if at the back of every honest, decent Scot's mind is a relentless yammer of 'You Yes Yet?' Older generations are permitted to die in the old dispensation, shriven by their invincible ignorance, but those who live in the light of the new have no excuse. If they step off the path to nationhood they sink in the slough of self-loathing - as the two major pro-Union characters, an alcoholic police spy and a Tory MP undone by a secret fetish, in the end do.

Robertson conducts a large and varied cast through a long time and a complex plot with great skill to a most satisfactory click of closure. But, Hames argues, the difficulty of integrating the characters' lives with a political history that mostly consisted of tiny conventicles and ceilidhs in literally smoke-filled rooms and debates in widely unread periodicals, and that now and then took public form as 'set-piece' events in parliaments and streets, can defeat even the best novelist - even though Robertson was himself on those marches and in those rooms. It's a problem familiar in science fiction: one reviewer cited refers to Robertson's 'info-dumping', a term from the lexicon of SF criticism.

Hames's final chapters deal with Scots, the language, in relation to Scots, the people - and 'people' too is ambiguous, referring as it can to the nation as a whole or to 'the people' as opposed to the elite. Here Scots is an abrasion almost as raw as Gaelic, and more widely felt. At the risk of rubbing it, here's how it went. Centuries ago, Scots was an official language, known as Inglis. It was used at Court and in courts, in poetry and prose. For readers outwith Scotland, you wrote in Latin like any other literate European. After the Union of the Crowns and the Treaty of Union, the United Kingdom conquered a third of the world, and English replaced Latin as the de facto lingua franca. Scots was pushed out of administrative, then everyday upper-and-middle-class speech. Its several dialects became the language of the working poor of town and country. (Except in the Highlands, where the people were schooled and regimented straight from Gaelic into Standard English, which of course they spoke in their own distinct way.)

In the first Scottish literary renaissance, MacDiarmid and others sought to revive Scots as a national language, which they called Lallans or (because of the fusion of Scots dialects) 'Synthetic Scots'. This produced some great poems, but in polemic and reportage it can come across as affectation. Fights between the Lallans-scrievin old guard and the younger, more outward-facing literary intelligentsia flared in the 1960s and 1970s. But some new writers found another aspect of the language question, and one that far from being esoteric was central to everyday life, at least in the Central Belt. Modern vernacular Scots is different enough even from Scottish Standard English to separate the home and the school, the working class and the middle class. That difference could literally hurt, could smart and bruise, from the classroom tawse and the playground clout. At the same time, and very much as part of what Nairn excoriated as the conservative 'tartanry' of the proud Scot, the Scots language appeared in print as a quaint rustic dialect, in English spelling spattered with apostrophes, from Burns Night to the Broons patronised to within an inch of its life.

Now here I do remember personally, from the 1970s. Seeing for the first time urban West of Scotland demotic speech rendered phonetically in print, in Lament for a Lost Dinner Ticket by Margaret Hamilton, and 'Six Glasgow Poems' in Tom Leonard's Poems (1973), was a mental liberation. Almost as much, for me, as seeing for the first time Highland English dialogue accurately conveyed, in the children's novels of Allan Campbell McLean. The release came from not being patronised or mocked. Only that!

Not a lot to ask, you might think, but this modest request was seldom met with comprehension, let alone satisfaction. As an issue with which to elide class hurt with national grievance, it packed a wallop. But only, or mainly, on the individual level. In a country and at a time where upward social mobility is closely connected with further education and a change in language, the typical agonies of the intellectual of working-class origin growing away from their roots and the socialist of middle-class origin separated by accent and vocabulary from the class they most wish to speak to (or, problematically, for) are widespread enough to make these private pains a social force.

Scotland's peculiar development, however, has meant that Scots has very little chance of becoming the national language. Stranger things have happened, but... Naw. More likely, and well under way, is its official celebration as one of several languages spoken in Scotland. This provides gainful employment to some, and bewilderment to schoolchildren who speak what they think is English, but which they are now taught (in English) is another language, Scots.

As Hames suggests, this linguistic and social devolution within the devolved polity serves to defuse any class and national charge that spoken Scots still has, and offers its speakers symbolic representation in the place of - or at least, quite independently of - any actual power. The identity politics of a section of the working class is assimilated to the identity politics of the nation, to which its characteristic manner of speech is supposed to lend authentic voice. What this contributes to the material condition, let alone the social and political self-confidence, of the working class within Scotland is another matter entirely. All those years after Trainspotting, it's still shite being Scottish.

Like the devolution settlement as a whole, this uneasy arrangement leaves a lot of unfinished business. Looking back on the Scottish 1980s that I saw only from a safe distance, I have a wry suspicion that somebody was running a Gramscian strategy through those smoke-filled rooms. That self-effacing Modern Prince has yet to have their share of glory. Be that as it may, the smoke-free Scotland of 2020 cries out for an analysis of likewise Gramscian canniness. Scott Hames's book is avowedly not it, but points towards that, and beyond to an unknown 'utopian' future wherein we speak for ourselves.
08-Feb-20
Lauren Weinstein's Blog [ 8-Feb-20 6:08pm ]

For years — actually for decades — those of us in the Computer Science community who study election systems have with almost total unanimity warned against the rise of electronic voting, Internet voting, and more recently smartphone/app-based voting systems. I and my colleagues have written and spoken on this topic many times. Has anyone really been listening? Apparently very few!

We have pointed out repeatedly the fundamental problems that render high-tech election systems untrustworthy — much as “backdoors” to strong encryption systems are flawed at foundational levels.

Without a rigorous “paper trail” to back up electronic votes, knowing for sure when an election has been hacked is technically impossible. Even with a paper trail, getting authorities to use it can be enormously challenging. Hacking contests against proposed e-voting systems are generally of little value, since the most dangerous attackers won’t participate in those — they’ll wait for the real elections to do their undetectable damage!

Of course it doesn’t help when the underlying voting models are just this side of insane. Iowa’s caucuses have become a confused mess on every level. Caucuses throughout the U.S. should have been abandoned years ago. They disenfranchise large segments of the voting population who can’t spend hours, rather than a few minutes, casting their votes. Not only should the Democratic Party have eliminated caucuses, it should no longer permit tiny states whose demographics are wholly unrepresentative of the party — and of the country as a whole — to be so early in the primary process.

In the case of Iowa (and it would have been Nevada too, but they’ve reportedly abandoned plans to use the same flawed app) individual voters weren’t using their smartphones to vote, but caucus locations — almost 1700 of them in Iowa — were supposed to use the app (that melted down) to report their results. And of course the voice phone call system that was designated to be the reporting backup — the way these reports had traditionally been made — collapsed under the strain when the app-based system failed.

Some areas in the U.S. are already experimenting with letting larger and larger numbers of individual voters use their smartphones and apps to vote. It seems so obvious. So simple. They just can’t resist. And they’re driving their elections at 100 miles an hour right toward a massive brick wall.

Imagine — just imagine! — what the reactions would be during a national election if problems like Iowa’s occurred then on a much larger scale, especially given today’s toxic environment of conspiracy theories.

It would be a nuclear dumpster fire of unimaginable proportions. The election results would be tied up in courts for days, weeks, months — who knows?

We can’t take that kind of risk. Or if we do, we’re idiots and deserve the disaster that is likely to result.

Make your choice.

–Lauren–

03-Feb-20
Space War 2020 [ 03-Feb-20 4:22am ]

Not that I’m a fan of Trump, but the move to establish a US Space Force caught my attention.  There are two points of interest.  The lesser one is what apparently happened.  Of greater interest to me is how someone could use it in military science fiction, and what it might say about the future of space warfare.  And space cadets.  What apparently happened, and why the Democrats agreed to founding the USSF: here’s ye olde Wikipedia page on the USSF, and if you dig into the details it’s a bit less revolutionary or boondoggle-y than one might first guess.  There were two things going on for the last I-don’t-know-how-many-decades (six decades?).  The bigger fight was the perennial one in the US Department of Defense, over which branch of the military got to control which resource.  The related fight, within the US Air Force, was how many resources went into their space division, versus resources to pilots and planes.

I got a peanut gallery seat, because in San Diego, right next to the I-5 freeway on the way into downtown, is this huge Navy building prominently labeled “SPAWAR.”  I’d always assumed that, in addition to running the Navy’s program on using dolphins and sea lions as patrol animals (among other things), it held some important chunk of the Navy’s space warfare command.  And it did, until June 2019, when it pivoted into Information Warfare.  Long story short, the fight over whether each service had its own space arm, or whether one service got to bogart the satellites (so to speak), was finally settled (for now!) in favor of the US Air Force becoming the US Air and Space Force in all but name.

Trouble is, the satellite intelligence arm of the USAF was apparently not getting sufficiently funded, presumably because jet pilots are cool while others drool, and more importantly, because fraternal funding battles are where the echelons above reality get their combat experience and promotions.  Anyway, long story short, the funding and independence battle between air pilots and space cadets had been going on for decades, and in 2019 the solution (first proposed in the early 2000s) was to split off the space wing of the USAF into a semi-separate US Space Force which would run the military’s space efforts.  However, the USSF is under the Secretary of the Air Force, just as the US Marine Corps is under the Secretary of the Navy.  So the USSF is a separate force, but not very separate just yet.  I suspect the cadets in Colorado Springs who go in for the new BS in Space Operations are going to get heartily sick of the “space cadet” label.

So that’s the US Space Force.  It’s not boots in the sky just yet (or weapons deployed in space).  Right now it’s about flying satellites and doing things with them.  This IS a critical part of the US military, regardless of whether there are weapons up there or not, so I’m actually okay with them being their own force.

The fun part is going forward: what this means for science fiction, specifically the military culture of space.  To begin with, although I only watched a few episodes of Stargate, I’m perfectly aware that the USAF already has been represented in milSF quite successfully (for those who don’t know, the Stargate of the series was run by the US Air Force, who apparently cheerfully cooperated with the filming of the series).  While I’m not a military SF expert, my major exposure has been to a certain Honor Harrington, whose military is based rather more on the British Royal Navy of centuries past.  Basing a military SF story on memes swiped from the modern USSF will be *extremely* different than something aping the Honorverse, possibly in useful ways.

Let’s start with the parameters of space warfare.  Assuming interstellar spaceships are possible, and especially assuming FTL is possible, how do you shoot at a spaceship?  It’s moving far too rapidly for a human to perceive, and probably far too small to see due to extreme distances.  Star Wars and kin notwithstanding, bodies in space normally move 1-2 orders of magnitude faster than a bullet, so in real life, you don’t hit them by firing guns at them.  At best we’re looking at machines firing lasers or missiles at each other and maybe occasionally hitting, sort of like WWI torpedoes.  The idea of Cpl Luke or MSgt Han swinging a gun, acquiring a target, and hitting with the shot is orders of magnitude too slow.
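
The ranges-and-speeds argument is easy to make quantitative.  A back-of-the-envelope sketch (the 30 km/s figure is roughly Earth's orbital speed; all the numbers are illustrative, not from any real weapons system):

```python
C = 299_792_458.0  # speed of light, m/s

def lead_distance(range_m: float, target_speed_ms: float) -> float:
    """How far a target drifts while a light-speed beam crosses the range."""
    beam_flight_time = range_m / C
    return target_speed_ms * beam_flight_time

# At one light-second of range, a target moving at 30 km/s
# drifts 30 km before the beam even arrives.
drift_m = lead_distance(C * 1.0, 30_000.0)
```

So even a laser has to aim at where the ship will be, and any unpredictable jink bigger than the ship's own length defeats the shot.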

Anyway, that’s space warfare interpreted as conventional warfare, 20th Century style.  And it works in stories.  David Weber, to his great credit, made a lot of fun and profit in the Honorverse by making space naval broadsides cool again.

However, we’re in the 21st Century, and hybrid warfare is the thing these days, rather than battleships or even aircraft carriers.  Can you destroy a starship with physical sabotage, cyber warfare, or social hacking?  Why yes, yes you can.  The speed of the spaceship is not only irrelevant to such attacks, it actually makes sabotage more dangerous and harder to detect in the resulting debris field.  And not so oddly, the USSF strongly appears to be a hybrid warfare force.  Its lack of guns in space may be completely irrelevant to its legitimacy or even its deadliness.

If someone wants to do military SF about interstellar warfare now, it is, yes, possible to unlimber the gimbal-mounted laser cannon and go pew pew pew, as has been done for over 40 years.  Or you can create a universe where starships in full flight are moving too fast to hit with anything except maybe an exceptionally lucky shot with a laser, or perhaps a really well guided, really expensive rocket (and even then, the chance of a hit is fairly abysmal, considering how much the munition costs to launch).  Therefore, if you want to wage successful warfare against an enemy starship in flight, you attack when they’re in orbit around planets (moving slower in predictable paths) or on the ground.  Or you attempt to hack them, booby-trap the information going into the ship, and subvert the social networks running it, using every weakness you can find in the crew.  Countering such attacks, as we know now, is hard.  It also can make for interesting storytelling.  Is someone aboard ship a mole, a kamikaze saboteur, or just cracking slightly faster than everyone else under an unending bombardment of psychological warfare?  Therein lies part of a story.

There are parallels in older science fiction.  Here, I’m thinking particularly of James Schmitz and his psionics.  I suspect one can draw memes from Schmitz’s psionics stories and repurpose them to our emergent AI era, where you can use big data and machine learning to get inside someone’s skull almost as effectively as a budding telepath could.  Perhaps not so oddly, Schmitz served during WW2 in the US Army Air Corps, the predecessor to the USAF.

Then there’s the whole military culture thing.  I’m not a veteran, but I do like to read, and one of the books I’ve reread several times is Carl Builder’s 1989 The Masks of War: American Military Style in Strategy and Analysis.  It’s obviously a bit obsolete, but it’s still relevant, because it talks about the different ways the Navy, Army, and Air Force go about dealing with reality.  For universe building it’s a worthwhile read, combined of course with other, more modern sources (I’d suggest Chris Hadfield’s An Astronaut’s Guide to Life on Earth, for one).

For example, SF traditionally has space admirals, because spaceships are independent commands like naval ships, and… but that’s not how the generals of the Space Force would work, if you believe Masks of War.  They’re less about crusty tradition, and far more about technological superiority, creating doctrine to implement long-term strategies, and using technical analyses to inform their opinions (I don’t think they’ll ever just trust their feelings, whether the Force is strong in them or not).  If this sounds like corporate America, Builder noted the similarity.

As an example of the cultural difference, US naval aviators identify as Navy officers first, pilots second, while Air Force aviators identify themselves by the kinds of planes they fly.  It’s a different mindset, and it leads to a different culture of warfare.  Again, you can see this a bit in Schmitz’s writing, where battles are often less about naval engagements in space, and more about quick shoot-outs using the highest-tech guns, with a large side order of skullduggery.

Incidentally, that skullduggery has historical roots.  If I remember correctly, CIA officers who were required to be military officers often got themselves commissioned in the Air Force rather than the other services, and there’s currently a big overlap between the US Space Force and the “civilian” (hah!) National Reconnaissance Office, which does US satellite espionage.  The USAF and the black world of clandestine military activity have been closely associated for a very long time.

I could certainly go on, but if you’re interested in writing military SF, or even writing SF stories about interstellar flight, it’s worth taking a long, even sidelong, look at this new US Space Force and seeing whether the difference sparks your creativity.  Yes, you can still go from windjammers to sunjammers if you must (with space marines doing drops instead of landing on beaches! And crusty admiralty politics among the Lords of Space!).  But if you want to be new and different, maybe get into mind-hacking in space and starship sabotage, and see where that leads you.  If you’re writing your TO&E chart, instead of having the captain of the starship reporting to a commodore or the space admiralty, you might, alternatively, have the captain of a flight (in USAF terminology, about 100 people, or 3-4 craft, perhaps a starship and attached drone crews) reporting to a lieutenant colonel running the squadron, who in turn reports to the colonel running the wing (in increasing size), who in turn reports to a general running the numbered space force.  And that’s just a trivial example.
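
For what it’s worth, that reporting chain is simple enough to sketch as data.  The unit names and ranks below just follow the rough USAF analogy in the text; the structure itself is invented for illustration:

```python
# A hypothetical starship chain of command, modeled as a parent-pointer
# table: each echelon knows its commander's rank and the unit it reports to.
UNITS = {
    "flight":               {"commander": "Captain",            "reports_to": "squadron"},
    "squadron":             {"commander": "Lieutenant Colonel", "reports_to": "wing"},
    "wing":                 {"commander": "Colonel",            "reports_to": "numbered space force"},
    "numbered space force": {"commander": "General",            "reports_to": None},
}

def chain_of_command(unit: str) -> list[tuple[str, str]]:
    """Walk upward from a unit to the top of the hierarchy."""
    chain = []
    while unit is not None:
        chain.append((unit, UNITS[unit]["commander"]))
        unit = UNITS[unit]["reports_to"]
    return chain
```

Calling `chain_of_command("flight")` walks captain, lieutenant colonel, colonel, general, with no admiral anywhere in sight.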

Yes, yes, I know the USSF is really just a boondoggle.  Nothing here to see at all.  Whatever.  I figure it’s grist for the mill, and if it doesn’t fight, maybe it will still inspire something fun to read.

What did I miss?

02-Feb-20
Two-Bit History [ 2-Feb-20 12:00am ]

By now, you have almost certainly heard of the dark web. On sites unlisted by any search engine, in forums that cannot be accessed without special passwords or protocols, criminals and terrorists meet to discuss conspiracy theories and trade child pornography.

We have reported before on the dark web's "hurtcore" communities, its human trafficking markets, its rent-a-hitman websites. We have explored the challenges the dark web presents to regulators, the rise of dark web revenge porn, and the frightening size of the dark web gun trade. We have kept you informed about that one dark web forum where you can make like Walter White and learn how to manufacture your own drugs, and also about—thanks to our foreign correspondent—the Chinese dark web. We have even attempted to catalog every single location on the dark web. Our coverage of the dark web has been nothing if not comprehensive.

But I wanted to go deeper.

We know that below the surface web is the deep web, and below the deep web is the dark web. It stands to reason that below the dark web there should be a deeper, darker web.

A month ago, I set out to find it. Unsure where to start, I made a post on Reddit, a website frequented primarily by cosplayers and computer enthusiasts. I asked for a guide, a Styx ferryman to bear me across to the mythical underworld I sought to visit.

Only minutes after I made my post, I received a private message. "If you want to see it, I'll take you there," wrote Reddit user FingerMyKumquat. "But I'll warn you just once—it's not pretty to see."

Getting Access

This would not be like visiting Amazon to shop for toilet paper. I could not just enter an address into the address bar of my browser and hit go. In fact, as my Charon informed me, where we were going, there are no addresses. At least, no web addresses.

But where exactly were we going? The answer: Back in time. The deepest layer of the internet is also the oldest. Down at this deepest layer exists a secret society of "bulletin board systems," a network of underground meetinghouses that in some cases have been in continuous operation since the 1980s—since before Facebook, before Google, before even stupidvideos.com.

To begin, I needed to download software that could handle the ancient protocols used to connect to the meetinghouses. I was told that bulletin board systems today use an obsolete military protocol called Telnet. Once upon a time, though, they operated over the phone lines. To connect to a system back then you had to dial its phone number.
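
Telnet, for what it is worth, is little more than a raw TCP stream (plus some option-negotiation bytes a board may send at you).  A minimal sketch of grabbing a board's banner with nothing but the standard library; the host and port here are hypothetical:

```python
import socket

def peek_bbs(host: str, port: int = 23, nbytes: int = 512,
             timeout: float = 5.0) -> bytes:
    """Open a raw TCP connection to a telnet-era BBS and read its banner."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        return conn.recv(nbytes)

# Hypothetical example; real boards live at the addresses your directory knows.
# banner = peek_bbs("bbs.example.org")
```

A dedicated client like SyncTerm does far more than this, of course: it interprets the ANSI escape codes that give these boards their lurid colors.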

The software I needed was called SyncTerm. It was not available on the App Store. In order to install it, I had to compile it. This is a major barrier to entry, I am told, even to veteran computer programmers.

When I had finally installed SyncTerm, my guide said he needed to populate my directory. I asked what that was a euphemism for, but was told it was not a euphemism. Down this far, there are no search engines, so you can only visit the bulletin board systems you know how to contact. My directory was the list of bulletin board systems I would be able to contact. My guide set me up with just seven, which he said would be more than enough.

More than enough for what, I wondered. Was I really prepared to go deeper than the dark web? Was I ready to look through this window into the black abyss of the human soul?

The vivid blue interface of SyncTerm. My directory of BBSes on the left.

Heatwave

I decided first to visit the bulletin board system called "Heatwave," which I imagined must be a hangout for global warming survivalists. I "dialed" in. The next thing I knew, I was being asked if I wanted to create a user account. I had to be careful to pick an alias that would be inconspicuous in this sub-basement of the internet. I considered "DonPablo," and "z3r0day," but finally chose "ripper"—a name I could remember because it is also the name of my great-aunt Meredith's Shih Tzu. I was then asked where I was dialing from; I decided "xxx" was the right amount of enigmatic.

And then—I was in. Curtains of fire rolled down my screen and dispersed, revealing the main menu of the Heatwave bulletin board system.

The main menu of the Heatwave BBS.

I had been told that even in the glory days of bulletin board systems, before the rise of the world wide web, a large system would only have several hundred users or so. Many systems were more exclusive, and most served only users in a single telephone area code. But how many users dialed the "Heatwave" today? There was a main menu option that read "(L)ast Few Callers," so I hit "L" on my keyboard.

My screen slowly filled with a large table, listing all of the system's "callers" over the last few days. Who were these shadowy outcasts, these expert hackers, these denizens of the digital demimonde? My eyes scanned down the list, and what I saw at first confused me: There was a "Dan," calling from St. Louis, MO. There was also a "Greg Miller," calling from Portland, OR. Another caller claimed he was "George" calling from Campellsburg, KY. Most of the entries were like that.

It was a joke, of course. A meme, a troll. It was normcore fashion in noms de guerre. These were thrill-seeking Palo Alto adolescents on Adderall making fun of the surface web. They weren't fooling me.

I wanted to know what they talked about with each other. What cryptic colloquies took place here, so far from public scrutiny? My index finger, with ever so slight a tremble, hit "M" for "(M)essage Areas."

Here, I was presented with a choice. I could enter the area reserved for discussions about "T-99 and Geneve," which I did not dare do, not knowing what that could possibly mean. I could also enter the area for discussions about "Other," which seemed like a safe place to start.

The system showed me message after message. There was advice about how to correctly operate a leaf-blower, as well as a protracted debate about the depth of the Strait of Hormuz relative to the draft of an aircraft carrier. I assumed the real messages were further on, and indeed I soon spotted what I was looking for. The user "Kevin" was complaining to other users about the side effects of a drug called Remicade. This was not a drug I had heard of before. Was it some powerful new synthetic stimulant? A cocktail of other recreational drugs? Was it something I could bring with me to impress people at the next VICE holiday party?

I googled it. Remicade is used to treat rheumatoid arthritis and Crohn's disease.

In reply to the original message, there was some further discussion about high resting heart rates and mechanical heart valves. I decided that I had gotten lost and needed to contact FingerMyKumquat. "Finger," I messaged him, "What is this shit I'm looking at here? I want the real stuff. I want blackmail and beheadings. Show me the scum of the earth!"

"Perhaps you're ready for the SpookNet," he wrote back.

SpookNet

Each bulletin board system is an island in the television-static ocean of the digital world. Each system's callers are lonely sailors come into port after many a month plying the seas.

But the bulletin board systems are not entirely disconnected. Faint phosphorescent filaments stretch between the islands, links in the special-purpose networks that were constructed—before the widespread availability of the internet—to propagate messages from one system to another.

One such network is the SpookNet. Not every bulletin board system is connected to the SpookNet. To get on, I first had to dial "Reality Check."

The Reality Check BBS.

Once I was in, I navigated my way past the main menu and through the SpookNet gateway. What I saw then was like a catalog index for everything stored in that secret Pentagon warehouse from the end of the X-Files pilot. There were message boards dedicated to UFOs, to cryptography, to paranormal studies, and to "End Times and the Last Days." There was a board for discussing "Truth, Polygraphs, and Serums," and another for discussing "Silencers of Information." Here, surely, I would find something worth writing about in an article for VICE.

I browsed and I browsed. I learned about which UFO documentaries are worth watching on Netflix. I learned that "paper mill" is a derogatory term used in the intelligence community (IC) to describe individuals known for constantly trying to sell "explosive" or "sensitive" documents—as in the sentence, offered as an example by one SpookNet user, "Damn, here comes that paper mill Juan again." I learned that there was an effort afoot to get two-factor authentication working for bulletin board systems.

"These are just a bunch of normal losers," I finally messaged my guide. "Mostly they complain about anti-vaxxers and verses from the Quran. This is just Reddit!"

"Huh," he replied. "When you said 'scum of the earth,' did you mean something else?"

I had one last idea. In their heyday, bulletin board systems were infamous for being where everyone went to download illegal, cracked computer software. An entire subculture evolved, with gangs of software pirates competing to be the first to crack a new release. The first gang to crack the new software would post their "warez" for download along with a custom piece of artwork made using lo-fi ANSI graphics, which served to identify the crack as their own.

I wondered if there were any old warez to be found on the Reality Check BBS. I backed out of the SpookNet gateway and keyed my way to the downloads area. There were many files on offer there, but one in particular caught my attention: a 5.3 megabyte file just called "GREY."

I downloaded it. It was a complete PDF copy of E. L. James' 50 Shades of Grey.

If you enjoyed this post, more like it come out every four weeks! Follow @TwoBitHistory on Twitter or subscribe to the RSS feed to make sure you know when a new post is out.

Previously on TwoBitHistory…

I first heard about the FOAF (Friend of a Friend) standard back when I wrote my post about the Semantic Web. I thought it was a really interesting take on social networking and I've wanted to write about it since. Finally got around to it! https://t.co/VNwT8wgH8j

— TwoBitHistory (@TwoBitHistory) January 5, 2020
31-Jan-20
Scarfolk Council [ 31-Jan-20 8:34am ]
Welcome to Scarfolk... [ 31-Jan-20 8:34am ]

30-Jan-20
NHS Face Removals (1977- ) [ 30-Jan-20 2:55pm ]

While some children were born without faces simply because they didn't deserve them (see the Scarfolk Annual 197X), the government became increasingly concerned about citizens who did have them. They found that people with faces are more likely to have personal desires, hopes and dreams, in short: a will and ideas of their own. 
Such idiosyncrasies were not only thought of as needlessly self-indulgent, they were also deemed inconsistent with the smooth running of a successful society. Scarfolk's was the first council benevolent enough to offer face removals on the NHS.
In 1976, the council trialled face removals on stray foreigners, prisoners, children nobody wanted, unsuspecting people who were picked up leisurely walking in a park after sundown and volunteers (see leaflet above). 
When the full scheme was rolled out in 1977, the council soon lost track of which faceless citizen was which. By 1978 a new law was passed which dictated that all faceless people were required to have a tattoo of their old face over their lost one to make identification easier.
26-Jan-20

Vance’s Dying Earth series (1950-1984) is one of the more famous series in fantasy, influential not least for killing off loads of Dungeons and Dragons magic-users via the Vancian “fire and forget” magic system.  However much you love or loathe the books, there’s a bunch of stuff Vance got wrong.  If an enterprising author wants to play in the far-future Earth/Dying Earth subgenre, given what we know now, it would be quite different than Vance envisioned.  And Hot Earth Dreams can help.  First, about the Dying Planet genre: it’s not just Vance.  There’s Burroughs’ Barsoom, Clark Ashton Smith’s Zothique, Wolfe’s The Book of the New Sun, and many others.  Still, Vance’s is probably the best known (possibly after Barsoom), and it’s the one people think of.  Sun’s going out, civilization is decaying, magic has replaced science, and morality becomes, erm, more transactional.

What Vance got wrong was the idea of the sun going out like a guttering coal.  As we know now, the Sun instead is going to get hotter, ultimately evaporating off Earth’s oceans and making surface life impossible before it (probably) grows into a red giant and swallows our planet.  So pale white people wandering around on a shady world is almost certainly not going to happen.  But the sun sterilizing Gaia is around a billion years off, and there’s a lot of future between now and then.

Then there’s the whole supercontinent (“Zothique?”) in our future.  That may be 150-350 million years hence.  Here are four separate scenarios for how that supercontinent might form.  Here are two views of a fifth, and there’s a sixth I can’t find a link to that’s totally different.  Also, there’s good reason to think that we don’t really understand plate tectonics as well as we might, now that we’re getting a better understanding of “Earth’s interior continents” and what happens after continents subduct.  Anyway, if you believe in the whole supercontinent cycle thing (tl;dr, supercontinents form every 300-500 million years, details endlessly argued over), we’ve got enough time for another two Pangeas in the next billion years.  Just because they’re so alien to us, I’m going to focus on these going forward, but the future could be just about any arrangement, given what we know now.

Here are some things to understand about a supercontinent-ish planet.

  • Earth has two modes, icehouse and hothouse.  We’re currently in an icehouse, and the climate change disaster is disastrous only because we’re rapidly and temporarily shoving Gaia into hothouse mode.  While Earth has spent about 80% of the last 500 million years or so in hothouse mode, we are children of the ice, and most hominid evolution took place in the context of repeating ice ages.  Worse, perhaps, the last time the Earth jumped from icehouse to hothouse was the end of the Carboniferous, so we don’t have an “evolutionary memory” of living species that have gone through this.  That’s what makes Hothouse Earth so dangerous to us, and especially for global civilization.
  • Getting back to Pangea Nextia (a name chosen because it hasn’t been published for a model, unlike Pangea Proxima, P. Ultima, or P. Nova): it’s almost certainly going to be a hothouse, with no polar ice caps and a fairly low temperature gradient from the equator to the poles (meaning hot poles, a hotter equator, large dead zones in the ocean deeps, really hot subtropical deserts, few if any everwet equatorial forests, and most of the diversity in the large number of para/sub/dry seasonal tropical forests everywhere from near the equator to around 50° north, minus the deserts around 30° north where the Hadley Cell comes down, except for islands, mountaintops, etc.).
  • Pangea Nextia will have a super-Saharan desert at the same latitude (north or south) as the current one, because that’s the way global climate works.  It will also likely have Himalayan-style mountains where continents plowed into each other (The Himalayas are where India plowed into Asia), Andes-style mountains where oceanic crust subducted under the edge of a continent, and Alps/Zagros/Mediterranean/etc mountains where large continents coming together crushed a bunch of ocean and a small continent between them.  The cordilleras will be sort of like the stitches on Frankenstein’s assembly scars, and for much the same reason.

Why is this all important?  Climate dictates how and where people live, so knowing how supercontinental biomes work helps set the scene.  Mountains are water towers, not just from mountain glaciers, but because water percolates into mountains and comes out in mountain springs.  This is probably why they’re so important in so many religions.  They’re not just places to get closer to god, the waters coming off mountains keep people alive, as well as providing refuges for all sorts of life.  Your scenario likely has rivers running from mountains, deserts, and/or monsoonal forests in it.  Understand why they’re important, and you’ll understand why people revere, fight over, and take refuge in them. Hint hint.

Speaking of life, what does the future hold?

  • The first question is about mass extinction events, and how many the Earth will experience before your Dying Earth scenario starts.  We may or may not trigger a mass extinction in the next 50 years (it’ll be submassive if not truly mass).  Other likely extinction triggers are large igneous provinces and asteroid impacts, of which the former is much more common.  If I had to guess where the next large igneous province is going to emerge, I’d say under the African Rift or the Canary Islands (look at the simulation in this article, and see where the big blobs of magma are close to the surface).  It’s possible a fat LIP will emerge well before the next supercontinent forms, so that’s one, if not two, mass extinctions prior to the scenario time.  And possibly a third, if the suturing together of continents causes another mass extinction from radical amounts of mountain building.  If you want to go most of a billion years, that’s possibly another seventeen extinction events, including two asteroid strikes, any number of LIPs, and petroleum-based civilization recreating itself from resequestered petroleum at least five if not ten times (every 100-200 million years).
  • How does life survive extinction events?  The quick answer is underground, which is why the super-rich building tunnels-for-trolls survival bunkers may be evolutionarily significant.  I think there’s a decent case that animals and plants that can live extended periods underground tend to survive mass extinctions.  Animals do this by, erm, making their burrows part of their extended phenotype (see the book referenced above), while plants normally hide seeds underground, which is why plant evolution really doesn’t show the changes during extinction events that animal evolution does.  What tends to get erased by extinction events are complex ecosystems like coral reefs and forests.  Forests after an extinction event are often quite different than before, not just because the large herbivores are missing, but because so are the specialist symbiotes (pollinators, pathogens, and parasites).  These all take 5-20 million years to re-evolve, depending on the severity of the extinction event.  Ditto for coral reefs.
  • Going forward, as the Earth warms we can expect C4 photosynthesis to start dominating over C3 plants (the current norm).  C3 photosynthesis evolved billions of years ago, before there was a lot of oxygen in the atmosphere, and when the sun was dimmer.  As a result, the key enzyme (rubisco) has a bad habit of screwing up when it’s too hot or there’s too little CO2 around.  Plants have a lot of cellular machinery to deal with this (see photo-oxidation, heat shock proteins, and others).  C4 basically adds a turbocharger to the photosynthetic cells to boost the level of CO2 encountered by rubisco (the “turbo” is a mnemonic, because C4 plants have a distinctive type of ring anatomy in the cells of their leaves which gives their nature away).  The way it works is that some cells do C4 photosynthesis, creating a 4-carbon compound that is passed to other cells doing conventional C3 photosynthesis, adding extra carbon that gets broken down.  C3 photosynthesis in the receiving cells then produces 3-carbon building blocks that get turned into 6-carbon sugar molecules.  Anyway, C4 has evolved a number of times, but entirely among angiosperms and entirely in the last hundred million years or less.  The most familiar examples are maize, sugarcane, and sorghum, but it shows up in a number of dicot plants, mostly (but far from entirely) in the Caryophyllales and Euphorbiaceae.  With the exception of a few rare trees on the Hawaiian Islands, C4 plants are entirely herbaceous, and the Hawaiian species are probably an example of the phenomenon of insular woodiness, which is incredibly cool if you’re a plant nerd (look it up).

The reason for C4 plants being herbaceous, often weedy, is actually important to scenario building for two reasons.  One is that, in plants, radical new adaptations (flowers, compound flowers, etc) tend to pioneer their shtick as vagrants (e.g. weeds) in highly disturbed areas.  Once they succeed in these edgy venues, they start colonizing more complex, intact ecosystems, eventually evolving into dominant forest species and the like.  This shows up in plant clades often starting off with small, wind- and gravity-dispersed seeds, then evolving towards bigger seeds and animal-dispersed fruits that are more suitable for competing in a forest.  If you’re thinking about how plants evolve from weeds to forest giants after extinction events, this is how they do it, and this is why C4 plants are among those that will likely become more dominant in the future–many of our currently really obnoxious weeds are C4 plants.

The other thing is that C4 plants do better with bright lights and high temperatures than do C3 plants, but only up to a point.  Corn, for example, overheats around 45°C, with grain production dropping rapidly as temperatures rise in this region and plants dying when they go past their limits.  As a result, corn production is likely to take a huge hit in the next century, as areas that grow corn in the summer find it too hot to deal with.  C3 winter wheat production is modeled as being less harmed, because this cool season crop doesn’t hit its upper limits during climate change, and indeed it might do better.  A century from now, cool season corn might be the thing, but the bigger point is that every plant has its upper limits.  C4 stretches but does not eliminate those limits.  That’s what keeps C3 plants around, in cooler and shadier areas.  But yes, monsoon forests dominated by salt-oaks (the big-seeded descendants of today’s chenopod salt bushes) could easily be a thing on Pangea Nextia.  So could giant sugar cane brakes.

  • Now let’s look at human evolution.  Why expect humans to be around in hundreds of millions of years?  This is what Hot Earth Dreams is about, and if you’re reading this blog, you may well have read the book.  The tl;dr version is that I think that humans have two inheritance systems: genes and culture.  We do evolve genetically, but culture evolves radically faster than genes do.  If we want to become oceanic piscivores, we don’t evolve webbed feet, we learn how to build boats and create and use fishing gear.  And if that no longer works as a lifestyle, we fisherfolk  can go ashore and go into symbiosis with large ruminants (become cowboys) or whatever.  The speed at which cultures adapt buffers what would otherwise be strong selective pressures on our genes, meaning our genetic evolution gets slowed.  Since I don’t think people get to be fisherfolk for hundreds of generations (or civilized, or farmers, or cowboys, or artists, or whatever), this slows our genetic evolution down, which is why I think it’s plausible to assume that humans could survive into the very deep future.

This doesn’t mean that I think humans won’t evolve.  In fact, some of the biggest genetic selection pressures on humans come not from evolution but from coevolution, from our relationships with other organisms.  These show up in things like the rapid spread of lactose tolerance (due to our symbioses with dairy animals), various disease tolerance genes (due to exposure to epidemics from living in settlements linked by long-distance trade routes), and possibly some genetic tolerance of things like alcohol and sugar (due to our symbioses with yeast and sucrose producers).  I’m using the terminology of symbiosis because I really like Thompson’s geographic mosaic theory of coevolution (read about it here and here).  The idea is basically that evolution proceeds through interaction among species within particular environments, so it’s about genotype 1 affecting genotype 2 while both interact in ecosystem A, while different interactions happen between other populations of the same organisms in different environments.  Mosaic coevolution is (IMHO) a really handy theory for worldbuilding, because it helps you understand how every place becomes different.

Humans domesticating other species and doing agriculture, forest management, hunting, fishing, and so on are all examples of how we interact with particular populations of various species in different, geographically bounded environments.  Right now, at Peak Civilization, coevolution to deal with humans is the major selective force on a huge number of species.  They must become a symbiont/pet/agricultural species, a pest, or a commensal, or end up utterly useless and ignored.  Oh, and they must survive climate change, pollution, and anthropogenic habitat loss.  Our current, relentless evolutionary pressure will change drastically over the course of the next century, most likely as our civilization crashes, but possibly if we figure out this quasi-mythical sustainability thing and calm down. Regardless, it’s a huge thing now, and coevolution with humans across all the habitats we occupy will continue to be a big thing into the future.

  • Now imagine life coevolving with humans for hundreds of millions of years.  Right now, a lot of animals don’t really understand humans the way domesticated species like dogs and horses do.  But going forward, it’s likely that a wide variety (possibly a huge majority) of animals will evolve to become able to decode our signals and hack our cultures, again, the way dogs do.  They won’t be smart in a human sense, but they’ll be clever like Clever Hans. Plants and fungi will do their own versions of this adaptation (you can read about it in Botany of Desire).

This may seem abstract, so let’s talk about the difference between Africa and Papua New Guinea.  Modern humans first evolved in Africa over 300,000 years ago, while they got to Papua maybe 50,000 years ago.   Africa sustained the fewest megafauna extinctions of any continent, while Papua had only a few megafauna-type animals (bear-sized) that were wiped out tens of thousands of years ago.  However you feel about the whole Younger Dryas extinction thing, Africa seems to be a place where the wild animals know how to deal with people a lot better than just about anywhere else other than maybe south Asia.  It’s reasonable to think this is due in part to the animals coevolving with evolving humans for a really, really long time compared with what animals in most of the rest of the world experienced.

Going forward 200,000,000 years with humans continually present, what animals in Africa do now in the way of dealing with humans will seem quaint.  Animals will have coevolved with humans starting when they were rat-equivalents who had just survived an extinction event by hiding in bunkers with us.  And they may have evolved to elephant size over the ten million years afterwards, also in continual proximity to us, despite, or perhaps because of, all we did to them.  Whatever their relationships with us (partners, food, social parasites, predators, commensals, amensals, parasites, etc.), they will know us very, very well. And we’ll know them.

In the Papuan mountains, you can sit around a campfire, even go hunting at night, without worrying about anything more than an accident, getting malaria from a mosquito, or stepping on a snake.  In the African bush, you’ve got all that, plus lethal encounters with lions, leopards, hyenas, and hippos (among others), so you surround your campsite with a boma of thorny branches to keep the problem species from eating you (or at least you keep the fire going all night), and you don’t hunt at night.  The deep future will look like Africa, and it’s quite likely that the local megafauna will have coevolved with us.  A boma may be the minimum needed.  Or perhaps you’ll be able to make an arrangement with the equivalent of a local pack of hyenas to not eat you in exchange for you cooking whatever they catch and tending their den for them.  The possibilities are endless.

And that doesn’t even include crops.  There are several authors who argue (I think wrongly) that the rise of and continued existence of civilization depends on the extensive cultivation of grain, specifically barley, wheat, rice, or maize.  I’m not going to regurgitate their argument, or why Hawai’i disproved it, but we humans have intimate and complex relationships with the grasses, including corn, sugar cane, bamboo, rice, wheat, barley, rye, sorghum, millet, tef, and so on (even lawns!).  Given 100 million years or more of continued coevolution, we may be as intimately connected to our grain crops as leafcutter ants are to their fungal colonies.  Or not.  But with crops, it’s not just worth considering what exotic new crops will evolve and what old crops stay; it’s also worth thinking about how our symbioses with plants will deepen and become richer and possibly more necessary over hundreds of millions of years.  And if grains bore you, think about 200,000,000 years of caffeine or alcohol coevolution.

As for human culture, in Hot Earth Dreams I made two points.  First, there’s the theory that languages effectively randomize (with the exception of baby talk) over maybe 10,000 years.  We’ll never know the languages of the last ice age, let alone the first human language, and in 10,000 years or less, English will have utterly vanished.  Second, archaeologically and culturally we seem to have a window of around 5,000 years in which we can know anything at all useful.  Prior to that, we’re increasingly limited to whatever rare artifacts were randomly preserved.  Looking at the first 300,000 years of human existence, there is very little we can know about most of that history, and it grows less every year.

Going into the future this erasure will get worse. Humans will have to recycle the ruins of old cities, because we will have exhausted readily available ore bodies, so we’ll have to remix the resources of the past to make the present.  Do this for hundreds of millions of years, and that will be future culture.  People will likely know that humans have been around “forever,” and they’ll likely have some idea of how long “forever” was (American Indians had some notion of deep time, because they had both exposed fossils and geologic evidence of past climates that was obvious enough for them to get it).  But they won’t remember us.  Mass extinctions and cultural erosion will see to that.  Even things like race and ethnicity now appear only a few thousand years old, so they won’t look like us either.

For me, at least, the notion of hundreds of millions of years of coevolution with a world that complexifies the blurred boundaries between wild, feral, tame, domestic, and civilized is one of the chief appeals of a Far Deep Future scenario.  The world may well be slowly dying, and humans will certainly be as flawed as ever, but we certainly won’t be alone.  Instead, we’ll be surrounded by a full panoply of species, megaflora to nanofauna.  Some of which will work with us, more of which will live with us, increasing numbers of which will live on or in us (stirges!), and still others will have evolved so that they are useless to us and avoid us completely.  And some will love us for what they get from us, whether or not we reciprocate their feelings.  Humans that survive into the deep future will be recognizably human, definitely understandable, but they won’t be us, and the world they live with will think we’re standoffish, isolated, and socially inept (wild even, feral at best) compared to them.

What did I miss?

20-Jan-20
HotWhopper [ 20-Jan-20 4:31pm ]
I sometimes wonder at the shameless way deniers boast about their ignorance, particularly their lack of understanding of basic science. Willis Eschenbach is a prime example. He doesn't understand science and doesn't make any real effort to understand it. He balks at reading a basic textbook and I doubt he could bring himself to read a science website let alone scientific papers. Yet every now and then he'll decide he's come up with some brand spanking new notion that none of the hundreds of thousands of people who've studied a subject in depth have ever thought of.

Some time ago he figured out what every student (and interested layperson) knew long ago, that storms carry heat from the surface upwards into the atmosphere, thereby cooling the surface; his thunderstorm theory.

This week he's decided there are three of what he calls "theories" of the greenhouse effect, demonstrating that he doesn't understand that radiation is the emission or transmission of energy. He was trying to attack a tweet thread by Gavin Schmidt and his attack was laughable (and very, very long-winded).

After 1,149 words of what is presumably meant to be an introduction, Willis finally gets down to business and "starts", writing:
Let me start by saying he is badly conflating three very separate and distinct theories.
  • Theory 1) Increasing CO2 increases atmospheric absorption, which affects the overall temperature of the various layers of the atmosphere, and increases downwelling so-called "greenhouse" radiation.
  • Theory 2) In the short term, large changes in downwelling radiation change the surface temperature.
  • Theory 3) In the long term, small continuing increases in downwelling radiation lead to corresponding small continuing increases in global surface temperature.
Here the spoiler alert: I think that the first two of these are true (with caveats), but we have virtually no evidence that the third one is either true or untrue.
The "he" is Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS). Dr Schmidt understands more about climate than Willis could ever hope to learn. As you can tell, Willis doesn't even understand what radiation is or he'd never have split the above into "very separate and distinct theories".




In Theory 1, I wonder what Willis thinks is being absorbed by the atmosphere. Whatever it is, he decides it's "affecting" the overall temperature but he doesn't say how. Is it raising the temperature or lowering it? The words he uses are odd: "and increases downwelling so-called "greenhouse" radiation". Does he know what that means? I'd say not because he says that his Theory 2 is quite "separate and distinct" from his Theory 1.

His Theory 2 is that if radiation is transmitted downward it changes the surface temperature. Yet that's a corollary of his Theory 1, not a separate notion. He's already said that Theory 1 includes downwelling of radiation, so how can Theory 1 be separate and distinct from Theory 2? I can only conclude that Willis doesn't know what radiation is.

Going on to his Theory 3, the only difference between that and his Theory 2 is time. What he's saying is that after whatever his "short term" time has elapsed, physics stops working. That is, increases in downwelling radiation no longer warm the surface.
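Willis's three-way split can be checked against basic physics. Below is a toy radiative-balance sketch (my own illustration with ballpark flux values, ignoring convection and evaporation; none of these numbers come from Willis or Schmidt) showing why there is no timescale at which extra downwelling radiation stops warming a surface:

```python
# Toy model: a surface in radiative equilibrium emits what it absorbs,
# sigma * T^4 = absorbed flux, so the equilibrium temperature rises
# monotonically with ANY sustained increase in downwelling radiation.
# There is no "short term only" exemption in the Stefan-Boltzmann law.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(absorbed_flux_wm2):
    """Temperature (K) at which emitted radiation balances absorbed flux."""
    return (absorbed_flux_wm2 / SIGMA) ** 0.25

solar = 161.0        # ballpark absorbed solar at the surface, W/m^2
downwelling = 342.0  # ballpark downwelling longwave, W/m^2

t0 = equilibrium_temp(solar + downwelling)        # before
t1 = equilibrium_temp(solar + downwelling + 4.0)  # +4 W/m^2 downwelling

print(round(t0, 1), round(t1 - t0, 2))  # ~307 K before, ~0.6 K warmer after
```

The timescale only governs how long the surface takes to reach the new equilibrium; whether it warms at all is settled by the flux balance, which is exactly what Willis's Theory 3 tries to wave away.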

Being of a curious nature, I held my nose and dived into the article further, to see where his self-acclaimed brilliance led him. Well, after a lot more verbiage, Willis finally grandly announces his own notion. He claims the climate is stable. He goes even further and writes:
My theory, on the other hand, arose from my being interested in a totally different question about climate—why is the temperature so stable? For example, over the 20th Century, the temperature only varied by ± 0.3°C. 
It's clear that Willis hasn't ever looked at what's been happening on this planet. Here's a chart of temperature change over the twentieth century, and I've included the temperature change for the entire record from NASA GISS, right up to the end of last year:



Data source: NASA GISS
Anyone who tries to portray that as "stable" is pulling your leg, making a joke. Anyone who portrays an increase of 0.7°C as varying "by ± 0.3°C" is misleading you. (I can imagine the yarn he tells his "gorgeous ex-fiancee" when she asks where $20,000 disappeared to. He'll just say: oh, no problem. Our savings just varied by ± $10,000 in the last year. Happens all the time.)

Willis says the climate is stable (huh? No ice ages, no hothouses?) because of what he calls "emergent phenomena". Basically, he's claiming that the planet can't get hot even if less energy leaves the system. He doesn't "believe in" physics. Instead he boasts how he's invented "how the climate works", and points to his "40 or so posts" at the self-same climate conspiracy blog WUWT. I've commented on quite a number of those 40.

The last 670 words could have a heading "ode to Willis", where he argues for why he's a genius, despite being a college dropout, and why he's God's gift to climate science. He ends up with these gems:
I have great confidence in what I've written about my theory, for a simple reason. Watts Up With That is the premier spot on the web for public peer-review of scientific theories and ideas about climate. This doesn't mean that it only publishes things known to be valid and true. Instead, it is a place to find out if what is published actually is valid and true. There are a lot of wicked-smart folks reading what I write, and plenty of them would love to find errors in my work.

So when those smart folks can't find errors in what I've written, I know that I have a theory that at least stands a chance of becoming a mainstream view.
Ha ha ha - "the premier spot on the web for public peer-review" - oh my! It's a damn conspiracy theory blog, Willis. It's got nothing to do with peer review or science. The fans are scientifically illiterate. Most of them regard WUWT as nothing but their personal notice board on which to randomly pin their various crazed ideas, which usually have no relevance to the article they're pinned beneath.

Despite their illiteracy, in the past there were quite a few WUWT fans who didn't like Willis much. They might have left by now. Lots of people, even at WUWT, have pointed to flaws in his "40 or so posts" and all he does is spit the dummy. Very rarely he'll acknowledge an error, but mostly he just gets irate.


From the WUWT comments
I decided to see how deniers react to him effectively saying the climate never changes, when one of their favourite rallying cries is "the climate is always changing". I was disappointed. Nobody picked him up on that one. I have only scanned the comments and, as some of you have noticed, the quality is declining. (Yes, what you thought was impossible is in fact possible.) There was little discussion of Willis' article. Mostly it was used as an excuse to post various conspiracy theories, silly attacks on scientists and the usual denier nonsense.

There was one chap who calls him or herself Burl Henry, who wrote a novel notion (at least I've not come across it before). Instead of "it's the sun" he reckons "it's SO2":
I have yet to find any large change, either increase or decrease, which is not related to changing levels of SO2 in the atmosphere, and this is documented in the reference cited.
Nick Stokes found another problem with Willis' article (here) and politely pointed it out. Willis didn't take kindly to that and went off the rails in his usual style. He didn't and couldn't deny it was a problem for him. I doubt he understood.


Further reading from the HotWhopper archives
If you want more about Willis' grand theories and other wonderings, there are plenty of HotWhopper articles to choose from.






Snowgum

The weird get weirder. A bloke called Paul Driessen, whose job includes telling lies about climate change and bringing back smog to the USA, has come up with a wild idea and it's been posted at WUWT. This time he's really gone bananas. What he's saying is that Australia should get rid of all its trees, or at least all its eucalypts, which is pretty much the same thing, and that would stop fires. In other words, he's suggesting we get rid of almost all our forests. That's one solution to stopping fires, though not an original one.

Don't believe me? Here's what he wrote:
In both California and Australia, people bemoan the loss of eucalyptus trees in fires. But many don't want them removed or even thinned out.

Many don't want them removed? Really? How about almost no-one wants them removed. It's only a few shock jocks in Australia, and Paul Driessen, and probably Rupert Murdoch, who want to chop down all our forests. I can understand that people in California would regard the Australian blue gum as a pest - in California. (That's probably the only eucalypt they know.) What I don't understand is why anyone would want to remove all the eucalypts in Australia. All 894 varieties, coast to coast? (What about the koalas?)


In case anyone is thinking, well, Australia has forests dominated by other species, you'd be right. However, most of our forests are eucalypt forests. Here's a map showing the different types, courtesy the Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES):



Many of the other species (acacia, melaleuca and so on) also burn quite readily. Should we chop them down as well? And what about the rainforests that have burnt for the first time in thousands of years? How about the rivers, firebreaks, cleared areas, areas burnt several times already? Maybe we should just cover the country in tar and cement!

Paul Driessen is really and truly suggesting we strip our continent bare of the tree species that defines us and our forests, the eucalypt, destroying the homes of birds, kangaroos, koalas, possums and gliders, along with the shelter for so many other species of flora and fauna.



Deniers want us to chop down millions of hectares of forests and become a treeless nation. Deniers really are most peculiar.

BTW, I've been watching out for this and Paul is the first denier I've come across who says "CO2 is plant food" is one reason Australia's fires were so bad (those weren't his exact words).

Oh, and ignore the rest of his article, it's complete and utter nonsense. Paul got most of it from various wacko websites in the USA and people who know nothing about Australia (like himself). It's a mix of lies, conspiracy theories and nut-job politics that WUWT is known for (for anyone who is familiar with the blog). Paul is pushing the crazy line that scores of people spent days hiking and climbing for miles deep into inaccessible parts of the Great Dividing Range, waited till they were caught in the middle of electrical storms, then set the forest alight. These people, he must assume, were highly coordinated and set off fires all over NSW and Victoria at the same time as lightning was shooting about. These arsonists who want to get rid of Australia's forests are very cunning, aren't they.

WUWT is not recommended for the sane.




17-Jan-20
Lauren Weinstein's Blog [ 17-Jan-20 7:43pm ]


One of the most poignant ironies of the Internet is that at the very time that it’s become increasingly difficult for anyone to conduct their day-to-day lives without using the Net, some categories of people are increasingly being treated badly by many software designers. The victims of these attitudes include various special needs groups — the visually and/or motor impaired are just two examples — but the elderly are a particular target.

Working routinely with extremely elderly persons who are very active Internet users (including in their upper 90s!), I’m particularly sensitive to the difficulties that they face keeping their Net lifelines going. 

Often they’re working on very old computers, without the resources (financial or human) to permit them to upgrade. They may still be running very old, admittedly risky OS versions and old browsers — Windows 7 is going to be used by many for years to come, despite hitting its official “end of life” for updates a few days ago.

Yet these elderly users are increasingly dependent on the Net to pay bills (more and more firms are making alternatives increasingly difficult and in some cases expensive), to stay in touch with friends and loved ones, and for many of the other routine purposes for which all of us now routinely depend on these technologies.

This is a difficult state of affairs, to say the least.

But there’s an aspect of this that is even worse. It’s attitudes! It’s the attitudes of many software designers that suggest they apparently really don’t care about this class of users much — or at all.

They design interfaces that are difficult for these users to navigate. Or in extreme cases, they simply drop support for many of these users entirely, by eliminating functionality that permits their old systems and old browsers to function. 

We can certainly stipulate that using old browsers and old operating systems is dangerous. In a perfect world, resources would be available to get everyone out of this situation.

But of course we don’t exist in a perfect world, and these users, who are already often so disadvantaged in so many other ways, need support from software designers, not disdain or benign neglect.

A current example of these users being left behind is the otherwise excellent, open source “Discourse” forum software. I use this software myself, and it’s a wonderful project.

Recently they announced that they would be pulling all support for Internet Explorer (except for limited read-only access) from the Discourse software. Certainly they are not the only site or project dropping support for old browsers, but this fact does not eliminate the dilemma.

I despise Internet Explorer. And yes, old computers running old OS versions and old browsers represent security risks to their users. Definitely. No question about it. But what of the users who don’t understand how to upgrade? Who don’t have anyone to help them upgrade? Are we to tell them that they matter not at all? Is the plan to try to ignore them as much as possible until they’re all dead and gone? Newsflash: This category of users will always exist!

This issue rose to the top of my morning queue today when I saw a tweet from Jeff Atwood (@codinghorror). Jeff is the force behind the creation and evolution of Discourse, and was a co-founder of Stack Exchange. He does seriously good work.

Yet this morning we engaged in the following tweet thread:

Jeff: At this point I am literally counting the days until we can fully remove IE11 support in @discourse (June 1st 2020)

Lauren: I remain concerned about the impact this will have on already marginalized users on old systems without the skills or help to switch to other browsers. They have enough problems already!

Jeff: Their systems are so old they become extremely vulnerable to hackers and exploits, which is bad for their health and the public health of everyone else near them. It becomes an anti-vaccination argument, in which nobody wins.

Lauren: Do you regularly work with extremely elderly people whose only lifelines are their old computers? Serious question.

Somewhere around this point, he closed down the dialogue by blocking me on Twitter.

This was of course his choice, but seems a bit sad when I actually had more fruitful discussions of this matter previously on the main Discourse discussion forum itself.

Of course his anti-vaxx comparison is inherently flawed. There are virtually always ways for people who can’t afford important vaccinations to receive them. Not so for upgrading computer hardware, software, or getting help working with those systems, particularly for elderly persons living in isolation.

Yes, the world will keep spinning after Discourse drops IE support.

Far more important than this particular case is the attitude expressed by so many in the software community: an attitude suggesting that many highly capable software engineers don’t really appreciate these users, or the kinds of problems that can prevent them from making even relatively simple changes or upgrades to systems they need to keep using as much as anyone does, in the real world.

And that’s an unnecessary tragedy.

–Lauren–

HotWhopper [ 17-Jan-20 12:05pm ]
It's hard to believe but poor Anthony Watts, despite all the help offered him over the years, is still totally befuddled, perplexed and bamboozled by the notion of temperature anomalies. You know he's not the brightest spark in deniersville yet you'd have thought that by now even he might have learnt something about temperature charts. But no.

The oddest thing is that he's unashamed of being numerically illiterate. He might even regard it as a strength. It means his readers have found someone, somewhere, who's dimmer than they are, and that could be why they keep coming back for more.

Today Anthony wrote about the global average surface temperature for 2019, saying at least in the USA it wasn't another "hottest year". That's a classic conspiratorial diversion tactic, by the way: focus on a detail and try to dispute the big picture.

Back to his troubles with temperature anomalies. Anthony complained he still can't figure them out, even after all these years of running the world's biggest climate conspiracy blog. He wrote:
In my opinion, the NOAA/NASA press release (and slideshow) is inconsistently presented. For example, they can't even agree on a common base period for comparisons. Some graphs use 1951-1980 while others compare to 1981-2010 averages to create anomaly plots. NOAA and NASA owe it to the public to present climate data with a consistent climate period for comparison, otherwise it's just sloppy science. NASA GISS has consistently resisted updating the 1951-1980 NASA GISS baseline period to the one NOAA and other datasets use, which is 1981-2010. GISS stubbornly refuses to change even though they have been repeatedly excoriated for keeping it.
As you know, Anthony's opinion isn't worth (I'm trying to think of a polite alternative to this saying), and his hoity-toity attitude makes him look like a fool. Different agencies use different baselines for different reasons. Once a baseline is chosen it's best to keep it: that makes it easier for researchers and others to compare data over time. Otherwise you'd have to keep checking the baseline used each time you went to use the data. It's really not that hard to understand and work with anomalies.



Another big fat lie about temperature
Then comes the lie. Everyone familiar with global surface temperature changes knows that the coldest periods last century were in the first half of the century, yet Anthony wrote something wildly wrong and I don't know that anyone picked him up on it (dimwitted deniers that they are):
That 1951-1980 period just so happens to be the coolest period in the 20th century
Nope. Wrong! That 1951-1980 period was nothing like the coolest. In fact it was around 0.03 C warmer than the average of the 20th century.



Fig 1 | Global surface temperature for thirty year periods from 1880 to 2019. Data source: GISS NASA
Remember this is from someone who ridiculously pretends to know something about global surface temperatures. He doesn't know the first thing, does he.

Anthony goes on to explain why he doesn't understand anomalies. Or tries to. I think it might be something to do with different colours being used on different maps (NASA and NOAA). He shows one that goes from aqua to reddish brown and one that goes from a deeper blue through to brighter red. The first one has an anomaly scale from -4 K to plus 4 K, the next one from -5C to plus 5C, so maybe Anthony's not just confused by colours, he's confused by Celsius and Kelvin.

Anthony finally realises there's not much difference at all, even if you don't allow for different baselines. Since the 1951-1980 average (used by NASA) is about 0.03 °C warmer than the 20th century mean (used by NOAA), the same data expressed against the 20th century mean would come out around 0.03 °C higher, not lower. As it happens, NOAA's figure is about 0.03 °C below NASA's, a gap that comes down to differences between the two analyses rather than the baselines. He helpfully (or reluctantly) wrote:
The difference between the two analyses is NOAA @ 0.95°C/1.71 ° F and NASA GISS at 0.98 ° C/1.8 ° F
Either way, a difference of a few hundredths of a degree is hardly worth a blog post.
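Rebasing an anomaly from one baseline to another is simple arithmetic, which is part of why a "consistent climate period" matters far less than Anthony thinks. A minimal Python sketch (the function name is my own; the 0.03 °C offset and 0.98 °C anomaly are the figures discussed above):

```python
# Hedged sketch: rebasing a temperature anomaly between baselines.
# If baseline A is d degrees warmer than baseline B, the same
# temperature sits d degrees further above B than above A.

def rebase(anomaly_vs_a, a_minus_b):
    """Express an anomaly measured against baseline A relative to baseline B.

    a_minus_b: how much warmer baseline A is than baseline B,
    in the same units as the anomaly.
    """
    return anomaly_vs_a + a_minus_b

# GISS's 1951-1980 base is ~0.03 C warmer than the 20th-century mean,
# so a 0.98 C anomaly vs 1951-1980 corresponds to about 1.01 C vs the
# 20th-century mean - for the same dataset. Any remaining NOAA/GISS
# gap is down to the datasets themselves, not the baselines.
print(round(rebase(0.98, 0.03), 2))
```

So comparing agencies is a one-line adjustment, not "sloppy science".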

Not daring to compare, So There!
There was little more Anthony could try to milk out of complaining the baselines are different, apart from saying the relentless rise in temperature is "trivial" (with some quote-mining games to show he doesn't understand the American language any more than he understands anomalies and trends). So he moved to the USA, which he said was cooler. So there!

Anthony Watts did a song and dance about the Climate Reference Network in the USA. He loves it; it's so "pure" (even though it's homogenised). He wrote, rather Trumpishly:
NOAA's U.S. Climate Reference Network (USCRN) has the best quality climate data on the planet, yet it never gets mentioned in the NOAA/NASA press releases. Commissioned in 2005, it has the most accurate, unbiased, and un-adjusted data of any climate dataset.
Well, of course it wouldn't get a mention when reporting global temperatures, or even when reporting historical temperature changes for the USA. CRN temperature analysis only begins in 2005, whereas other analysis dates back to 1895.

Another thing Anthony couldn't bring himself to do was show any chart comparing the analysis of those 114 or so weather stations with the analysis of the hundreds of weather stations in nClimDiv, which NOAA uses for historical data through to the present. That's because they are almost identical. So, let me do it for him. (Note: the 2019 data for nClimDiv hasn't been added yet.)



Fig 2 | NOAA temperature anomalies for USA with CRN, nClimDiv and USHCN, from 2005 to 2019. Data source: NOAA
Almost no difference at all! I guess that means the historical data is fine as well.
So let's take a peep at that:



Fig 3 | NOAA temperature anomalies for USA with CRN, nClimDiv and USHCN, from 1895 to 2019. Data source: NOAA
I'm not going to spend time plotting the actual trend. If you want to do that the data is on the NOAA website. It's fairly clear there were some ups and downs but overall not much change until the 1970s. The hottest year for the USA was 2012 and the coldest year was 1917.


Now for the conspiracy theory
Having avoided plotting historical temperature data for the USA, and having avoided showing there's little difference between his "pristine" data set and the larger ones, Anthony proceeded to set out his conspiracy theory and wrote:
While the U.S. isn't the world, and the dataset is shorter than the requisite 30 year period for climate data, the lack of warming in the contiguous United States since 2005 shown in the graph above suggests that the data NOAA and NASA use from the antiquated Global Historical Climate Network (GHCN) reflects warmer biases due to urbanization and adjustments to the data.
He then highlighted a poster he got someone to prepare for him back in 2015. It's now seven and a half years since he promised a paper on the topic, and it's not yet surfaced (do you like the wordplay?). Anthony promises a lot of things that he can't deliver. Remember the Open Atmospheric Society?

He's wrong and he must know it. Lots of people have told him so. Urbanisation makes no difference to the global or the USA temperature data once it's been processed. There have been studies of the US record (e.g. here) that demonstrate this, including one by Anthony Watts himself!

If you're scared by global heating, just change the scale
I'm fairly sure this comment from Anthony wasn't meant as a joke but it sure looks like one:
But here's also something interesting. All of the temperature plots used to represent climate change are highly magnified. This is so variations of one degree or less are highly visible. Unfortunately, these huge variation often scare the public since they perceive them as "massive" temperature increases.

Fortunately, the NOAA online plotter allows adjustment of the vertical axis, and when the vertical axis of the climate data is adjusted to fit the scale of human temperature experience, they look less alarming.
Right. I'll bet that's what deep sea explorers do when contemplating going deep into the Mariana Trench. No biggie!

He added:
"Climate change" certainly looks a lot less scary when the temperature change is presented in the scale of human experience.
This is from the person who was close to the Camp Fire in California. And surely he's read about what's been happening in Australia - the deniers are all over those (they aren't real, they've happened before, it's arson, it's rained up north etc etc)

Seriously, these climate deniers will go to their grave swearing global warming is no big deal. They hate being scared so much they'd rather deny wildfires, floods, rising seas, melting ice, food price rises, climate migration and more rather than admit they are wrong.

CRN vs nClimDiv 
By the way, you can compare the number of stations in CRN vs nClimDiv by moving the arrow across the image below. I haven't lined them up perfectly, it's just to give you the idea.


Figure 4 | Maps showing weather stations used by NOAA in nClimDiv and CRN. Data sources: NOAA - CRN and NOAA - nClimDiv

From the WUWT comments
I'm still getting back into the swing of blogging and don't have the energy or inclination to go through the comments. In any case, I'd best keep an eye on the local fire situation. A favourite spot of mine seems to be in the path of the fire. You may peruse them yourself. What I did see showed WUWT is getting worse the more the world warms.


From HotWhopper archives: Some of the links to images in some of the older articles are broken. They still have gems in the text :D





Summary: 2019 was the second hottest year on record. December 2019 was the second hottest December on record. The last decade was the hottest decade on record.

According to GISS NASA, the average global surface temperature anomaly for 2019 was 0.98 °C, which is just 0.04 °C cooler than the previous hottest - 2016.

Below is a chart of the average of 12 months to December each year. 2019 was 0.06 °C hotter than the 12 months to December 2017, which is the third hottest year.

Figure 1 | Annual global mean surface temperature anomaly - 12 months to December each year. The base period is 1951-1980. Data source: GISS NASA


Next is a chart of the month of December only. This December was 1.11 °C above the 1951-1980 average and was the second hottest December on record. It was 0.05 °C hotter in December 2015 which, unlike this past year, was in the middle of a strong El Nino:
Figure 2 | Global mean surface temperature anomaly for the month of December only. The base period is 1951-1980. Data source: GISS NASA



The decades are getting hotter
As you would know, each decade is getting hotter and hotter. Each of the five decades since 1971-1980 has been hotter than the previous one. The chart below shows what's happening. It includes a line showing the mean for the 20th century. Note the last column only includes nine years - to 2019. Let's see what next year brings.



Figure 3 | Global mean surface temperature anomaly by decade. The base period is 1951-1980. Data source: GISS NASA
Where was it hot?
Last year was hot almost everywhere. The only year that was hotter was 2016, but that year there was more contrast. In 2016 there were hotter parts and colder regions compared to last year. The hot Arctic helped drive the average temperature in 2016. 

Move the arrow at the left to the right to compare this year with 2016. Check out Australia, too, where 2019 was the hottest year on record.


Figure 4 | Maps showing mean surface temperature anomalies for 2019 and 2016, from the 1951-1980 mean. Data source: GISS NASA


July 2019 was the hottest month on record
The chart below confirms July this year was the hottest month on record, hotter than August 2016. The chart indicates the changes in monthly temperatures and shows the hottest months of the year are in July and August. (Sometimes it's July that's the hottest month and sometimes it's August.) I added a dotted line to the chart to make it easier to see. As I said previously, that's especially notable because there was no El Nino this year, unlike back in 2016.



Figure 5 | Seasonal cycle of global surface temperature anomaly. The chart shows the temperature anomaly with respect to the 1980-2015 mean (°C). It is derived from the MERRA2 reanalysis over 1980-2015 and shows how much warmer each month of the GISTEMP data is than the annual global mean. Source: GISS NASA
Year to date chart
Below is the final year to date progressive chart for 2019. What it shows is the average temperature for the year at each point on each separate line on the chart. The topmost line is 2016. The fat black line with dots is 2019.

For each year at January, the point is just the anomaly for January. At February, the point is the average anomaly for January and February. At July, it's the average of January to July inclusive - all the way to December, which is the average for the whole year.
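The progressive series described above is just a cumulative mean. A minimal Python sketch (the monthly figures are invented for illustration, not the GISS values):

```python
def year_to_date_means(monthly_anomalies):
    """Cumulative mean: element i is the average of months 0..i."""
    means, total = [], 0.0
    for count, anomaly in enumerate(monthly_anomalies, start=1):
        total += anomaly
        means.append(total / count)
    return means

# Invented Jan-Apr anomalies, just to show the shape of the series.
# The first point is January alone; the last is the Jan-Apr average.
print(year_to_date_means([0.9, 1.1, 1.0, 1.2]))
```

By December the final point is simply the annual mean, which is why each line on the chart ends at that year's average.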

Back in July I wrote:
It's not out of the question that 2019 will end up the second warmest year on record, ahead of 2017. (The temperature anomaly for the rest of the year would have to average 0.87 C for 2019 to equal 2017.)
The temperature anomaly for the rest of the year, from August to December, averaged 1 °C, so it easily beat 2017 for second place.

The 2019 line shows that the average for the year is 0.98 °C (the last big dot on the 2019 line). This is just 0.04 °C lower than 2016's (1.02 °C). Unlike 2015/16, there was no El Nino this past year, at least not by Australia's BOM criteria.

Figure 6 | Progressive year to date global mean surface temperature anomaly. The base period is 1951-1980. Data source: GISS NASA





08-Jan-20
Know what? If I see another know-nothing denier try to claim "it's not climate change it's arson" or "backburning" or "not enough prescribed burns" or "it's not happening", I'll scream.

I was going to deal quickly with "it's arson", then move onto prescribed or controlled burns. However, I'll now devote this article just to the arson furphy, because the false meme is appearing all over the place, even being insinuated in mainstream media. Some people are suggesting it's an organised disinformation campaign. I don't know about that, but it is being fanned by the usual crowd of deniers, including many from the USA and other places outside Australia.

Let me be clear. Arson is not the reason for the catastrophic fires this summer. There has always been arson but never a fire season as bad as this one. These major fires are there because the bush is so dry and because it's been so hot. Fires need ample fuel, wind and an ignition. The fuel is ample, because even though there's not been much growth in vegetation because of the drought, what's there is dry and easily ignited. There's been enough windy days to fan the flames and spread the fires further. And there's been ignition, obviously. Mostly (in the case of the major fires), the ignition has been lightning.


Most major bushfires in Australia are started by lightning
Most major bushfires (forest fires) in Australia are ignited by lightning. They typically start in bushland that is difficult to access.

At a recent community meeting we were told the fires around here in north eastern Victoria were ignited by lightning. I saw some of the lightning.

The fire in Mallacoota was also reportedly started by lightning.

The huge Gospers Mountain fire in Wollemi National Park in NSW was started by lightning.

On 8 January 2003 there were 87 fires ignited by lightning, eight of which persisted and led to the huge Alpine fires that year.

Lightning fires are a risk in summer in south eastern Australia. We can get lightning at any time of the year. In winter, which is our wet season, fires don't normally take hold. In winter, lightning is usually accompanied by rain and, in any case, the temperature is low and the bush is not as dry so fires don't spread far.

Summer is our dry season. Lightning storms can pass through with very little or no rain. Not every lightning strike will cause a fire. It depends on what's been struck. However, it only takes one tree catching fire and it will spread quickly if conditions are right.

This summer has seen the hottest weather ever and much of south eastern Australia is very dry, with large parts having been in drought for some years. Conditions for wildfires have been almost perfect. This has resulted in a huge increase in the number of fires in NSW. There has also been an enormous area burnt in eastern Victoria, though the charts (see below) from GFED to 3 January 2020 don't reflect this. (They show the number of fires, not the area burnt.)


The area burnt in Victoria in 2003 (>1.3 million ha or 3.2m acres) can be seen here, and the area burnt in 2006 (~1.2 million ha or 3m acres) is shown here. So far this year, it's estimated the fires in Victoria have burnt more than 1.2 million hectares.  The fires in Australia this season are estimated to have burnt at least 8.4 million hectares including 4.9 million ha in NSW.

Bushfires also create their own weather, and can even make lightning, as described in an article by the Bureau of Meteorology (linked in the references below).



Let me stress again, most of these large fires this summer were ignited by lightning. They have been exacerbated by climate change. Regardless, no matter what the source of ignition, whether natural, accidental, careless, reckless or deliberate, if conditions (dryness, heat and wind) had not been what they are, there would not have been anything like the catastrophe the world has seen unfolding in Australia.

Another thing worth considering is that sources of ignition are unlikely to have changed much over time. On the other hand, fire response has improved out of sight over the years, with huge advances in communications, fire-fighting technology and equipment, training and response management. If not for climate change making bad conditions worse, these fires would not have been anything like they are.

Why the deflection from deniers?
I don't know why some people are promoting the "it's arson" meme. Is it that they can cope with the idea that people are capable of burning Australia by lighting a match but can't cope with the idea that people are capable of changing the climate by burning fossil fuels? Who knows.

I'm not saying there are not people who deliberately light fires. There are. There are also people who accidentally cause fires. When caught, all such people are subject to heavy penalties. In my home state, a person who intentionally or recklessly causes a bushfire can be locked away for up to 15 years (in NSW it's up to 21 years). A person caught lighting a fire that causes death can be sent to jail for up to 25 years.

Human activity can cause fires. Every year there could be hundreds of fires ignited by people, whether deliberately or inadvertently. Arsonists, people who deliberately light fires for whatever reason, exist and have probably always existed. They might want to collect insurance on a failed business or because they've overstretched their mortgage. They might be after revenge against someone, so they set their home or car alight. A few people just like fire, and set them in urban, peri-urban or, occasionally, rural areas. There have been homes burnt after arsonists lit fires.

There are fires caused accidentally by human activity. There were scores of lives lost in the Kilmore East fires after powerlines sparked a fire in strong winds on a catastrophic fire danger day. A recent fire that burnt Binna Burra Lodge in a precious area in Lamington National Park in Queensland has been attributed to a cigarette butt dropped by teenagers. The powerline fire and the cigarette fire wouldn't have taken hold or caused so much damage if conditions had not been so extreme.


Most fires lit deliberately or accidentally by people are quickly contained
Few people would hike into virtually inaccessible areas to deliberately start a major bushfire. They'd have to be suicidal as well as pyromaniacal. Most people who deliberately light fires do so near areas of population, and they are generally grass fires that are extinguished fairly quickly. Not always, but mostly. A 2006 report states (my emphasis):
Human action - most deliberate bushfires occur within or near the most densely populated regions of Australia. Consequently, the majority of deliberate fires occur along the coastal fringe, where climatic conditions are generally milder, and the period of adverse bushfire weather is shorter. Although they have the potential to burn out of control and cause immense damage, overall, the majority of deliberately lit fires are small in area (less than one to two hectares)
Analysis of a number of different data sources indicates that the highest rates of recorded deliberately lit fires during adverse bushfire weather occur in areas, regions or jurisdictions with highest rates of recorded deliberate fires generally.
A key question for bushfire arson prevention is whether there is a greater risk of deliberate fire lighting during periods of extreme weather conditions. This is a difficult question to answer with any degree of accuracy, as many fires are suspicious but not confirmed as arson incidents, and the intention of those who light fires is rarely known. A range of data shows that as the fire danger rating increases, recorded deliberate fires account for a smaller proportion of all bushfires. The increased risk of accidental and natural fires under more adverse conditions and the absence of definitive data on causal factors means that there is a lack of conclusive evidence to indicate a systematic increase in deliberate firesetting during these peak periods of risk.

Use and abuse of statistics
Although by far the majority of "it's arson" claims are unsubstantiated (and nonsense), I've seen people quote and misquote statistics from various sources, including some people who should know better. Let's have a look.

Below is a chart on arson offenses recorded, from the Crime Statistics Agency (Victorian Government). The bushfire arson is the bottom line (light grey). It peaks in the summer season, but that would probably be in part because fires lit in summer attract attention whereas fires lit at other times of the year go out quickly and/or don't spread. (As always, click to enlarge.)


In the year ending 30 September 2016, "there were 46 unique offenders apprehended by police for bushfire offences. Of these offenders, more than half (n=26) were known to police for prior offending before committing their bushfire offence, and 16 had previously committed an arson or criminal damage offence before causing a bushfire. Bushfire offenders were predominantly male, making up 91.3 per cent (n=42) of all offenders, and 56.5 per cent (n=26) were aged between 10 and 19, with the mean age of 23.6."

This number represents around 0.00074% of the population of Victoria at the time (6.2 million). There would also have been suspicious incidents where no-one was charged with an offence.

In NSW it's been reported that 24 people have been charged with deliberately lighting bushfires this fire season. In addition, action has been taken against 53 people "for failing to comply with a total fire ban and against 47 people for discarding a lighted cigarette or match on land."

There are suggestions a fire at Jindabyne last Friday may have been deliberately lit.

The report said that none of the fires currently on the south coast of NSW were related to those charges.

Although it was 24 people who've been charged with lighting fires, that SMH report said a total of 183 people had legal action taken against them, which might be the source of the mysterious "200 arsonists" claim that keeps popping up. The categories don't quite add up (24+53+47=124), so the remaining 59 actions presumably relate to other fire-related offences.

Another point worth making is to reiterate that most fires lit by people occur near populated areas, unlike the major fires currently in Victoria and NSW. The word "bushfire" is often applied to mean any fire, including grass fires. I prefer to reserve the term bushfire to a fire in the bush (a forest). Grass fires are quite different. They travel much faster than bushfires but can also be contained more easily. (In remote areas of the outback, grassfires and desert scrub fires will usually be left to burn themselves out.)


More to come
There is a lot more that could be and is being written about this year's fire season. There will be inquiries and maybe a Royal Commission or two. I'll possibly write more myself. (I'm thinking about an article to dispel another lie that's being pushed. Some deniers are blaming hazard reduction so as to avoid confronting how badly we're changing the climate. Maybe I'll get to that later.)


Note: Where I live there's a Watch and Act in effect. That's one step up from Advice and one step down from Evacuate Now. Our town, which should be full of tourists, feels strangely quiet. Visitors have left and so have a lot of residents. It's not just the fire risk, it's also the smoke which has prompted people to leave. We can avoid the fire risk by driving to a safe town 90 km or more distant. It's not as easy to avoid the smoke because those same towns are also affected by smoke.

References and further reading
Arson in NSW - an article from January 1990 from the NSW Bureau of Crime Statistics and Research (Arson is nothing new!)

Spotlight: Arson Offences - Crime Statistics Agency, Victoria, 2016

Bushfire weather - Bureau of Meteorology, Australia

When bushfires make their own weather - Bureau of Meteorology, Australia, January 2018

Bots and trolls spread false arson claims in Australian fires 'disinformation campaign' - article by Christopher Knaus at The Guardian, 7 January 2020

Ducat, Lauren, Troy McEwan, and James RP Ogloff. "Comparing the characteristics of firesetting and non-firesetting offenders: are firesetters a special case?." The Journal of Forensic Psychiatry & Psychology 24, no. 5 (2013): 549-569. https://doi.org/10.1080/14789949.2013.821514 (pdf here)

2019-20 Australian bushfire season - MODIS data on the recent fires with historical comparisons, from the Global Fire Emissions Database. (h/t Graham Readfearn)

Record-breaking 4.9m hectares of land burned in NSW this bushfire season - article by Naaman Zhou at The Guardian, 7 January 2020

Fires in Victoria destroy estimated 300 homes, former police chief to lead Bushfire Recovery Victoria - ABC News, 7 January 2020

Bushfires lit deliberately during adverse bushfire weather - Bushfire Arson Bulletin, Australian Institute of Criminology, December 2006

Patterns in bushfire arson - Bushfire Arson Bulletin, Australian Institute of Criminology, November 2009

Past bushfires - A chronology of major bushfires in Victoria from 2013 back to 1851 - Forest Fire Management, Victoria

Bushfire - Alpine Region and north-eastern Victoria - Australian Institute for Disaster Resilience, January 2003

Fire Scars in Australia's Simpson Desert - NASA Earth Observatory, November 2002

07-Jan-20
Everybody knows [ 07-Jan-20 4:36pm ]
Lenin Lives!
Philip Cunliffe
Zero Books, 2016

It can be disconcerting to read a book that upends your way of looking at the world. It's even more disconcerting when that book claims your own work as part of its inspiration. About which, more later.

The book's title and Soviet-kitsch cover are deeply ironic: baiting for some, and bait for others. In the alternate-history world Cunliffe imagines, Lenin is almost forgotten, because he succeeded. (It's tempting to add 'beyond his wildest dreams' but success beyond Lenin's wildest dreams would have meant spreading the revolution to the canal-builders of Mars.)

For what Lenin and the Bolsheviks set out to do in 1917 was to detonate an international, indeed global, revolution. This was an immediate perspective, where revolutionary romanticism meant staking all on the world revolution breaking out next week, while sober realism meant bearing in mind that it might be delayed for a few more months. In fact even the realists were too optimistic: it was delayed for a whole year. In November 1918 the red flags went up over the naval base at Kiel, and flew over all Germany within days. And then...

Well, everybody knows.

But what if the grip of German Social Democratic reformism had been that little bit shakier, and the revolutionary Left that little bit better organised and luckier? Cunliffe speculates on what sort of world might now exist, and how it might have come about, if the revolution that began in Russia had not only spread - as it did - but won, as it didn't.

In this missed turn of history, a decade or so of wars and civil wars see the capitalist core countries having gone socialist. The major independent underdeveloped countries have gone democratic, and the former colonial holdings have mostly opted to remain in loose voluntary federations that have replaced the empires. It's not all plain sailing but the resulting democratic workers' states of Europe and America are much less repressive than Bolshevik, let alone Stalinist, Russia was in our world. Planning emerges from increasing coordination (as indeed it did under the New Economic Policy) rather than central imposition. Industrialisation proceeds at a brisk but measured, rather than a frantic, pace. Art, science, culture and personal freedom flourish. This is a world with no fascism or Stalinism, no Depression and no Second World War. Whether or not the reader finds it feasible or desirable, it's attractively and vigorously portrayed.

Cunliffe's alternate history has no decisive moment (no Jonbar Point, to use the science-fictional term) that I can see. Instead, the international revolutionary working-class movement (which, as Cunliffe usefully and repeatedly reminds us, actually did exist at the time) is imagined as having been just a little bit stronger in arm and clearer in mind than it was in our world. It's by no means an unrealistic speculation. Even in our world, it was a close-run thing. So close, in fact, that stamping out every last smouldering ember of world revolution took tens of years and tens of millions of lives. But its suppression is now, at last, complete.

E. H. Carr, in an article or interview for New Left Review, remarked that all of Marx's predictions had come true, except for the proletarian revolution. Cunliffe's view is gloomier: he thinks that they all came true, including the revolution. It really happened, in 1917-1923, and the revolutionaries bungled it.

When most readers of the Communist Manifesto encounter the passage about how throughout history classes have waged 'an uninterrupted, now hidden, now open fight, a fight that each time ended, either in the revolutionary reconstitution of society at large, or in the common ruin of the contending classes' the example that springs to mind is the Fall of the Roman Empire to the barbarians. What Marx and Engels were really alluding to, Cunliffe argues, was subtly different: the Fall of the Republic, rather than the Fall of the Empire. It was the class struggles of patrician and plebeian in the Roman Republic that ended in mutual ruination, and stymied any chance of further progress centuries before the Empire fell.

If readers of the Manifesto are socialists, the common ruin they envisage for bourgeoisie and proletariat is a nuclear war or environmental catastrophe. No such luck, Cunliffe tells us: the common ruin has already happened. The class struggle between bourgeoisie and proletariat is over. The good guys lost. Get over it.

But the non-socialist reader can take no comfort. The suppression of communism, Cunliffe claims, undermined capitalism, sapping its economic dynamism and political stability. With no competing model - however unattractive in many respects - to keep it on its toes, capitalism becomes a couch potato. With no union militancy and shop-floor organisation to contend with, capitalists have less incentive to innovate and rationalise. With no need to integrate the working class in the affairs of state, mass political participation and engagement have been texted their redundancy notices.

The result, however, is that the elites and the rest of the population are more mutually alienated than they ever were in the class struggle. To the political class and state authorities, the ideas and attitudes of the underlying multitude are as a dark continent, viewed with alarm and suspicion, alternately patronised and deplored. Unmoored from the clash of material interests, politics drifts into a Sargasso Sea of slowly, pointlessly, endlessly swirling debris. Debate degenerates into a grandstanding narcissism of small differences around an elite consensus dedicated solely to keeping the show on the road. Political apathy and populist eruptions are its morbid symptoms. The ruin was mutual, and the ruins are where we must henceforth live.

This exhausted order could in principle totter along indefinitely, were it not for the instabilities, internal and external, that result. The political and moral authority of the state quietly unravels, even as its hard power and reach expand. As Britain's riots of 2011 starkly exposed, social order itself can dissipate overnight. And the quest for moral authority at home is transmitted all too easily into rash adventuring abroad, in the name of democratic and liberal values. To explain, say, the Iraq war as motivated by strategic or economic concerns, a 'war for oil', as leftists are wont to do, is misconceived. There's no underlying interest to expose: the war's liberal-democratic rationalisation really is what it's all about. As Tony Blair said: 'It's worse than you think. I believe in it.'

Readers of my own novels, particularly the Fall Revolution books and some of the more recent ones such as Intrusion and Descent, may find some of the themes outlined above familiar. In the early 1990s, when I started writing my first novel, I was convinced that the Left had suffered a whopping, world-historic defeat with the fall of the Soviet bloc, regardless of how critical or even hostile they had been to it. However, I did expect that this defeat would in time be overcome.

Whatever else it does, Lenin Lives! answers a question that has baffled better minds than mine: how on earth did a splinter of the far left mutate into a cadre of contrarian libertarian Brexiters? Two lines of explanation are often explored. The first is that they remain revolutionary communists under deep cover, engaged in some nefarious long-term scheme. The second is that they have been themselves subverted, suborned by the corporations from which they receive funding. I could go into the various reasons why both are wide of the mark, but I've already gone on long enough. By now you can figure it out for yourself:

It's worse than you think. They really believe in it.
05-Jan-20
Two-Bit History [ 5-Jan-20 12:00am ]

I express my network in a FOAF file, and that is the start of the revolution. —Tim Berners-Lee (2007)

The FOAF standard, or Friend of a Friend standard, is a now largely defunct/ignored/superseded[1] web standard dating from the early 2000s that hints at what social networking might have looked like had Facebook not conquered the world. Before we talk about FOAF, though, I want to talk about the New York City Subway.

The New York City Subway is controlled by a single entity, the Metropolitan Transportation Authority, better known as the MTA. The MTA has a monopoly on subway travel in New York City. There is no legal way to travel in New York City by subway without purchasing a ticket from the MTA. The MTA has no competitors, at least not in the "subway space."

This wasn't always true. Surprisingly, the subway system was once run by two corporations that competed with each other. The Interborough Rapid Transit Company (IRT) operated lines that ran mostly through Manhattan, while the Brooklyn-Manhattan Transit Corporation (BMT) operated lines in Brooklyn, some of which also extended into Manhattan. In 1932, the City opened its own service, the Independent Subway System, to compete with the IRT and BMT, and so for a while there were three different organizations running subway lines in New York City.

One imagines that this was not an effective way to run a subway. It was not. Constructing interchanges between the various systems was challenging because the IRT and BMT used trains of different widths. Interchange stations also had to have at least two different fare-collection areas since passengers switching trains would have to pay multiple operators. The City eventually took over the IRT and BMT in 1940, bringing the whole system together under one operator, but some of the inefficiencies that the original division entailed are still problems today: Trains designed to run along lines inherited from the BMT (e.g. the A, C, or E) cannot run along lines inherited from the IRT (e.g. the 1, 2, or 3) because the IRT tunnels are too narrow. As a result, the MTA has to maintain two different fleets of mutually incompatible subway cars, presumably at significant additional expense relative to other subway systems in the world that only have to deal with a single tunnel width.

This legacy of the competition between the IRT and BMT suggests that subway systems naturally tend toward monopoly. It just makes more sense for there to be a single operator than for there to be competing operators. Average passengers are amply compensated for the loss of choice by never having to worry about whether they brought their IRT MetroCard today but forgot their BMT MetroCard at home.

Okay, so what does the Subway have to do with social networking? Well, I have wondered for a while now whether Facebook has, like the MTA, a natural monopoly. Facebook does seem to have a monopoly, whether natural or unnatural—not over social media per se (I spend much more time on Twitter), but over my internet social connections with real people I know. It has a monopoly over, as they call it, my digitized "social graph"; I would quit Facebook tomorrow if I didn't worry that by doing so I might lose many of those connections. I get angry about this power that Facebook has over me. I get angry in a way that I do not get angry about the MTA, even though the Subway is, metaphorically and literally, a sprawling trash fire. And I suppose I get angry because at root I believe that Facebook's monopoly, unlike the MTA's, is not a natural one.

What this must mean is that I think Facebook owns all of our social data now because they happened to get there first and then dig a big moat around themselves, not because a world with competing Facebook-like platforms is inefficient or impossible. Is that true, though? There are some good reasons to think it isn't: Did Facebook simply get there first, or did they instead just do social networking better than everyone else? Isn't the fact that there is only one Facebook actually convenient if you are trying to figure out how to contact an old friend? In a world of competing Facebooks, what would it mean if you and your boyfriend are now Facebook official, but he still hasn't gotten around to updating his relationship status on VisageBook, which still says he is in a relationship with his college ex? Which site will people trust? Also, if there were multiple sites, wouldn't everyone spend a lot more time filling out web forms?

In the last few years, as the disadvantages of centralized social networks have dramatically made themselves apparent, many people have attempted to create decentralized alternatives. These alternatives are based on open standards that could potentially support an ecosystem of inter-operating social networks (see e.g. the Fediverse). But none of these alternatives has yet supplanted a dominant social network. One obvious explanation for why this hasn't happened is the power of network effects: With everyone already on Facebook, any one person thinking of leaving faces a high cost for doing so. Some might say this proves that social networks are natural monopolies and stop there; I would say that Facebook, Twitter, et al. chose to be walled gardens, and given that people have envisioned and even built social networks that inter-operate, the network effects that closed platforms enjoy tell us little about the inherent nature of social networks.

So the real question, in my mind, is: Do platforms like Facebook continue to dominate merely because of their network effects, or is having a single dominant social network more efficient in the same way that having a single operator for a subway system is more efficient?

Which finally brings me back to FOAF. Much of the world seems to have forgotten about the FOAF standard, but FOAF was an attempt to build a decentralized and open social network before anyone had even heard of Facebook. If any decentralized social network ever had a chance of occupying the redoubt that Facebook now occupies before Facebook got there, it was FOAF. Given that a large fraction of humanity now has a Facebook account, and given that relatively few people know about FOAF, should we conclude that social networking, like subway travel, really does lend itself to centralization and natural monopoly? Or does the FOAF project demonstrate that decentralized social networking was a feasible alternative that never became popular for other reasons?

The Future from the Early Aughts

The FOAF project, begun in 2000, set out to create a universal standard for describing people and the relationships between them. That might strike you as a wildly ambitious goal today, but aspirations like that were par for the course in the late 1990s and early 2000s. The web (as people still called it then) had just trounced closed systems like America Online and Prodigy. It could only have been natural to assume that further innovation in computing would involve the open, standards-based approach embodied by the web.

Many people believed that the next big thing was for the web to evolve into something called the Semantic Web. I have written about what exactly the Semantic Web was supposed to be and how it was supposed to work before, so I won't go into detail here. But I will sketch the basic vision motivating the people who worked on Semantic Web technologies, because the FOAF standard was an application of that vision to social networking.

There is an essay called "How Google beat Amazon and Ebay to the Semantic Web" that captures the lofty dream of the Semantic Web well. It was written by Paul Ford in 2002. The essay imagines a future (as imminent as 2009) in which Google, by embracing the Semantic Web, has replaced Amazon and eBay as the dominant e-commerce platform. In this future, you can search for something you want to purchase—perhaps a second-hand Martin guitar—by entering buy:martin guitar into Google. Google then shows you all the people near your zipcode selling Martin guitars. Google knows about these people and their guitars because Google can read RDF, a markup language and core Semantic Web technology focused on expressing relationships. Regular people can embed RDF on their web pages to advertise (among many other things) the items they have to sell. Ford predicts that as the number of people searching for and advertising products this way grows, Amazon and eBay will lose their near-monopolies over, respectively, first-hand and second-hand e-commerce. Nobody will want to search a single centralized database for something to buy when they could instead search the whole web. Even Google, Ford writes, will eventually lose its advantage, because in theory anyone could crawl the web reading RDF and offer a search feature similar to Google's. At the very least, if Google wanted to make money from its Semantic Web marketplace by charging a percentage of each transaction, that percentage would probably be forced down over time by competitors offering a more attractive deal.

Ford's imagined future was an application of RDF, or the Resource Description Framework, to e-commerce, but the exciting thing about RDF was that hypothetically it could be used for anything. The RDF standard, along with a constellation of related standards, once widely adopted, was supposed to blow open database-backed software services on the internet the same way HTML had blown open document publishing on the internet.

One arena that RDF and other Semantic Web technologies seemed poised to take over immediately was social networking. The FOAF project, known originally as "RDF Web Ring" before being renamed, was the offshoot of the Semantic Web effort that sought to accomplish this. FOAF was so promising in its infancy that some people thought it would inevitably make all other social networking sites obsolete. A 2004 Guardian article about the project introduced FOAF this way:

In the beginning, way back in 1996, it was SixDegrees. Last year, it was Friendster. Last week, it was Orkut. Next week, it could be Flickr. All these websites, and dozens more, are designed to build networks of friends, and they are currently at the forefront of the trendiest internet development: social networking. But unless they can start to offer more substantial benefits, it is hard to see them all surviving, once the Friend Of A Friend (FOAF) standard becomes a normal part of life on the net.[2]

The article goes on to complain that the biggest problem with social networking is that there are too many social networking sites. Something is needed that can connect all of the different networks together. FOAF is the solution, and it will revolutionize social networking as a result.

FOAF, according to the article, would tie the different networks together by doing three key things:

  • It would establish a machine-readable format for social data that could be read by any social networking site, saving users from having to enter this information over and over again
  • It would allow "personal information management programs," i.e. your "Contacts" application, to generate a file in this machine-readable format that you could feed to social networking sites
  • It would further allow this machine-readable format to be hosted on personal homepages and read remotely by social networking sites, meaning that you would be able to keep your various profiles up-to-date by just pushing changes to your own homepage

It is hard to believe today, but the problem in 2004, at least for savvy webizens and technology columnists aware of all the latest sites, was not the lack of alternative social networks but instead the proliferation of them. Given that problem—so alien to us now—one can see why it made sense to pursue a single standard that promised to make the proliferation of networks less of a burden.

The FOAF Spec

According to the description currently given on the FOAF project's website, FOAF is "a computer language defining a dictionary of people-related terms that can be used in structured data." Back in 2000, in a document they wrote to explain the project's goals, Dan Brickley and Libby Miller, FOAF's creators, offered a different description that suggests more about the technology's ultimate purpose—they introduced FOAF as a tool that would allow computers to read the personal information you put on your homepage the same way that other humans do.[3] FOAF would "help the web do the sorts of things that are currently the proprietary offering of centralised services."[4] By defining a standard vocabulary for people and the relationships between them, FOAF would allow you to ask the web questions such as, "Find me today's web recommendations made by people who work for Medical organizations," or "Find me recent publications by people I've co-authored documents with."

Since FOAF is a standardized vocabulary, the most important output of the FOAF project was the FOAF specification. The FOAF specification defines a small collection of RDF classes and RDF properties. (I'm not going to explain RDF here, but again see my post about the Semantic Web if you want to know more.) The RDF classes defined by the FOAF specification represent subjects you might want to describe, such as people (the Person class) and organizations (the Organization class). The RDF properties defined by the FOAF specification represent logical statements you might make about the different subjects. A person could have, for example, a first name (the givenName property), a last name (the familyName property), perhaps even a personality type (the myersBriggs property), and be near another person or location (the based_near property). The idea was that these classes and properties would be sufficient to represent the kinds of things people say about themselves and their friends on their personal homepage.

The FOAF specification gives the following as an example of a well-formed FOAF document. This example uses XML, though an equivalent document could be written using JSON or a number of other formats:

<foaf:Person rdf:about="#danbri" xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:name>Dan Brickley</foaf:name>
  <foaf:homepage rdf:resource="http://danbri.org/" />
  <foaf:openid rdf:resource="http://danbri.org/" />
  <foaf:img rdf:resource="/images/me.jpg" />
</foaf:Person>

This FOAF document describes a person named "Dan Brickley" (one of the specification's authors) that has a homepage at http://danbri.org, something called an "open ID," and a picture available at /images/me.jpg, presumably relative to the base address of Brickley's homepage. The FOAF-specific terms are prefixed by foaf:, indicating that they are part of the FOAF namespace, while the more general RDF terms are prefixed by rdf:.
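To make that concrete, here is a minimal sketch of pulling those fields out of the document with Python's standard library. A real FOAF consumer would use a proper RDF parser rather than treating this as plain XML, but for a fragment this small, stdlib XML tools are enough; the xmlns:rdf declaration is added here so the fragment is well-formed on its own.

```python
# Extract the name and homepage from the FOAF example above using only
# Python's standard library. (Toy approach: real FOAF tools parse RDF,
# not raw XML; the xmlns:rdf declaration is added for well-formedness.)
import xml.etree.ElementTree as ET

FOAF = "http://xmlns.com/foaf/0.1/"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

doc = """\
<foaf:Person rdf:about="#danbri"
             xmlns:foaf="http://xmlns.com/foaf/0.1/"
             xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <foaf:name>Dan Brickley</foaf:name>
  <foaf:homepage rdf:resource="http://danbri.org/" />
  <foaf:img rdf:resource="/images/me.jpg" />
</foaf:Person>"""

person = ET.fromstring(doc)
# ElementTree uses Clark notation ({namespace-uri}localname) for
# namespaced elements and attributes.
name = person.findtext(f"{{{FOAF}}}name")
homepage = person.find(f"{{{FOAF}}}homepage").get(f"{{{RDF}}}resource")
print(name, homepage)  # Dan Brickley http://danbri.org/
```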

Just to persuade you that FOAF isn't tied to XML, here is a similar FOAF example from Wikipedia, expressed using a format called JSON-LD[5]:

{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/workplaceHomepage",
      "@type": "@id"
    },
    "Person": "http://xmlns.com/foaf/0.1/Person"
  },
  "@id": "https://me.example.com",
  "@type": "Person",
  "name": "John Smith",
  "homepage": "https://www.example.com/"
}

This FOAF document describes a person named John Smith with a homepage at www.example.com.
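As a rough illustration of what the @context is doing, the toy function below expands each context-defined key into its full FOAF IRI. This is nowhere near a conformant JSON-LD processor (it skips @type expansion, among many other things), but it shows the mechanism: short keys like "name" are just aliases for globally unambiguous identifiers.

```python
# A toy illustration of JSON-LD @context term expansion: replace each
# context-defined key with its full IRI. Not a conformant processor.
import json

doc = json.loads("""
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/workplaceHomepage",
      "@type": "@id"
    },
    "Person": "http://xmlns.com/foaf/0.1/Person"
  },
  "@id": "https://me.example.com",
  "@type": "Person",
  "name": "John Smith",
  "homepage": "https://www.example.com/"
}
""")

def expand(node):
    """Map each context-defined key onto its full IRI."""
    ctx = node.get("@context", {})
    expanded = {}
    for key, value in node.items():
        if key.startswith("@"):
            continue  # skip JSON-LD keywords in this toy version
        mapping = ctx.get(key, key)
        iri = mapping["@id"] if isinstance(mapping, dict) else mapping
        expanded[iri] = value
    return expanded

print(expand(doc))
# {'http://xmlns.com/foaf/0.1/name': 'John Smith',
#  'http://xmlns.com/foaf/0.1/workplaceHomepage': 'https://www.example.com/'}
```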

Perhaps the best way to get a feel for how FOAF works is to play around with FOAF-a-matic, a web tool for generating FOAF documents. It allows you to enter information about yourself using a web form, then uses that information to create the FOAF document (in XML) that represents you. FOAF-a-matic demonstrates how FOAF could have been used to save everyone from having to enter their social information into a web form ever again—if every social networking site could read FOAF, all you'd need to do to sign up for a new site is point the site to the FOAF document that FOAF-a-matic generated for you.

Here is a slightly more complicated FOAF example, representing me, that I created using FOAF-a-matic:

<rdf:RDF
      xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
      xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
      xmlns:foaf="http://xmlns.com/foaf/0.1/"
      xmlns:admin="http://webns.net/mvcb/">
  <foaf:PersonalProfileDocument rdf:about="">
    <foaf:maker rdf:resource="#me"/>
    <foaf:primaryTopic rdf:resource="#me"/>
    <admin:generatorAgent rdf:resource="http://www.ldodds.com/foaf/foaf-a-matic"/>
    <admin:errorReportsTo rdf:resource="mailto:leigh@ldodds.com"/>
  </foaf:PersonalProfileDocument>
  <foaf:Person rdf:ID="me">
    <foaf:name>Sinclair Target</foaf:name>
    <foaf:givenname>Sinclair</foaf:givenname>
    <foaf:family_name>Target</foaf:family_name>
    <foaf:mbox rdf:resource="mailto:sinclairtarget@example.com"/>
    <foaf:homepage rdf:resource="sinclairtarget.com"/>
    <foaf:knows>
      <foaf:Person>
        <foaf:name>John Smith</foaf:name>
        <foaf:mbox rdf:resource="mailto:johnsmith@example.com"/>
        <rdfs:seeAlso rdf:resource="www.example.com/foaf.rdf"/>
      </foaf:Person>
    </foaf:knows>
  </foaf:Person>
</rdf:RDF>

This example has quite a lot of preamble setting up the various XML namespaces used by the document. There is also a section containing data about the tool that was used to generate the document, largely so that, it seems, people know whom to email with complaints. The foaf:Person element describing me tells you my name, email address, and homepage. There is also a nested foaf:knows element telling you that I am friends with John Smith.

This example illustrates another important feature of FOAF documents: They can link to each other. If you remember from the previous example, my friend John Smith has a homepage at www.example.com. In this example, where I list John Smith as a foaf:Person with whom I have a foaf:knows relationship, I also provide an rdfs:seeAlso element that points to John Smith's FOAF document hosted on his homepage. Because I have provided this link, any program reading my FOAF document could find out more about John Smith by following the link and reading his FOAF document. In the FOAF document we have for John Smith above, John did not provide any information about his friends (including me, meaning, tragically, that our friendship is unidirectional). But if he had, then the program reading my document could find out not only about me but also about John, his friends, their friends, and so on, until the program has crawled the whole social graph that John and I inhabit.
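That crawl is easy to sketch. In the toy version below, the "web" is a Python dict mapping hypothetical FOAF document URLs to already-parsed records (a stand-in for fetching and parsing real RDF over HTTP), and the foaf:knows/rdfs:seeAlso pairing is collapsed into a simple "knows" list of document URLs.

```python
# Sketch of crawling a FOAF social graph by following seeAlso-style links.
# The dict below stands in for documents fetched over HTTP; URLs are the
# hypothetical ones from the examples in the text.
docs = {
    "sinclairtarget.com/foaf.rdf": {
        "name": "Sinclair Target",
        "knows": ["www.example.com/foaf.rdf"],
    },
    "www.example.com/foaf.rdf": {
        "name": "John Smith",
        "knows": [],  # John lists no friends, so the crawl stops here
    },
}

def crawl(start):
    """Breadth-first walk over linked FOAF documents, collecting names."""
    seen, queue, people = set(), [start], []
    while queue:
        url = queue.pop(0)
        if url in seen or url not in docs:
            continue  # skip already-visited or unreachable documents
        seen.add(url)
        doc = docs[url]
        people.append(doc["name"])
        queue.extend(doc["knows"])
    return people

print(crawl("sinclairtarget.com/foaf.rdf"))  # ['Sinclair Target', 'John Smith']
```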

This functionality will seem familiar to anyone that has used Facebook, which is to say that this functionality will seem familiar to you. There is no foaf:wall property or foaf:poke property to replicate Facebook's feature set exactly. Obviously, there is also no slick blue user interface that everyone can use to visualize their FOAF social network; FOAF is just a vocabulary. But Facebook's core feature—the feature that I have argued is key to Facebook's monopoly power over, at the very least, myself—is here provided in a distributed way. FOAF allows a group of friends to represent their real-life social graph digitally by hosting FOAF documents on their own homepages. It allows them to do this without surrendering control of their data to a centralized database in the sky run by a billionaire android-man who spends much of his time apologizing before congressional committees.

FOAF on Ice

If you visit the current FOAF project homepage, you will notice that, in the top right corner, there is an image of the character Fry from the TV series Futurama, stuck inside some sort of stasis chamber. This is a still from the pilot episode of Futurama, in which Fry gets frozen in a cryogenic tank in 1999 only to awake a millennium later in 2999. Brickley, whom I messaged briefly on Twitter, told me that he put that image there as a way of communicating that the FOAF project is currently "in stasis," though he hopes that there will be a future opportunity to resuscitate the project along with its early 2000s optimism about how the web should work.

FOAF never revolutionized social networking the way that the 2004 Guardian article about it expected it would. Some social networking sites decided to support the standard: LiveJournal and MyOpera are examples.[6] FOAF even played a role in Howard Dean's presidential campaign in 2004—a group of bloggers and programmers got together to create a network of websites they called "DeanSpace" to promote the campaign, and these sites used FOAF to keep track of supporters and volunteers.[7] But today FOAF is known primarily for being one of the more widely used vocabularies of RDF, itself a niche standard on the modern web. If FOAF is part of your experience of the web today at all, then it is as an ancestor to the technology that powers Google's "knowledge panels" (the little sidebars that tell you the basics about a person or a thing when you search for something simple). Google uses vocabularies published by the schema.org project—the modern heir to the Semantic Web effort—to populate its knowledge panels.[8] The schema.org vocabulary for describing people seems to be somewhat inspired by FOAF and serves many of the same purposes.

So why didn't FOAF succeed? Why do we all use Facebook now instead? Let's ignore that FOAF is a simple standard with nowhere near as many features as Facebook—that's true today, clearly, but if FOAF had enjoyed more momentum it's possible that applications could have been built on top of it to deliver a Facebook-like experience. The interesting question is: Why didn't this nascent form of distributed social networking catch fire when Facebook was not yet around to compete with it?

There probably is no single answer to that question, but if I had to pick one, I think the biggest issue is that FOAF only makes sense on a web where everyone has a personal website. In the late 1990s and early 2000s, it might have been easy to assume the web would eventually look like this, especially since so many of the web's early adopters were, as far as I can tell, prolific bloggers or politically engaged technologists excited to have a platform. But the reality is that regular people don't want to have to learn how to host a website. FOAF allows you to control your own social information and broadcast it to social networks instead of filling out endless web forms, which sounds pretty great if you already have somewhere to host that information. But most people in practice found it easier to just fill out the web form and sign up for Facebook than to figure out how to buy a domain and host some XML.

What does this mean for my original question about whether or not Facebook's monopoly is a natural one? I think I have to concede that the FOAF example is evidence that social networking does naturally lend itself to monopoly.

That people did not want to host their own data isn't especially meaningful itself—modern distributed social networks like Mastodon have solved that problem by letting regular users host their profiles on nodes set up by more savvy users. It is a sign, however, of just how much people hate complexity. This is bad news for decentralized social networks, because they are inherently more complex under the hood than centralized networks in a way that is often impossible to hide from users.

Consider FOAF: If I were to write an application that read FOAF data from personal websites, what would I do if Sally's FOAF document mentions a John Smith with a homepage at example.com, and Sue's FOAF document mentions a John Smith with a homepage at example.net? Are we talking about a single John Smith with two websites or two entirely different John Smiths? What if both FOAF documents list John Smith's email as johnsmith@gmail.com? This issue of identity was an acute one for FOAF. In a 2003 email, Brickley wrote that because there does not exist and probably should not exist a "planet-wide system for identifying people," the approach taken by FOAF is "pluralistic."[9] Some properties of FOAF people, such as email addresses and homepage addresses, are special in that their values are globally unique. So these different properties can be used to merge (or, as Libby Miller called it, "smoosh") FOAF documents about people together. But none of these special properties is privileged above the others, so it's not obvious how to handle our John Smith case. Do we trust the homepages and conclude we have two different people? Or do we trust the email addresses and conclude we have a single person? Could I really write an application capable of resolving this conflict without involving (and inconveniencing) the user?
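The "smooshing" step can be made concrete. The two records below reproduce the John Smith conflict from the text: same email, different homepages. Grouping on mbox treats them as one person; grouping on homepage treats them as two, and nothing in the vocabulary tells you which grouping is right.

```python
# Sketch of "smooshing" FOAF records on a supposedly globally unique
# property. The records reproduce the John Smith conflict from the text.
records = [
    {"mbox": "mailto:johnsmith@gmail.com", "homepage": "http://example.com"},
    {"mbox": "mailto:johnsmith@gmail.com", "homepage": "http://example.net"},
]

def smoosh(records, key):
    """Group records that share a value for the given identifying property."""
    merged = {}
    for rec in records:
        merged.setdefault(rec[key], []).append(rec)
    return merged

by_mbox = smoosh(records, "mbox")          # one person with two homepages...
by_homepage = smoosh(records, "homepage")  # ...or two distinct people?
print(len(by_mbox), len(by_homepage))  # 1 2
```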

Facebook, with its single database and lack of political qualms, could create a "planet-wide system for identifying people" and so just gave every person a unique Facebook ID. Problem solved.

Complexity alone might not doom distributed social networks if people cared about being able to own and control their data. But FOAF's failure to take off demonstrates that people have never valued control very highly. As one blogger has put it, "'Users want to own their own data' is an ideology, not a use case."[10] If users do not value control enough to stomach additional complexity, and if centralized systems are simpler than distributed ones—and if, further, centralized systems tend to be closed and thus the successful ones enjoy powerful network effects—then social networks are indeed natural monopolies.

That said, I think there is still a distinction to be drawn between the subway system case and the social networking case. I am comfortable with the MTA's monopoly on subway travel because I expect subway systems to be natural monopolies for a long time to come. If there is going to be only one operator of the New York City Subway, then it ought to be the government, which is at least nominally more accountable than a private company with no competitors. But I do not expect social networks to stay natural monopolies. The Subway is carved in granite; the digital world is writ in water. Distributed social networks may now be more complicated than centralized networks in the same way that carrying two MetroCards is more complicated than carrying one. In the future, though, the web, or even the internet, could change in fundamental ways that make distributed technology much easier to use.

If that happens, perhaps FOAF will be remembered as the first attempt to build the kind of social network that humanity, after a brief experiment with corporate mega-databases, does and always will prefer.

If you enjoyed this post, more like it come out every four weeks! Follow @TwoBitHistory on Twitter or subscribe to the RSS feed to make sure you know when a new post is out.

Previously on TwoBitHistory…

I know it's been too long since my last post, but my new one is here! I wrote almost 5000 words on John Carmack, Doom, and the history of the binary space partitioning tree. https://t.co/SVunDZ0hZ1

— TwoBitHistory (@TwoBitHistory) November 6, 2019
  1. Please note that I did not dare say "dead." 

  2. Jack Schofield, "Let's be Friendsters," The Guardian, February 19, 2004, accessed January 5, 2020, https://www.theguardian.com/technology/2004/feb/19/newmedia.media

  3. Dan Brickley and Libby Miller, "Introducing FOAF," FOAF Project, 2008, accessed January 5, 2020, https://web.archive.org/web/20140331104046/http://www.foaf-project.org/original-intro

  4. ibid. 

  5. Wikipedia contributors, "JSON-LD," Wikipedia: The Free Encyclopedia, December 13, 2019, accessed January 5, 2020, https://en.wikipedia.org/wiki/JSON-LD

  6. "Data Sources," FOAF Project Wiki, December 11, 2009, accessed January 5, 2020, https://web.archive.org/web/20100226072731/http://wiki.foaf-project.org/w/DataSources 

  7. Aldon Hynes, "What is Dean Space?", Extreme Democracy, accessed January 5, 2020, http://www.extremedemocracy.com/chapters/Chapter18-Hynes.pdf

  8. "Understand how structured data works," Google Developer Portal, accessed January 5, 2020, https://developers.google.com/search/docs/guides/intro-structured-data

  9. Dan Brickley, "Identifying things in FOAF," rdfweb-dev Mailing List, July 10, 2003, accessed January 5, 2020, http://lists.foaf-project.org/pipermail/foaf-dev/2003-July/005463.html

  10. tef, "Why your distributed network will not work," Programming is Terrible, January 2, 2013, https://programmingisterrible.com/post/39438834308/distributed-social-network

01-Jan-20
HotWhopper [ 1-Jan-20 10:22pm ]
Australia has just had another "hottest year" on record, beating the last by quite a way. The Australia-wide annual mean temperature was a huge 1.52 C above the 1961-1990 average. The annual maximum was a whopping 2.09 C above, and the annual minimum (not a record) was 0.95 C above the 1961-1990 mean.

I've plotted all these on the same vertical axis for comparison. Scroll over the charts to see the data labels:

Figure 1 | All Australia annual maximum surface temperature anomaly. The base period is 1961-1990. Data source: Bureau of Meteorology, Australia

Figure 2 | All Australia annual mean surface temperature anomaly. The base period is 1961-1990. Data source: Bureau of Meteorology, Australia

Figure 3 | All Australia annual minimum surface temperature anomaly. The base period is 1961-1990. Data source: Bureau of Meteorology, Australia

The trend in the maximum temperature since 1951 is 2.2 C/century, in the mean it is 2.0 C/century, and in the minimum it is 1.8 C/century. I figure that means our days are heating more quickly than the nights, and heat waves are getting hotter. Well, we know heat waves are getting hotter: we've just had the record broken twice in one week, and by a long way. It could also reflect seasonal changes. I'll see if I can find out more from the good people at the Bureau, or do some more digging myself.
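For anyone wondering where a "C/century" figure comes from: it is just an ordinary least-squares slope fitted to the (year, anomaly) pairs, scaled up by 100. A minimal sketch, using made-up illustrative anomalies rather than the Bureau's actual series:

```python
# Compute a linear trend in degrees C per century via ordinary least squares.
# The sample anomalies below are made up for illustration, not BoM data.
def trend_per_century(years, anomalies):
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    # OLS slope: covariance of (x, y) divided by variance of x
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
             / sum((x - mean_x) ** 2 for x in years))
    return slope * 100  # degrees C per year -> degrees C per century

years = [1951, 1971, 1991, 2011]
anoms = [-0.3, -0.1, 0.3, 0.6]
print(round(trend_per_century(years, anoms), 2))  # 1.55
```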

The chart below shows the change in the decadal temperature for maximum, mean and minimum.

Figure 4 | All Australia decadal temperature anomalies. The base period is 1961-1990. Data source: Bureau of Meteorology, Australia
The "bangs" in the title is a reference to the horrific fires here. People in seaside towns along the coast of eastern Victoria and south-eastern NSW could hear LPG bottles exploding, along with houses and trees, as the fires tore through the townships. (In places where there's no natural gas piped in, people usually have gas cylinders next to the house.)

Australia is burning [ 01-Jan-20 1:47pm ]
Fires in East and Far East Gippsland and the high country exploded on Monday. We were warned.

Some people who I thought would have known better were sceptical of the warning from Emergency Services to leave Far East Gippsland. After all, it's a huge area, was jam-packed with holiday-makers, and it's on the coast (water puts out fire, right?). They may have neglected to factor in a number of things:

First, the fire services know what they are doing. If they tell people to leave they have good reason for doing so. Their worst nightmares became real, as you've probably heard by now.

Second, there is only one major road through that whole area, the Princes Highway. It's around 250 km from Bairnsdale (the Victorian town just outside the evacuation zone) to Mallacoota, the easternmost town in Victoria. Much of the road runs through national and state forest and is densely wooded. The road has some sections with three lanes (two going, one coming), but it's mostly just two lanes. Towns are small and far apart. The road is mostly winding and hilly. You'd normally need to allow at least three hours to traverse it, then it's another three and a half hours to Melbourne. (The nearest town to the north of Mallacoota is Pambula, about 1 1/2 hours' drive, but the roads north are also closed.)

Third, and this is related to the second point, it's common to get stuck behind a large vehicle that slows down going uphill, or a car towing a caravan that travels below the speed limit (100 kph except through towns). The number of people in this picturesque coastal region swells tenfold or more over the school summer holidays. Imagine having ten thousand visitors leaving Mallacoota to head to Melbourne, plus more from all the other small towns on the way, and getting trapped by fire on the highway.



The tiny town of Club Terrace on the main highway has been burnt out. Cann River on the main highway remains inaccessible and the people trapped there have run out of food and have no power or communications. Mallacoota is almost 30 km off the main highway, and the road has just been cleared for emergency vehicles only. Normal traffic can't get through and even if it could, people would not be able to get down the highway. The only other route would be north, but that road is also closed. You can see how people are trapped where they are on the VicRoads map below, with the red dots being road closures (click to enlarge it).


In the map above I've shown the direction of Melbourne and Sydney, but this is the scenic route, not the normal route between those two cities. Most people driving from Melbourne to Sydney or vice versa would take the more direct inland route up the Hume Freeway, which bypasses towns and is a four lane divided highway once you leave either city (could be six or eight lanes in Melbourne and Sydney).

On top of the terror of raging fires, a lot of power is out as well, which affects phones, refrigerators, air con etc. It could be out for days yet. The backup batteries on some mobile towers also ran out or maybe the towers got burned. Until it's safe to go in and assess the situation, the telcos and electrical distributors won't be able to begin repairs.

Because these fires are mostly in bushland they will go on for weeks, like the ones in NSW that are smoking up Canberra and Sydney. It's dangerous and difficult to get into the forests. Firefighters focus on protecting lives and property, but can't protect all property (or lives). These areas are sparsely populated normally so there wouldn't be a lot of local volunteers compared to the need. That means there'll be volunteers from distant areas and heaven help their own local communities if fires break out there.

That's not to say people will be stuck away from home for weeks. There are ships going to pick up people trapped in Mallacoota, for example, and maybe some other towns; and there's talk of helicopter rescues. A top priority will be to open the roads again and hope there are not too many more days like Monday this summer. (This Sunday isn't looking too promising and there's a lot of summer to go, so no guarantees). For residents who've lost their homes and farms and businesses, it will be a long road to recovery. Hopefully there will be tradespeople willing to relocate temporarily to help rebuild. The forests will take even longer to recover. Some might not.

I've not yet mentioned the awfulness of the fires in our own region. The Walwa fire not far to the northeast of here has caused a huge amount of destruction in Corryong and the news is just starting to trickle in. [Apart from a very smoky summer, we're fine here in the Kiewa Valley. There have been a few fires this summer, but nothing too dramatic and they are either "contained" (i.e. not spreading), "controlled" (i.e. whole perimeter is secured and no breakouts expected) or extinguished.]

There's a good article by Nerilie Abram in Scientific American about climate change and fires. The tweet about it showed there are still a few deniers floating about. Few firefighters would doubt the world is warming and fire behaviour is changing. (WUWT hasn't mentioned the fires. Whoever's writing for and running the blog these days probably wants to avoid any clear evidence that would upset climate conspiracy theorists.)

Most deniers know next to nothing about wildfire. Some make up stuff about "lack of backburning", confusing it with prescribed burns for hazard reduction. (Backburning is where firefighters fight fire with fire, setting the forest alight ahead of the main fire when conditions allow it, to "burn back" into the fire.) Some blame it on the "greens" (who don't run any government in Australia so can't be blamed for anything, let alone a non-existent "crime"). I believe the allegation is along the lines of "greens won't let us beer-swilling or more likely cocoa-drinking deniers chop down all the trees and cover the entire country with concrete". As if these cocoa-drinking deniers would know what to do with a chainsaw or have a clue about mixing concrete anyway. Some deniers say "we've always had fires", which is like being in the middle of the worst cyclone on record saying "but we've always had rain and wind". There are probably some deniers who doubt there are any areas burning in Australia.

I've dug out some old images of fires from past years. 2013 was known as "the angry summer". 2020 will be the angrier summer, with worse heatwaves and worse fires. The big difference this year is not just the number of fires, but the location and the ferocity.  The east coast of Australia is heavily populated.



There is a lot more that could and will be written about this summer, including the abysmal non-reaction from the Australian government. Scott Morrison is the Prime Minister who trotted off for a holiday in Hawaii while his home state was burning to a crisp. He came back, got some photo ops with a few firefighters then, having figured he'd done enough on that score, threw a new year's eve party and went to the cricket. He is adamant he's not going to do anything more to reduce carbon emissions. (Some speculate it's on religious grounds. He's a member of a fairly small and suspect "religious" congregation.)

On that unpromising note, let me change the subject and wish you all a Happy New Year, or at least a fulfilling and satisfying year.

References and further reading
Australia's Angry Summer: This Is What Climate Change Looks Like - article by Nerilie Abram, Scientific American, December 31, 2019

Australia's Angry Summer - HotWhopper, March 2013

Corryong fires - via Australia's national broadcaster, the ABC

NSW fires - via the Sydney Morning Herald

'Not like other bushfires' - from The Age

Thousands forced to take refuge on Australian beach as deadly wildfires close in - from Washington Post

Why the Fires in Australia Are So Bad - from the New York Times

Australia fires: nine dead and hundreds of properties destroyed, with worse to come - The Guardian

Australia fires: Death toll rises as blazes destroy 200 homes - BBC






30-Dec-19
Predictions for 2020 and beyond [ 30-Dec-19 7:23pm ]

It’s almost the end of the year, so here are some predictions for 2020 and the 2020s.

Let’s get the easy ones out of the way:

  • Climate change will continue to accelerate.
  • Greenhouse gas emissions will continue to increase, possibly at a faster rate than in 2019.
  • We’ll see extreme weather events, including extreme snowfall, extreme winds, massive cyclones, record-breaking drought, and extreme wildfire. Somewhere.  I don’t know which will occur when or where, though.
  • Business will continue to look for ways to profit off climate change, instead of stopping it.
  • People will continue protesting about climate issues, but since the movement leaders seem to be still reinventing the wheel and using ideas, tactics, and strategies that The Establishment has good counters to, they won’t get as much done as needs to be done.  And that will really and truly suck.

I’ll stick my neck out and make some political predictions:

  • A billionaire will win the 2020 US election.
  • Trump will ultimately be indicted on counts they didn’t bother to impeach him over (either in 2021 or 2025).
  • Mitch McConnell will win his re-election.
  • Billionaires will continue to gain control.  Also, billionaires will continue to demonstrate that wealth is a good substitute for intelligence and character.
  • I will continue to be a contrarian pessimist who delights in giving reality opportunities to demonstrate how wrong my political prognostications are….

On the science and technology front:

  • Facebook and Social Media in general will increasingly become “so last decade,” especially as political, social, and hacking issues mount.
  • The truism that anything connected to the internet can be hacked will be demonstrated in a new way (this is going to be annoying to check in December 2020).
  • AI’s potential and shortcomings will become more evident.
  • Skeet- and trap-shooting will become more popular among non-hunters.  In possibly unrelated news, people will publish thought pieces about how to deal with the problems drones pose (checking this will be tricky too.  Oh well.).
  • There will be new battery technologies tested in labs and covered breathlessly by the tech press.  I’ll stick my neck out and say that one new battery technology actually demonstrates it can scale up to commercial use.
  • Disputes over lithium and sand will continue.  Phosphorus shortages will get coverage as a growing problem.
  • The fishing industry will get some horrific expose, either about working conditions, effects on sea life, and/or looming extinction of favorite food fishes.
  • Some tiny advance in nuclear fusion will be trumpeted as heralding the dawn of nuclear fusion as a power source.
  • A company focused on finding new antibiotics and bringing them to market will go out of business.

And a few more random predictions:

  • Some people (other than me) will call the decade we’re leaving “The Terrible Teens.”  Perhaps the decade we’re entering will be called “The Howling Twenties”?
  • Habitat gardening will increasingly become a thing (I’m cheating, because the group I’m active with is holding a workshop on this in a few weeks).
  • There will be a recession in the housing industry.
  • I will be able to post more on this blog.
  • I’ll be posting next December about how my predictions came out.

Anyway, congratulations on successfully escaping 2019. I hope your 2020 is no worse, possibly even better, than 2019 was for you.

What are your predictions?

24-Dec-19
HotWhopper [ 24-Dec-19 2:31am ]
Season's greetings to all [ 24-Dec-19 2:31am ]
A short, sweet and old-fashioned greeting to everyone.

I'm sorry I've not been blogging much this past couple of years, but fear not (or fear, depending who you are), I shall return in 2020.

Here is a picture of my most Christmas-y plant - Little John Callistemon, which keeps getting better and better each year and thrives on very light pruning and general neglect.



And another, this time a snapshot of the next door neighbours' decorations. They have been entertaining the local children (large and small) and raising money for local charities for decades and continue to do so despite the fact that Santa suffered a stroke some time ago, which has been quite debilitating for him. The photo doesn't do justice to the lights, which look amazing. Santa's daughter made the kangaroos :)



Happy holidays wherever and whoever you are, especially to all the courageous men and women fighting fires around the country and not forgetting all the people supporting them.

Stay safe.
20-Dec-19


From Mount Beauty, Dec 2006

The fires across Australia this year are horrific. Because the smoke is inundating the biggest capital city (not good), people are taking notice (which is good). The fires this season are probably vying for the worst ever experienced in this country. There will be worse to come with more global warming, so it's important to be prepared.

I expect there are a lot of people who've never had an up close and personal experience with fires or smoke, so I figured I'd put some thoughts down from my own experience. I'm not a fire expert but I've been through a few huge fires in my time, including three big ones this century. (If you've got better or different advice, based on your knowledge and experience, don't hesitate to say in the comments below.)

Unlike the current fires, the big ones that threatened our town were large and slow burning in the main, with some exceptions. Like most of the current fires, two were started by lightning. More on that in a bit.

Smoke hazard: One thing with fires in the bush that seem to go on forever (weeks not days), is the smoke. This causes problems for communities - you can't see flames if you can't see, which adds to anxiety. Visibility can get to a few metres some days. You can't breathe properly, your lungs hurt and your eyes suffer. Community meetings can be frustrating when you're told - well we can't see the fire front so we can't say how far it is from anywhere.



Smoke from a local fire.

Smoke also causes problems for firefighters. Planes and helicopters have trouble with visibility and might not be able to be used. Everyone, especially firefighters, suffers smoke inhalation. Firefighters might not know where best to target their efforts.



Smoke cloud - Mount Beauty, 2006

I'd advise wearing a P2 mask, which filters out the worst smoke contaminants. Don't worry about looking uncool - you might even set a trend and make mask-wearing the latest and greatest fashion. It beats damaging your lungs and worse.

Embers: Especially when it's windy, embers can be carried kilometres from the fire. On a (relatively) clear day, you can see the spot fires starting up ahead of the main fire. If you're trying to protect your home and there's an ember storm, it will be almost impossible to keep up.



Dropping fire retardant on a fire on a hill behind our house.

Water: It can be tempting to stay to protect your home. You've got hoses out and bins and buckets filled with water, and lots of towels or hessian bags or blankets. You've filled every bath and basin. You've blocked and filled all the gutters with water. You think you'll be fine if you stay. The problem comes if you're relying on town water coming out of the tap, and everyone else in town does the same. Then, because everyone's pouring out gallons of water at the same time, the town's water pressure drops or dries up altogether.

Or maybe you're in a region where there's drought (much of Australia), and there's little to no water available. Helicopters and planes use water from local dams, but in a drought the dams might all be empty. Or maybe you're relying on water in a tank - except it's so hot the tank has melted. Or it could be you're using a pump to get water, but the pump stops working.

After the fire passes, the town water supply might be contaminated, especially after the next rain that washes everything that burnt into the water supply. Be prepared to get bottled water or, if you're lucky and there's still tap water, to boil it for the next few weeks.

Heat: I'm not talking about hot weather. Yes, indeed, hot weather can be deadly. Some places here are seeing maximum temperatures approaching 50C (122F). What I'm talking about is heat from the fire. If you've ever been near a bonfire you'll understand what I mean.

Whatever you do, make sure you wear protective clothing. (Look at what the firies wear even in unbelievable heat.) Don't wear thongs (flip flops), or shorts and singlet. Don't wear clothes made of flammable material. Go for wool or some other fire-resistant material with some insulating property. Wear long sleeves, long pants, gloves, boots and socks, and a mask. You'll get hot but you'll be less likely to get radiation burns. (If you've got a working hose then, as a last resort, spray the water to form a barrier between you and the flames.)

Wind: There is the normal wind that comes from changes in air pressure. A shift in that wind will change the direction of fire. It could change a fire front from being 200 m wide and heading east (fanned by a westerly wind) to a fire front 5 km wide and heading north (fanned by a strong southerly change).

There is also the wind created by the fire itself. When the fire is vigorous or fast moving, it can create its own weather. This makes an already unpredictable fire extremely unpredictable and dangerous.

Roads: You've finally decided enough is enough and it's too dangerous to remain, so you jump in the car or sturdy ute and head for anywhere but the fire. Problems you might encounter are that you can't see where you're going because of the smoke; or worse, you can't get through the road because of fallen trees. This is a huge problem if there are only one or two roads, which is the case in many areas.

The moral is, don't wait. Leave early.

Communications & electricity: We're used to picking up the phone, getting on the internet, watching television or tuning into the radio. In a big fire, communications towers can be destroyed and there can be power outages. You've been warned.



Firefighters: In much of Australia, fires on properties outside of the major cities are fought by organised volunteers, except for government land, where they are fought by government workers. Volunteer firefighters, like Victoria's CFA and the NSW RFS, are mostly men and women who live and work in country towns and on farms. It used to be they'd go out and put out a haystack fire, or a grass fire that might burn for a day or so. Now they can be giving up their work and income for weeks on end, fighting fires in their own district or traveling far from home and helping protect private property from megafires elsewhere. Not only are they giving up income, their employers (if they aren't fighting fires) are having to do without staff. Families have to make do on less income and with less support.



Some of our local fire fighters looking after us while there's a fire up the hill. Thank you.

Then there's a problem that probably occurs too often. The local fire crew is off fighting a fire in the next valley (or another one 500 km away), and a fire breaks out in their home district, but there aren't enough people or equipment to fight it because they're all off fighting a fire elsewhere.

You rarely hear firefighters complain. Firefighting and emergency services are what they volunteer to do, and they are committed to it. In my view the system will need to change long term, and we should be compensating them. Until then (and after then), just bear in mind that firefighters (whether volunteers or government) will probably be tackling the fires with the following priorities: save lives first, then save property, then save bushland and, occasionally, wildlife.

Most people are aware and responsible when it comes to bushfires. Sometimes people can be unthinking, however. People who put themselves in harm's way, resulting in firefighters coming to their rescue, might be not just risking their own lives, they might be preventing the firefighters from saving lives elsewhere.

On that note, don't go gawking. You'll not just be risking your own life, you'll be cluttering up the road and endangering emergency responders as well as people who may be fleeing for their lives.

Managing emotions: Unless you've got no emotional capacity, you'll most likely be affected in one way or another if you've been through a fire or know people who have. Long drawn out fires take their toll. You're woken in the night by the loud cracking of exploding trees, or you can't get a decent night's sleep for weeks on end because you never know if the fire is far enough away or if it's working its way down the hill behind you. You'll probably also find yourself becoming addicted to the radio. (You'll have dug out that old transistor radio and picked up some spare batteries, to tune into the emergency broadcast service on the local ABC.)

While long drawn out fires can heighten anxiety, immediate fire danger can elicit panic, or maybe a deceiving calm. You might think you're behaving rationally and with a clear head. Unless you're trained and have experience with disasters (and maybe even then), despite feeling calm and rational you risk making poor decisions.

If you've already got a plan (and you know you should) then follow it. Don't change things at the last minute.

Weeks, months, even years after living through a disaster, people can be affected. It might be post-traumatic stress or it might be a shadow of PTSD (not full blown). (Be prepared the following autumn to get a rush of adrenalin when you see leaves fall from trees, before you realise they are just autumn leaves not embers.)



Eerie colours - 2006 fires.

Implement your fire plan: If you're advised to get the hell out, do so. Grab your pre-packed bag that has water, masks, survival gear, protective clothing. Round up your family and put all your pets in their cages and into the car. Check you've got your wallet and phone and car keys. Jump in the car (which you've kept charged or full of fuel), do a final head count, and head for the nearest safe place. (You have looked up the designated safe places, haven't you? You know where they are.)

Whatever you do, don't go back home until the all clear has been given. That could kill you (and has killed people).

Before finishing, a word about deniers. They are dangerous (as well as all their other flaws). I've seen deniers claim "this is nothing new". That's wrong. Fires today are worsened by climate change. Each decade brings worse fires. The fires this season could well be the worst in Australia's history. The more prepared we are, the better the chance that while they might be the worst by many measures, they won't be the deadliest.

Another thing I've seen is deniers still trying to argue there's some sort of scientific conspiracy and that Australia isn't really that hot, or the records have been altered to make out it's got hotter than it really has - which is as ridiculous a notion as it sounds.

Then there are people claiming the fires were lit by people. Maybe some were, but the biggest and worst fires were caused by lightning. In any case, in catastrophic fire conditions it doesn't matter where the spark comes from. When weather conditions are not conducive to fires, whether they are started by lightning, a train, a power line, an angle grinder or an arsonist, they cause a lot less harm.

Final word: Lives are worth a lot more than houses, or art works, or photographs, or jazz collections, or whatever might cause you to delay or hesitate to leave. Too many people have lost their lives by remaining. Few people lose their lives by leaving early when a fire threatens.

Final final word: I hesitated a bit before writing this up. I'm not an expert on fire or disaster management. However, I've been through some major fires in recent years and I've not seen anything much like this on the web, despite the fires raging. It might be food for thought to someone.




15-Dec-19

Just some positive thinking–what’s happening to me?–for the holidays.  This is another in my series of alt-histories I wish someone else would write, and it’s kind of in the spirit of DC’s famous Watchmen comic.

The idea is pretty simple: how do you write an alt-history where people take climate change seriously enough to do something about it?  I’ve butted heads long enough to know that most people in the Baby Boom and Gen X (at least in Middle and Upper Class America) are not interested in dealing with climate change if it causes them any serious inconvenience.  I sympathize with the kids of “generation omega” (they’re now hitting college, and I hope the GenO doesn’t catch on).  They’re scared and furious, and they should be.  I’m getting sick of how many ways I’ve heard people find ways to not do anything.  Either we’re all going to hell anyway, or there’s no problem, or there’s a problem but nothing we do as individuals matters, or –look, squirrel/nuclear power/don’t eat meat/distract/disrupt/distract/bullshit….–I get this a lot when I talk with people.  Trump’s rubbing off on everyone.

So anyway, I’d suggest a different take on cli-fi.  It was sort of done in Watchmen, but it could be done better.  The idea starts with someone leaking all the research the petroleum companies did on climate change back in the 1950s and 1960s, with the document leak happening in the 1970s if not before.  At the height of the environmental movement, Boomers start taking climate change seriously, putting us decades ahead of where we are now.

I don’t think it would necessarily be blue skies and blooming roses, but it might be pleasant for writers and environmentalists to think about what might have been, had we given a shit when we had the oil to really rebuild global infrastructure instead of going on a mad, metastatic building spree as we did in the real world.  Heck, if you’re interested in writing this story universe, have Timothy Leary get run over by a hippy’s bus, so that LSD becomes legal too, while you’re at it.

As with any of my crazy ideas, feel free to take it and run with it.  I’m working on something entirely different.

 

Back on December 27, 2018, I posted a set of predictions.  I haven’t posted much since then, because I’ve been annoyingly busy with conservation work, fighting a bunch of leapfrog sprawl developments in San Diego County.  Most of that I can’t really talk about due to litigation issues, but I can at least go over the predictions I made a year ago and score how well I did.

Here they are.

  • That I'll write a column in December 2019 about what I got right and wrong.  This means, among other things, that I won't die in a pandemic or nuclear war in 2019, and that civilization won't collapse.

Got that right.  Yay!  It’s always good to start off a set of predictions with a win.

  • The US president as of December 2019 will be either Pence or Trump, most likely Trump.  This isn't because I'm a Trump supporter, but for two reasons.  One is that the US Senate is Republican, so they're not going to vote to impeach him.  Also, the US looks like it can weather having an incompetence-in-chief, so long as we don't get into a war, and since he's smart enough not to start a nuclear war, I don't think the US is going to get invaded.  Rather, I think it's to a lot of people's advantage to dump liquid oxygen on the bonfire of his vanity, to make him unable to wage the struggle for reelection for 2020, even while he's roped into it as the only Republican candidate running.

To no one’s surprise, Trump’s still president.

  • Hard, no-deal Brexit won't happen.

Oh how I wish I’d gotten this right, but after Thursday’s election it seems English Nationalism is looking to North Korea for a model of cordial international relations.  Is this sour grapes for losing the empire?  If so, please do look at how France has handled their transition with rather more aplomb (and rather more islands still under their control).

  • There will be lots of disasters linked to climate change.

Such as the Midwest US floods, massive fires in Australia, methane bubbling from the East Siberian Sea… And I’ll make the same prediction for 2020.

  • Nuclear fusion will be announced to be 30 years away at some point during the year.

Kinda maybe?  The Lockheed baby fusion reactor of 2015 is on its fifth design, but still hasn’t gone live.   On the bad side, this shows their initial PR was so much BS (sadly, no surprise).  On the other hand, they’re actually working on it, which I guess is a good thing.  If fusion’s going to play any role in dealing with climate change, we needed it 30 years ago, really.

  • Suburban sprawl will largely stop in California and San Diego (this is part of the stuff I can't talk about further).

I was pretty sure, going into 2019, that this was a safe prediction, and I was largely correct.  However, it’s not a permanent stop.  What’s going on is that there were a bunch of highly problematic leapfrog sprawl developments.  Most of them were approved, and the ones that were approved ended up in court as their opponents sued.  There are three left in the queue, and they’ll all get the same treatment.

But it wasn’t unanimous.  A couple of developments (notably Paradise Valley in Riverside County) were ignominiously shot down, after much work by the local environmentalists.  Local county supervisors (courageously) stood up against the corruption that’s generally accompanied the sprawl.  The problem with these projects is that they’ll gross hundreds of millions to billions of dollars at full build-out, so the cost of financing elections and other shenanigans is a minor part of the budget.  Kudos to the Riverside Supervisors for doing the right thing regardless with Paradise Valley.  I could only wish that the majority of San Diego Supervisors had similarly developed notochords.

If we’re really lucky, the judges will say some good things about not putting people in harm’s way from fires and earthquakes.  Perhaps their rulings (perhaps!) will make it harder to build in dangerous areas.  I’m skeptical about this last, because right now the trend on hazard analyses is to lie and hope that your opponents can’t sue (or pay for the appeal) to make you tell the truth.  Even if a judge sternly tells developers that they need to not hang people out to fry, they’ll likely do it anyway, while claiming on paper that they are not.

  • A bunch of new bench-top battery technologies will be published and then disappear as someone tries to create them on a commercial scale.

Oddly enough, this article just showed up, so yes, and I didn’t even have to keep track of the battery news in 2019.  This was a safe prediction.

  • Something vital we thought was correct about social media and its inevitability will turn out to be completely untrue in scary ways.

Since I was silly enough to throw “inevitability” in here, I have no idea how to score this one.  Definitely the glow is off social media, and I think a lot of us are getting sick of the “lock-in” it’s so far achieved with much of society.

  • There will be a lot of politicking around the Green New Deal.

Yes, and I’m not going to link with it.  While the words of the actual Green New Deal are quite inspiring, what I’ve seen about how it’s being organized on the ground…how do I say this?…could use some improvement.  The other thing I could say is that I started out involved, then I stopped being involved, and I sincerely hope that what ultimately comes out is better than what I saw earlier this year.

  • San Diego will start working on the third edition of their Climate Action Plan, as the first two have been thrown out by judges.

I was wrong.  The County has so far lost five times on this in court, but they decided to appeal again rather than just getting on with writing a new plan.   There’s a Forrest Gump saying that adequately covers their behavior.

  • The 2019 rainy season will be drier than the 2018 season.  And I'll wash my car a lot in the next few months too.

This one I can’t score.  We’ve had more rain than we had at this time last year, which was extremely fortunate, as it ended the fire season before it could properly get going.  The critical question is whether those few inches are all we’re going to get this rain year, or whether there’s more coming.  It’s impossible to tell, as rain in southern California almost always comes in a few storms, and we get rain or not depending on whether the storms hit us or miss us.

Given that I’m a vocal pessimist, it’s kind of embarrassing that the stuff I got wrong, like the English doing Brexit and the behavior of the San Diego supervisors, was because I was too optimistic about their behavior.  Guess I’ll have to work on that.

Happy holidays, all.

14-Dec-19
Scarfolk Council [ 14-Dec-19 2:24pm ]
Surrender Hope (1975) [ 14-Dec-19 2:24pm ]


10-Dec-19
rohorn [ 10-Dec-19 3:54am ]
Hunting and Gathering... [ 10-Dec-19 3:54am ]
All of this year's track time was spent behind the viewfinder shooting with a new JVC GY-HM620. A lot of my clips made it into this year's MRA Awards Banquet video. Already looking forward to wandering around and shooting at HPR next year!


After the Quail event, Jason Cormier at Odd Bike asked me for an article on the racer project and some of the background - that gave me a good excuse to explain a little how I got to this point and a little more about where I'm going. It's been a fun experience so far, which made it a lot of fun to write. Thanks, Jason!

All of the custom ordered parts have arrived: Connecting rods, crank pin, and hybrid Kawasaki KX500/Ducati 999 primary gear (All from England), Poly Chain GT 8 mm pulley stock (For final drive), and 56 mm Lectron downdraft carburetor (For feeding 706 cc crankcase displacement). And then there's the one and only part I just pulled off the last racer that goes on the next one - it's a part that's getting a lot harder to find.

The engine casting will have to work well with the 2WD final drive arm, so those get designed and built together - after that, the frame and 2WS system pretty much fall into place. The big plan for the bodywork is to have as little as possible, as simple as possible, and as cheap and easy to replace, vinyl wrap, and install as possible - wasting time and money on ugly over-styled plastic that gets vinyl wrapped anyway seems really stupid to me, especially for a racer, where bodywork is considered a consumable item, like tires, safety wire, collarbones, etc...

03-Dec-19
Scarfolk Council [ 3-Dec-19 8:06pm ]
Election Posters of the 1970s [ 03-Dec-19 8:06pm ]
Of all the 304 general elections that were held in the UK during the 1970s, these three election posters for the Conservative party are among the few campaign materials that are still extant. This is largely due to the fact that campaign slogans were more often compulsorily tattooed onto ailing citizens who collected welfare benefits.*

All promotional literature was designed and printed by the Scarfolk Advertising Agency, which, it was later revealed to the surprise of all clients concerned, had been working not only for the Conservative Party, but also for the Labour and Liberal Parties.

Furthermore, the agency cleverly maximised its profits by selling exactly the same poster designs to all clients. Only the party name was changed. This made it difficult for voters to decide who to vote for, but it also confused politicians who became unsure which party they belonged to.

*See also: 'Trampvertising'.

Further reading: 'Watch Out! There's a Politician About' (1975), 'Voting isn't Working' election poster, 'Democracy Rationing', 'Put Old People Down at Birth' election pamphlet.

15-Nov-19
The Let's Think About... booklet was published by Scarfolk Council Schools & Child Welfare Services department in 1971. It was designed for use in the classroom and encouraged children between the ages of five and nine to focus on a series of highly traumatic images and events.

Parents and teachers assumed that the booklet was based on psychological research but it had no scientific basis whatsoever. The booklet's medically untrained author was one of the dinner ladies from the council canteen before she was fired for attempting to slip strychnine into bowls of blancmange.

Despite the scandal, the booklet remained on the school curriculum for many years and the author was invited by the council to pen an updated edition from her prison cell in 1979.


07-Nov-19

Apocalyptic toys were all the rage in the late 1970s, not that they were thought of as apocalyptic at the time. Citizens didn't fear their annihilation; they quite looked forward to demonstrating their 'Dunkirk spirit' with the misguided belief that it would somehow bring the country together. It didn't occur to them that their dogmatic nationalism might instead bring about the demise of the nation.
As the country moved toward collapse, social unrest and inevitable casualties increased. The paranoid state began anonymously exterminating citizens who so much as hinted at insurrection. Average (and the vast numbers of below-average) people were killed in street clashes between opposing factions and there were spates of frightened suicides.

Scar Toys exploited this expanding market opportunity and created a range of toys aimed at the many children in the process of being orphaned. One such toy, the Breath Mirror Set, aimed at young girls, was designed to accompany their more traditional beauty/vanity toys. The deluxe set (see picture above) included one mirror for each parent, colour-coded as per gender convention: pink for girls, blue for boys.

The wording on the back of the packaging encouraged children to use the mirrors beyond the death of their own parents. Included was a little booklet into which little pink stars could be affixed for every corpse that was identified using the mirrors. Highly sought-after prizes were awarded to the girls with the most stars and council archival documents reveal that the police turned a blind eye when gangs of little girls began slaughtering adults in frenzied attempts to accumulate more stars.
Two-Bit History [ 6-Nov-19 12:00am ]

In 1993, id Software released the first-person shooter Doom, which quickly became a phenomenon. The game is now considered one of the most influential games of all time.

A decade after Doom's release, in 2003, journalist David Kushner published a book about id Software called Masters of Doom, which has since become the canonical account of Doom's creation. I read Masters of Doom a few years ago and don't remember much of it now, but there was one story in the book about lead programmer John Carmack that has stuck with me. This is a loose gloss of the story (see below for the full details), but essentially, early in the development of Doom, Carmack realized that the 3D renderer he had written for the game slowed to a crawl when trying to render certain levels. This was unacceptable, because Doom was supposed to be action-packed and frenetic. So Carmack, realizing the problem with his renderer was fundamental enough that he would need to find a better rendering algorithm, started reading research papers. He eventually implemented a technique called "binary space partitioning," never before used in a video game, that dramatically sped up the Doom engine.

That story about Carmack applying cutting-edge academic research to video games has always impressed me. It is my explanation for why Carmack has become such a legendary figure. He deserves to be known as the archetypal genius video game programmer for all sorts of reasons, but this episode with the academic papers and the binary space partitioning is the justification I think of first.

Obviously, the story is impressive because "binary space partitioning" sounds like it would be a difficult thing to just read about and implement yourself. I've long assumed that what Carmack did was a clever intellectual leap, but because I've never understood what binary space partitioning is or how novel a technique it was when Carmack decided to use it, I've never known for sure. On a spectrum from Homer Simpson to Albert Einstein, how much of a genius-level move was it really for Carmack to add binary space partitioning to Doom?

I've also wondered where binary space partitioning first came from and how the idea found its way to Carmack. So this post is about John Carmack and Doom, but it is also about the history of a data structure: the binary space partitioning tree (or BSP tree). It turns out that the BSP tree, rather interestingly, and like so many things in computer science, has its origins in research conducted for the military.

That's right: E1M1, the first level of Doom, was brought to you by the US Air Force.

The VSD Problem

The BSP tree is a solution to one of the thorniest problems in computer graphics. In order to render a three-dimensional scene, a renderer has to figure out, given a particular viewpoint, what can be seen and what cannot be seen. This is not especially challenging if you have lots of time, but a respectable real-time game engine needs to figure out what can be seen and what cannot be seen at least 30 times a second.

This problem is sometimes called the problem of visible surface determination. Michael Abrash, a programmer who worked with Carmack on Quake (id Software's follow-up to Doom), wrote about the VSD problem in his famous Graphics Programming Black Book:

I want to talk about what is, in my opinion, the toughest 3-D problem of all: visible surface determination (drawing the proper surface at each pixel), and its close relative, culling (discarding non-visible polygons as quickly as possible, a way of accelerating visible surface determination). In the interests of brevity, I'll use the abbreviation VSD to mean both visible surface determination and culling from now on.

Why do I think VSD is the toughest 3-D challenge? Although rasterization issues such as texture mapping are fascinating and important, they are tasks of relatively finite scope, and are being moved into hardware as 3-D accelerators appear; also, they only scale with increases in screen resolution, which are relatively modest.

In contrast, VSD is an open-ended problem, and there are dozens of approaches currently in use. Even more significantly, the performance of VSD, done in an unsophisticated fashion, scales directly with scene complexity, which tends to increase as a square or cube function, so this very rapidly becomes the limiting factor in rendering realistic worlds.1

Abrash was writing about the difficulty of the VSD problem in the late '90s, years after Doom had proved that regular people wanted to be able to play graphically intensive games on their home computers. In the early '90s, when id Software first began publishing games, the games had to be programmed to run efficiently on computers not designed to run them, computers meant for word processing, spreadsheet applications, and little else. To make this work, especially for the few 3D games that id Software published before Doom, id Software had to be creative. In these games, the design of all the levels was constrained in such a way that the VSD problem was easier to solve.

For example, in Wolfenstein 3D, the game id Software released just prior to Doom, every level is made from walls that are axis-aligned. In other words, in the Wolfenstein universe, you can have north-south walls or west-east walls, but nothing else. Walls can also only be placed at fixed intervals on a grid—all hallways are either one grid square wide, or two grid squares wide, etc., but never 2.5 grid squares wide. Though this meant that the id Software team could only design levels that all looked somewhat the same, it made Carmack's job of writing a renderer for Wolfenstein much simpler.

The Wolfenstein renderer solved the VSD problem by "marching" rays into the virtual world from the screen. Usually a renderer that uses rays is a "raycasting" renderer—these renderers are often slow, because solving the VSD problem in a raycaster involves finding the first intersection between a ray and something in your world, which in the general case requires lots of number crunching. But in Wolfenstein, because all the walls are aligned with the grid, the only location a ray can possibly intersect a wall is at the grid lines. So all the renderer needs to do is check each of those intersection points. If the renderer starts by checking the intersection point nearest to the player's viewpoint, then checks the next nearest, and so on, and stops when it encounters the first wall, the VSD problem has been solved in an almost trivial way. A ray is just marched forward from each pixel until it hits something, which works because the marching is so cheap in terms of CPU cycles. And actually, since all walls are the same height, it is only necessary to march a single ray for every column of pixels.
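That marching idea can be sketched in a few lines of Python. Everything here is illustrative — the map, the fixed step size, and the function names are mine, and a real engine like Wolfenstein's jumps directly between grid-line crossings rather than taking tiny fixed steps — but it shows why a grid of axis-aligned walls makes the VSD problem nearly trivial:

```python
import math

# Toy grid map: 1 = wall cell, 0 = empty (a made-up level, not Wolfenstein data).
MAP = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def march_ray(px, py, angle, step=0.01, max_dist=20.0):
    """March a ray from (px, py) until it enters a wall cell.

    Because walls live only on grid cells, a cheap forward march
    finds the first hit; no general ray-polygon intersection math
    is ever needed.
    """
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if not (0 <= int(x) < len(MAP[0]) and 0 <= int(y) < len(MAP)):
            return None          # left the map without hitting anything
        if MAP[int(y)][int(x)] == 1:
            return dist          # distance to the first wall hit
        dist += step
    return None

d = march_ray(1.5, 1.5, 0.0)     # looking along +x; hits the east wall
```

In the real renderer, one such ray is marched per column of pixels, and the returned distance sets the on-screen height of the wall slice drawn in that column.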

This rendering shortcut made Wolfenstein fast enough to run on underpowered home PCs in the era before dedicated graphics cards. But this approach would not work for Doom, since the id team had decided that their new game would feature novel things like diagonal walls, stairs, and ceilings of different heights. Ray marching was no longer viable, so Carmack wrote a different kind of renderer. Whereas the Wolfenstein renderer, with its ray for every column of pixels, is an "image-first" renderer, the Doom renderer is an "object-first" renderer. This means that rather than iterating through the pixels on screen and figuring out what color they should be, the Doom renderer iterates through the objects in a scene and projects each onto the screen in turn.

In an object-first renderer, one easy way to solve the VSD problem is to use a z-buffer. Each time you project an object onto the screen, for each pixel you want to draw to, you do a check. If the part of the object you want to draw is closer to the player than what was already drawn to the pixel, then you can overwrite what is there. Otherwise you have to leave the pixel as is. This approach is simple, but a z-buffer requires a lot of memory, and the renderer may still expend a lot of CPU cycles projecting level geometry that is never going to be seen by the player.
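The per-pixel depth test at the heart of a z-buffer is only a few lines. This sketch uses a made-up 4×4 framebuffer and invented names, but the check is the whole idea:

```python
# Minimal z-buffer sketch (hypothetical 4x4 framebuffer, not Doom's code).
W, H = 4, 4
frame = [[None] * W for _ in range(H)]          # color drawn at each pixel
zbuf  = [[float("inf")] * W for _ in range(H)]  # depth drawn at each pixel

def plot(x, y, depth, color):
    """Draw a pixel only if it is closer than what is already there."""
    if depth < zbuf[y][x]:
        zbuf[y][x] = depth
        frame[y][x] = color

plot(1, 1, depth=10.0, color="far wall")
plot(1, 1, depth=4.0,  color="near pillar")   # overwrites: closer
plot(1, 1, depth=7.0,  color="mid crate")     # ignored: farther
# frame[1][1] is now "near pillar"
```

Note the two costs the article mentions: a full `zbuf` array per frame (memory), and the work of projecting and testing geometry that ultimately loses every depth test (CPU cycles).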

In the early 1990s, there was an additional drawback to the z-buffer approach: On IBM-compatible PCs, which used a video adapter system called VGA, writing to the output frame buffer was an expensive operation. So time spent drawing pixels that would only get overwritten later tanked the performance of your renderer.

Since writing to the frame buffer was so expensive, the ideal renderer was one that started by drawing the objects closest to the player, then the objects just beyond those objects, and so on, until every pixel on screen had been written to. At that point the renderer would know to stop, saving all the time it might have spent considering far-away objects that the player cannot see. But ordering the objects in a scene this way, from closest to farthest, is tantamount to solving the VSD problem. Once again, the question is: What can be seen by the player?

Initially, Carmack tried to solve this problem by relying on the layout of Doom's levels. His renderer started by drawing the walls of the room currently occupied by the player, then flooded out into neighboring rooms to draw the walls in those rooms that could be seen from the current room. Provided that every room was convex, this solved the VSD issue. Rooms that were not convex could be split into convex "sectors." You can see how this rendering technique might have looked if run at extra-slow speed in this video, where YouTuber Bisqwit demonstrates a renderer of his own that works according to the same general algorithm. This algorithm was successfully used in Duke Nukem 3D, released three years after Doom, when CPUs were more powerful. But, in 1993, running on the hardware then available, the Doom renderer that used this algorithm struggled with complicated levels—particularly when sectors were nested inside of each other, which was the only way to create something like a circular pit of stairs. A circular pit of stairs led to lots of repeated recursive descents into a sector that had already been drawn, strangling the game engine's speed.

Around the time that the id team realized that the Doom game engine might be too slow, id Software was asked to port Wolfenstein 3D to the Super Nintendo. The Super Nintendo was even less powerful than the IBM-compatible PCs of the day, and it turned out that the ray-marching Wolfenstein renderer, simple as it was, didn't run fast enough on the Super Nintendo hardware. So Carmack began looking for a better algorithm. It was actually for the Super Nintendo port of Wolfenstein that Carmack first researched and implemented binary space partitioning. In Wolfenstein, this was relatively straightforward because all the walls were axis-aligned; in Doom, it would be more complex. But Carmack realized that BSP trees would solve Doom's speed problems too.

Binary Space Partitioning

Binary space partitioning makes the VSD problem easier to solve by splitting a 3D scene into parts ahead of time. For now, you just need to grasp why splitting a scene is useful: If you draw a line (really a plane in 3D) across your scene, and you know which side of the line the player or camera viewpoint is on, then you also know that nothing on the other side of the line can obstruct something on the viewpoint's side of the line. If you repeat this process many times, you end up with a 3D scene split into many sections, which wouldn't be an improvement on the original scene except now you know more about how different parts of the scene can obstruct each other.
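The crucial test — which side of a partitioning line (or plane, in 3D) a point is on — is just a sign check. Here is a minimal 2D sketch; the names and the front/back convention are my own choices:

```python
def side(p, a, b):
    """Classify point p against the directed line through a and b,
    using the sign of the 2D cross product. In 3D the same test
    evaluates the plane equation instead."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    if cross > 0:
        return "front"
    if cross < 0:
        return "back"
    return "on"

# A viewer at (0, 1) is in front of the x-axis line; anything
# classified "back" cannot obstruct geometry on the viewer's side.
viewer_side = side((0, 1), (0, 0), (1, 0))   # "front"
```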

The first people to write about dividing a 3D scene like this were researchers trying to establish for the US Air Force whether computer graphics were sufficiently advanced to use in flight simulators. They released their findings in a 1969 report called "Study for Applying Computer-Generated Images to Visual Simulation." The report concluded that computer graphics could be used to train pilots, but also warned that the implementation would be complicated by the VSD problem:

One of the most significant problems that must be faced in the real-time computation of images is the priority, or hidden-line, problem. In our everyday visual perception of our surroundings, it is a problem that nature solves with trivial ease; a point of an opaque object obscures all other points that lie along the same line of sight and are more distant. In the computer, the task is formidable. The computations required to resolve priority in the general case grow exponentially with the complexity of the environment, and soon they surpass the computing load associated with finding the perspective images of the objects.2

One solution these researchers mention, which according to them was earlier used in a project for NASA, is based on creating what I am going to call an "occlusion matrix." The researchers point out that a plane dividing a scene in two can be used to resolve "any priority conflict" between objects on opposite sides of the plane. In general you might have to add these planes explicitly to your scene, but with certain kinds of geometry you can just rely on the faces of the objects you already have. They give the example in the figure below, where p1, p2, and p3 are the separating planes. If the camera viewpoint is on the forward or "true" side of one of these planes, then pi evaluates to 1. The matrix shows the relationships between the three objects based on the three dividing planes and the location of the camera viewpoint—if object ai obscures object aj, then entry aij in the matrix will be a 1.

The researchers propose that this matrix could be implemented in hardware and re-evaluated every frame. Basically the matrix would act as a big switch or a kind of pre-built z-buffer. When drawing a given object, no video would be output for the parts of the object when a 1 exists in the object's column and the corresponding row object is also being drawn.

The major drawback with this matrix approach is that to represent a scene with n objects you need a matrix of size n × n. So the researchers go on to explore whether it would be feasible to represent the occlusion matrix as a "priority list" instead, which would only be of size n and would establish an order in which objects should be drawn. They immediately note that for certain scenes like the one in the figure above no ordering can be made (since there is an occlusion cycle), so they spend a lot of time laying out the mathematical distinction between "proper" and "improper" scenes. Eventually they conclude that, at least for "proper" scenes—and it should be easy enough for a scene designer to avoid "improper" cases—a priority list could be generated. But they leave the list generation as an exercise for the reader. It seems the primary contribution of this 1969 study was to point out that it should be possible to use partitioning planes to order objects in a scene for rendering, at least in theory.
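The trade-off the researchers describe can be made concrete. In the toy Python below (the scene and all names are invented, not taken from the 1969 report), the "obscures" relation is an n × n matrix, and turning it into a size-n priority list amounts to a topological sort — which fails precisely when the scene is "improper," i.e. contains an occlusion cycle:

```python
from graphlib import TopologicalSorter, CycleError

# occludes[i][j] == 1 means object i obscures object j from the viewpoint.
# This example is a "proper" scene: 0 obscures 1, which obscures 2.
occludes = [[0, 1, 0],
            [0, 0, 1],
            [0, 0, 0]]

def priority_list(occludes):
    """Draw order, farthest first: if i obscures j, draw j before i.
    Returns None for an "improper" scene (an occlusion cycle)."""
    deps = {i: {j for j in range(len(occludes)) if occludes[i][j]}
            for i in range(len(occludes))}
    try:
        return list(TopologicalSorter(deps).static_order())
    except CycleError:
        return None

order = priority_list(occludes)   # [2, 1, 0]: farthest object drawn first
```

For the proper scene the sort yields [2, 1, 0]: the fully obscured object is drawn first and the frontmost one last; feed it a cyclic relation and no ordering exists, exactly the "improper" case the report wrestles with.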

It was not until 1980 that a paper, titled "On Visible Surface Generation by A Priori Tree Structures," demonstrated a concrete algorithm to accomplish this. The 1980 paper, written by Henry Fuchs, Zvi Kedem, and Bruce Naylor, introduced the BSP tree. The authors say that their novel data structure is "an alternative solution to an approach first utilized a decade ago but due to a few difficulties, not widely exploited"—here referring to the approach taken in the 1969 Air Force study.3 A BSP tree, once constructed, can easily be used to provide a priority ordering for objects in the scene.

Fuchs, Kedem, and Naylor give a pretty readable explanation of how a BSP tree works, but let me see if I can provide a less formal but more concise one.

You begin by picking one polygon in your scene and making the plane in which the polygon lies your partitioning plane. That one polygon also ends up as the root node in your tree. The remaining polygons in your scene will be on one side or the other of your root partitioning plane. The polygons on the "forward" side or in the "forward" half-space of your plane end up in the left subtree of your root node, while the polygons on the "back" side or in the "back" half-space of your plane end up in the right subtree. You then repeat this process recursively, picking a polygon from your left and right subtrees to be the new partitioning planes for their respective half-spaces, which generates further half-spaces and further sub-trees. You stop when you run out of polygons.
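That recipe translates almost line-for-line into code. This toy 2D version (my own names and conventions; walls are line segments, and the partitioning "planes" are the infinite lines through them) includes the wall-splitting case, since walls that cross a partition must be divided between the two half-spaces:

```python
def side_of(p, a, b):
    """Sign of the 2D cross product: +1 in front of line a->b, -1 behind, 0 on."""
    cross = (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])
    return (cross > 0) - (cross < 0)

def split(seg, a, b):
    """Split segment seg at its crossing with the infinite line a->b."""
    p, q = seg
    d1 = (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])
    d2 = (b[0]-a[0])*(q[1]-a[1]) - (b[1]-a[1])*(q[0]-a[0])
    t = d1 / (d1 - d2)                       # where the cross product hits zero
    m = (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))
    return (p, m), (m, q)

def build(walls):
    """Recursive BSP construction: the first wall becomes the partition
    (a naive choice), remaining walls are sorted into half-spaces, and
    walls crossing the partition are split in two."""
    if not walls:
        return None
    (a, b), rest = walls[0], walls[1:]
    front, back = [], []
    for seg in rest:
        s0, s1 = side_of(seg[0], a, b), side_of(seg[1], a, b)
        if s0 >= 0 and s1 >= 0:
            front.append(seg)
        elif s0 <= 0 and s1 <= 0:
            back.append(seg)
        else:                                # crosses the partition
            near, far = split(seg, a, b)     # near keeps seg's first endpoint
            (front if s0 > 0 else back).append(near)
            (front if s1 > 0 else back).append(far)
    return {"wall": walls[0], "front": build(front), "back": build(back)}

# A wall crossing the root partition gets split at the intersection point.
tree = build([((0, -2), (0, 2)), ((-1, 1), (1, 1))])
```

Note the naive choice of the first wall as each partition; as the article discusses later, a badly chosen splitter can intersect many polygons and inflate the polygon count.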

Say you want to render the geometry in your scene from back-to-front. (This is known as the "painter's algorithm," since it means that polygons further from the camera will get drawn over by polygons closer to the camera, producing a correct rendering.) To achieve this, all you have to do is an in-order traversal of the BSP tree, where the decision to render the left or right subtree of any node first is determined by whether the camera viewpoint is in either the forward or back half-space relative to the partitioning plane associated with the node. So at each node in the tree, you render all the polygons on the "far" side of the plane first, then the polygon in the partitioning plane, then all the polygons on the "near" side of the plane—"far" and "near" being relative to the camera viewpoint. This solves the VSD problem because, as we learned several paragraphs back, the polygons on the far side of the partitioning plane cannot obstruct anything on the near side.
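The traversal is short enough to sketch outright. This uses invented dict-based nodes rather than anything resembling Doom's data, but the far-side-first recursion is exactly the in-order walk described above:

```python
def side_of(p, a, b):
    """+1 if point p is in the front half-space of the line a->b, -1 back, 0 on."""
    cross = (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])
    return (cross > 0) - (cross < 0)

def paint(node, eye, draw):
    """Painter's algorithm: far half-space first, the node's own wall,
    then the near half-space, decided per node by where the eye is."""
    if node is None:
        return
    a, b = node["wall"]
    if side_of(eye, a, b) >= 0:            # eye in the front half-space
        paint(node["back"], eye, draw)     # far side first
        draw(node["wall"])
        paint(node["front"], eye, draw)
    else:                                  # eye in the back half-space
        paint(node["front"], eye, draw)
        draw(node["wall"])
        paint(node["back"], eye, draw)

# Tiny hand-built tree: a vertical root wall with one wall on each side.
tree = {"wall": ((0, -1), (0, 1)),
        "front": {"wall": ((-2, -1), (-2, 1)), "front": None, "back": None},
        "back":  {"wall": ((2, -1), (2, 1)),   "front": None, "back": None}}

order = []
paint(tree, eye=(-3, 0), draw=order.append)
# The farthest wall comes out first, the nearest last.
```

Flipping the two recursive calls at each node yields the reverse, front-to-back ordering instead.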

The following diagram shows the construction and traversal of a BSP tree representing a simple 2D scene. In 2D, the partitioning planes are instead partitioning lines, but the basic idea is the same in a more complicated 3D scene.

Step One: The root partitioning line along wall D splits the remaining geometry into two sets.

Step Two: The half-spaces on either side of D are split again. Wall C is the only wall in its half-space so no split is needed. Wall B forms the new partitioning line in its half-space. Wall A must be split into two walls since it crosses the partitioning line.

A back-to-front ordering of the walls relative to the viewpoint in the top-right corner, useful for implementing the painter's algorithm. This is just an in-order traversal of the tree.

The really neat thing about a BSP tree, which Fuchs, Kedem, and Naylor stress several times, is that it only has to be constructed once. This is somewhat surprising, but the same BSP tree can be used to render a scene no matter where the camera viewpoint is. The BSP tree remains valid as long as the polygons in the scene don't move. This is why the BSP tree is so useful for real-time rendering—all the hard work that goes into constructing the tree can be done beforehand rather than during rendering.

One issue that Fuchs, Kedem, and Naylor say needs further exploration is the question of what makes a "good" BSP tree. The quality of your BSP tree will depend on which polygons you decide to use to establish your partitioning planes. I skipped over this earlier, but if you partition using a plane that intersects other polygons, then in order for the BSP algorithm to work, you have to split the intersected polygons in two, so that one part can go in one half-space and the other part in the other half-space. If this happens a lot, then building a BSP tree will dramatically increase the number of polygons in your scene.

Bruce Naylor, one of the authors of the 1980 paper, would later write about this problem in his 1993 paper, "Constructing Good Partitioning Trees." According to John Romero, one of Carmack's fellow id Software co-founders, this paper was one of the papers that Carmack read when he was trying to implement BSP trees in Doom.4

BSP Trees in Doom

Remember that, in his first draft of the Doom renderer, Carmack had been trying to establish a rendering order for level geometry by "flooding" the renderer out from the player's current room into neighboring rooms. BSP trees were a better way to establish this ordering because they avoided the issue where the renderer found itself visiting the same room (or sector) multiple times, wasting CPU cycles.

"Adding BSP trees to Doom" meant, in practice, adding a BSP tree generator to the Doom level editor. When a level in Doom was complete, a BSP tree was generated from the level geometry. According to Fabien Sanglard, the generation process could take as long as eight seconds for a single level and 11 minutes for all the levels in the original Doom.5 The generation process was lengthy in part because Carmack's BSP generation algorithm tries to search for a "good" BSP tree using various heuristics. An eight-second delay would have been unforgivable at runtime, but it was not long to wait when done offline, especially considering the performance gains the BSP trees brought to the renderer. The generated BSP tree for a single level would have then ended up as part of the level data loaded into the game when it starts.

Carmack put a spin on the BSP tree algorithm outlined in the 1980 paper, because once Doom is started and the BSP tree for the current level is read into memory, the renderer uses the BSP tree to draw objects front-to-back rather than back-to-front. In the 1980 paper, Fuchs, Kedem, and Naylor show how a BSP tree can be used to implement the back-to-front painter's algorithm, but the painter's algorithm involves a lot of over-drawing that would have been expensive on an IBM-compatible PC. So the Doom renderer instead starts with the geometry closer to the player, draws that first, then draws the geometry farther away. This reverse ordering is easy to achieve using a BSP tree, since you can just make the opposite traversal decision at each node in the tree. To ensure that the farther-away geometry is not drawn over the closer geometry, the Doom renderer uses a kind of implicit z-buffer that provides much of the benefit of a z-buffer with a much smaller memory footprint. There is one array that keeps track of occlusion in the horizontal dimension, and another two arrays that keep track of occlusion in the vertical dimension from the top and bottom of the screen. The Doom renderer can get away with not using an actual z-buffer because Doom is not technically a fully 3D game. The cheaper data structures work because certain things never appear in Doom: The horizontal occlusion array works because there are no sloping walls, and the vertical occlusion arrays work because no walls have, say, two windows, one above the other.
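The horizontal part of that implicit z-buffer can be sketched with a per-column flag. This is a deliberate simplification — the names are mine, and Doom's real structures track clipped ranges rather than single booleans — but it shows why front-to-back order makes a full depth buffer unnecessary:

```python
# One occlusion flag per screen column (toy 8-column screen).
SCREEN_W = 8
column_filled = [False] * SCREEN_W

def draw_wall(x_start, x_end, out):
    """Draw a solid wall across columns [x_start, x_end). Because walls
    are drawn front-to-back, a column already filled by nearer geometry
    is simply skipped: nothing is ever overdrawn."""
    for x in range(x_start, x_end):
        if not column_filled[x]:
            out.append(x)
            column_filled[x] = True

drawn = []
draw_wall(2, 6, drawn)   # near wall drawn first (front-to-back order)
draw_wall(0, 8, drawn)   # far wall fills only the columns still empty
# drawn == [2, 3, 4, 5, 0, 1, 6, 7]
```

Once every flag is set, the renderer can stop entirely, which is the payoff of front-to-back order: no expensive VGA frame-buffer writes are ever wasted on pixels that would be covered later.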

The only other tricky issue left is how to incorporate Doom's moving characters into the static level geometry drawn with the aid of the BSP tree. The enemies in Doom cannot be a part of the BSP tree because they move; the BSP tree only works for geometry that never moves. So the Doom renderer draws the static level geometry first, keeping track of the segments of the screen that were drawn to (with yet another memory-efficient data structure). It then draws the enemies in back-to-front order, clipping them against the segments of the screen that occlude them. This process is not as optimal as rendering using the BSP tree, but because there are usually fewer enemies visible than there is level geometry in a level, speed isn't as much of an issue here.

Using BSP trees in Doom was a major win. Obviously it is pretty neat that Carmack was able to figure out that BSP trees were the perfect solution to his problem. But was it a genius-level move?

In his excellent book about the Doom game engine, Fabien Sanglard quotes John Romero saying that Bruce Naylor's paper, "Constructing Good Partitioning Trees," was mostly about using BSP trees to cull backfaces from 3D models.6 According to Romero, Carmack thought the algorithm could still be useful for Doom, so he went ahead and implemented it. This description is quite flattering to Carmack—it implies he saw that BSP trees could be useful for real-time video games when other people were still using the technique to render static scenes. There is a similarly flattering story in Masters of Doom: Kushner suggests that Carmack read Naylor's paper and asked himself, "what if you could use a BSP to create not just one 3D image but an entire virtual world?"7

This framing ignores the history of the BSP tree. When those US Air Force researchers first realized that partitioning a scene might help speed up rendering, they were interested in speeding up real-time rendering, because they were, after all, trying to create a flight simulator. The flight simulator example comes up again in the 1980 BSP paper. Fuchs, Kedem, and Naylor talk about how a BSP tree would be useful in a flight simulator that pilots use to practice landing at the same airport over and over again. Since the airport geometry never changes, the BSP tree can be generated just once. Clearly what they have in mind is a real-time simulation. In the introduction to their paper, they even motivate their research by talking about how real-time graphics systems must be able to create an image in at least 1/30th of a second.

So Carmack was not the first person to think of using BSP trees in a real-time graphics simulation. Of course, it's one thing to anticipate that BSP trees might be used this way and another thing to actually do it. But even in the implementation Carmack may have had more guidance than is commonly assumed. The Wikipedia page about BSP trees, at least as of this writing, suggests that Carmack consulted a 1991 paper by Chen and Gordon as well as a 1990 textbook called Computer Graphics: Principles and Practice. Though no citation is provided for this claim, it is probably true. The 1991 Chen and Gordon paper outlines a front-to-back rendering approach using BSP trees that is basically the same approach taken by Doom, right down to what I've called the "implicit z-buffer" data structure that prevents farther polygons being drawn over nearer polygons. The textbook provides a great overview of BSP trees and some pseudocode both for building a tree and for displaying one. (I've been able to skim through the 1990 edition thanks to my wonderful university library.) Computer Graphics: Principles and Practice is a classic text in computer graphics, so Carmack might well have owned it.

Still, Carmack found himself faced with a novel problem—"How can we make a first-person shooter run on a computer with a CPU that can't even do floating-point operations?"—did his research, and proved that BSP trees are a useful data structure for real-time video games. I still think that is an impressive feat, even if the BSP tree had first been invented a decade prior and was pretty well theorized by the time Carmack read about it. Perhaps the accomplishment that we should really celebrate is the Doom game engine as a whole, which is a seriously nifty piece of work. I've mentioned it once already, but Fabien Sanglard's book about the Doom game engine (Game Engine Black Book: DOOM) is an excellent overview of all the different clever components of the game engine and how they fit together. We shouldn't forget that the VSD problem was just one of many problems that Carmack had to solve to make the Doom engine work. That he was able, on top of everything else, to read about and implement a complicated data structure unknown to most programmers speaks volumes about his technical expertise and his drive to perfect his craft.

If you enjoyed this post, more posts like it come out every four weeks! Follow @TwoBitHistory on Twitter or subscribe to the RSS feed to make sure you know when a new post is out.

Previously on TwoBitHistory…

I've wanted to learn more about GNU Readline for a while, so I thought I'd turn that into a new blog post. Includes a few fun facts from an email exchange with Chet Ramey, who maintains Readline (and Bash): https://t.co/wnXeuyjgMx

— TwoBitHistory (@TwoBitHistory) August 22, 2019
  1. Michael Abrash, "Michael Abrash's Graphics Programming Black Book," James Gregory, accessed November 6, 2019, http://www.jagregory.com/abrash-black-book/#chapter-64-quakes-visible-surface-determination

  2. R. Schumacher, B. Brand, M. Gilliland, W. Sharp, "Study for Applying Computer-Generated Images to Visual Simulation," Air Force Human Resources Laboratory, December 1969, accessed November 6, 2019, https://apps.dtic.mil/dtic/tr/fulltext/u2/700375.pdf

  3. Henry Fuchs, Zvi Kedem, Bruce Naylor, "On Visible Surface Generation By A Priori Tree Structures," ACM SIGGRAPH Computer Graphics, July 1980. 

  4. Fabien Sanglard, Game Engine Black Book: DOOM (CreateSpace Independent Publishing Platform, 2018), 200. 

  5. Sanglard, 206. 

  6. Sanglard, 200. 

  7. David Kushner, Masters of Doom (Random House Trade Paperbacks, 2004), 142. 

31-Oct-19
Scarfolk Council [ 31-Oct-19 9:04am ]

Many readers will remember the two packs of Horror Top Trumps, which were first issued in 1978. What is not commonly known is that the first pack was recalled after three days, only to be re-released a month later minus one card: the Scarfolk card.

The card had proved so effective that, not only could it effortlessly beat every other card, it also killed the losing player within moments of the game ending.

Learning of the inexplicable power of the card, the government immediately issued the recall, albeit not in the interest of public safety. Instead, it coerced citizens on welfare into playing the game during home assessment visits. The government also targeted enemies of the state, using the card in so-called 'black operations' at home and abroad.

In 1979, a catastrophe was narrowly avoided when the Scarfolk card was played in a game opposite a forgery of itself. Fortunately, the game's location was sparsely populated and the only victims of the resulting dark-matter explosion were a government agent, an unknown dissenter, seven ducks and, less significantly, four coachloads of orphans* who were driven to the remote site for reasons unknown.

*The orphans were children of disgraced artists, academics and other intellectuals who disappeared during the New Truth Purges of September 1977**.

** Edit: Apparently, according to fresh information, no such purges took place.

Happy Halloween/Samhain from everyone at Scarfolk Council.
17-Oct-19
The Scarfolk Annual 197X. OUT NOW! [ 17-Oct-19 10:19am ]

The Scarfolk Annual 197X.
OUT NOW (US/Can: 10.29.2019)
Available from: Amazon (http://bit.ly/scarfolkbook), Hive, Waterstones, The Guardian Bookshop, Foyles, Wordery, Blackwells, Forbidden Planet, Barnes & Noble, Books-A-Million & others.
For more information please reread.
03-Oct-19

Just a quick note for those who, like me, need to fiddle for a few hours while the world burns.  Oh wait, that’s not quite what I meant, but anyway, if you want a distraction, here’s one: the Younger Dryas Impact Hypothesis.

The basic idea, as noted in the Wikipedia link above, is that around 12,800 years ago, a bolide either fragmented above the Earth in a sort of super-Tunguska event, or an asteroid hit (possibly under the Hiawatha glacier in Greenland, near where the Cape York meteorite was found; and yes, possibly the Cape York fragments are part of it). I'm personally partial to an asteroid strike because one of the (to me) more solid lines of evidence is a spike in platinum around the world dating from around 12,800 BP, found most recently in Africa, but basically on every continent except Antarctica.

This hypothesis is controversial of course–it should be, given the way normal science works. But I think it does clear up some mysteries. For example, it may explain why the megafaunal extinction happened around then in the Americas and northern Eurasia, and not thousands of years earlier.

Anatomically modern humans have been around for at least 300,000 years, and we evidently tried agriculture around 22,000 years ago near what's now the Dead Sea. People like to hypothesize that ancient humans were more primitive than moderns, and that's why they stayed few in number and simple in lifestyle, but I disagree. I personally think that the reason humans didn't take over the Earth hundreds of thousands of years ago was that the climate in the ice age fluctuated too radically to allow the rise of civilization. There's little point in depending on crops if they fail most years.

Anyway, during those 300,000 years, humans lived alongside big animals (megafauna), except in the Americas (settled 10,000-20,000 years ago), in Australia (settled 65,000 years ago), and in New Zealand and the Pacific (settled less than 3,000 years ago–we'll ignore this for now). My personal hypothesis before I started thinking about the Younger Dryas Impact Hypothesis was that megafaunal extinctions were due to human predation and habitat change. While that's unambiguously true in the Polynesian Islands and Madagascar (which I hate saying), it's not clear what happened in Australia and the Americas. In Australia, the aboriginal population first settled around 65,000 years BP, but the megafaunal die-off happened "rapidly thereafter" (per the biologists), starting around 46,000 years BP. This is a classic example of why biologists need to do more math: 19,000 years of coexistence is NOT rapid. Similarly in the Americas, humans lived alongside the megafauna for at least 2,000 years, if not 8,000 years, before the megafaunal extinction started "rapidly" happening. We don't blame Europeans or Asians for wiping out their mammoths and other megafauna (do you ever hear the Chinese criticized for wiping out the elephants and rhinos around Beijing 3,000 years ago? That was considerably more rapid). That's why I agree with the Native Americans and Aboriginals who say that accusations of ancient ecocide are just veiled neo-colonial attempts to justify taking their land. They're right: thousands of years of coexistence is not a short time.

And that leaves the Younger Dryas Impact. If it happened,  it presumably did not play a role in the Australian megafaunal extinction (it’s around 33,000 years too late), but it could have played a major role in the megafaunal extinctions in the northern hemisphere, and possibly into South America.  All that platinum had to come from somewhere.

One criticism leveled against the impact hypothesis is questioning why the proposed impact only killed big animals, not little ones. That's easily answered, at least if you believe Anthony Martin, author of The Evolution Underground: Burrows, Bunkers, and the Marvelous Subterranean World Beneath our Feet (BigMuddy Link, in case you want to read this really fun book). He makes the point that during extinction events, mass or otherwise, animals that can shelter underground survive disproportionately well. So if a smallish asteroid struck, especially during northern winter, it would harm everything living above the surface (e.g. the megafauna), but animals hunkered down in burrows, especially under the snow, would be proportionally less affected. That's not quite what we see, as things like bison and moose survived the possible impact, but it's a reasonable hypothesis that can be tested.

Anyway, if you want to dive down the rabbit hole for shelter, you can waste happy hours on something other than obsessing about national meltdowns in the US or UK. That's one reason I'm posting this.

The other reason to post is that I don’t know of much, well any, alt-history SF that explores worlds where the impact didn’t happen and the megafauna of the Americas and Eurasia didn’t go extinct 12,800 years ago.  As an alt-history, the changes are rather subtle, more about setting than plot, in a No Younger Dryas (NYD) world. But they could be fun.

I’m pretty sure that agriculture and civilization would have arisen in NYD as they did in our timeline, although possibly 1,000 years or more earlier  (the Younger Dryas lasted around 1,200 years).  There are multiple reasons for this confidence:

  • Agriculture arose in West Africa, Ethiopia, China, India, and possibly Southeast Asia in places where there were lots of megafauna (elephants, rhinos, lions, tigers, etc.), so having big herbivores around does not preclude people inventing agriculture.
  • Someone tried agriculture back during the preceding ice age at least once that we know of, and that was with a full panoply of biggish critters around.  They most likely failed due to climate change, not rampaging mammoths.

What would be different in a NYD world is that mammoths, rhinos, cave lions, sabertooths, and all that ilk would either be present in modern times or recently extinct in civilized lands.  This would be particularly true in the Americas, if only because the classical Mediterranean civilizations, the Medieval Europeans, and the Chinese were all pretty darn good at getting rid of their megafauna.  Colonizing the New World would have been a bit more like attempts to colonize Africa than what actually happened, with the Hudson’s Bay Company equivalent trading as much in mammoth or mastodon ivory as in beaver furs, and livestock kept at night in kraals of, perhaps, spiny osage orange branches or similar, to keep the lions away.*

Anyway, it’s something for creatives to play with, if they want to distract themselves from the current chaos. Heck, you could combine NYD with the Alt-Chinese colonizing (or attempting to colonize) the west coast of North America and introducing iron-working, first-generation firearms, and a full complement of Old World diseases to the peoples of the Pacific Coast. That would make things much, much weirder, especially if the Europeans colonized the east coast of the Americas centuries later in the timeline, so that both the diseases and the technologies had their chance to rampage around the continent.

Have fun!

*Actually, there’s a whole post I could write about beavers as ecological engineers and about how their loss from the US just prior to European settlement has given us a really distorted idea of how this continent is supposed to work.  Maybe later.


30-Sep-19
Joi Ito's Web [ 30-Aug-19 9:01am ]

If you looked at how many people check books out of libraries these days, you would see failure. Circulation, an obvious measure of success for an institution established to lend books to people, is down. But if you only looked at that figure, you'd miss the fascinating transformation public libraries have undergone in recent years. They've taken advantage of grants to become makerspaces, classrooms, research labs for kids, and trusted public spaces in every way possible. Much of the successful funding encouraged creative librarians to experiment and scale when successful, iterating and sharing their learnings with others. If we had focused our funding to increase just the number of books people were borrowing, we would have missed the opportunity to fund and witness these positive changes.

I serve on the boards of the MacArthur Foundation and the Knight Foundation, which have made grants that helped transform our libraries. I've also worked over the years with dozens of philanthropists and investors--those who put money into ventures that promise environmental and public health benefits in addition to financial returns. All of us have struggled to measure the effectiveness of grants and investments that seek to benefit the community, the environment, and so forth. My own research interest is in analysing the ways in which people currently measure impact, and perhaps in finding methods to better measure the impact of these investments.

As we see in the library example, simple metrics often aren't enough when it comes to quantifying success. They typically are easier to measure, and they're not unimportant. When it comes to health, for example, iron levels might be important, but anemia isn't the only metric we care about. Being healthy is about being nourished and thus resilient so that when something does happen, we recover quickly.

Iron levels may be a proxy for this, but they aren't the proxy. Being happy is even more complicated; it involves health but also more abstract things such as feelings of purpose, belonging to a community, security, and many other things. Similarly, while I believe rigor and best practices are important and support the innovation and thinking going into these metrics when it comes to all types of philanthropy, I think we risk oversimplifying problems and thus having the false sense of clarity that quantitative metrics tend to create.

One of the reasons philanthropists sometimes fail to measure what really matters is that the global political economy primarily seeks what is efficient and scalable. Unfortunately, efficiency and scalability are not the same as a healthy system. In fact, many things that grow quickly and without constraints are far from healthy--consider cancer. Because of our belief in markets, we tend to accept that an economy has to be growing for society to be healthy--but this notion is misguided, particularly when it comes to things we consider social goods. If we examine a complex system like the environment, for instance, we can see that healthy rainforests don't grow in overall size but rather are extremely resilient, always changing and adapting.

There is more to assessing a complex system than looking at its growth, efficiency, and the handful of other qualities that can be quantified and thus measured.

As biologists know, healthy ecosystems are robust and resilient. They can tolerate reductions in certain species populations ... until they can't. Scholars in ecology and biology have tried to model the robustness and resilience of systems in an effort to understand how to build and maintain such systems. Scientists have tried to apply these models to non-biological systems like the internet and ask questions, such as "How many and which nodes can you remove from the internet before it stops functioning?" These models are different from the mathematics economists use. Instead of relying on aggregate numbers and formulae, they use network models of nodes and links to ponder dynamics among connections in the system, rather than stocks and flows of economies.

Maybe there is something to learn from biologists and ecologists--the people who study the complex and messy real world of nature--when philanthropists are thinking about how to save the planet. We know from ecology and biology, for instance, that monocultures and simple approaches tend to be weak and fragile. The strongest systems are highly diverse and iterate quickly. When the immune system goes to war against a pathogen, the body engages in an arms race of mutations, deploying a diversity of approaches and constant iteration, communication, and coordination. Scientists also are learning that the microbiome, brain, and immune system are more integrated and complex than we ever imagined; they actually understand and tackle the more complex diseases currently beyond our scientific abilities. This research is pushing biology and computational models to a whole new and exciting level.

Many diseases, just like all of the systems that philanthropy tries to address, are complex networks of connected problems that go beyond any one specific pathway or molecule. Obesity is often described as simply a matter of managing one's calories and consequently cast as a lack of willpower on the part of an overweight individual. But it is probably more accurately understood in the context of a global food system that is incentivized by financial markets to produce low-cost, high-calorie, unhealthy, and addictive foods. Calorie counting as the primary way to lose weight has been a rule of thumb, but we are learning that healthy fats are fine while sugar calories cause insulin resistance, which often leads to diabetes and obesity. So solving the obesity problem is going to require much more than increasing or reducing any one single thing like calories. It's our food system that is unhealthy, and one result is overweight individuals.

In such a complex world, what are we to do? We need respect for plurality and heterogeneity. It's not that we shouldn't measure things, but rather that we should measure different things, have different approaches and iterate and adapt. This is how nature builds resilient networks and systems. Because we as a society have an obsession with scale and other common measures of success, researchers and do-gooders have a natural tendency to want to use simple measures (as described in our blog post) and other "gold standards" to gauge the impact of the money spent and effort expended. I would urge us to instead support greater experimentation, smaller projects, more coordination and better communication. We should surely measure indicators of negative effects--blood tests to measure what may be going wrong (or right) with our bodies are very useful for instance.

We also need to consider that every change usually has multiple effects, some positive and others negative. We must constantly look for additional side effects and dynamically adapt whatever we do. Sticking with our obesity example, there is evidence that high fat, low sugar diets, generally known as ketogenic diets, are great for losing weight and preventing diabetes; the improvement can be assessed by measuring one's blood glucose levels. However, recent studies show that this diet might contribute to thyroid problems and if we adhere to one, we must monitor thyroid function and occasionally take breaks from it.

Coming up with hypotheses about causal relationships, testing them and connecting them to larger complex models of how we think the world works is an important step. In addition, asking whether we are asking the right questions and solving the right problems, rather than prematurely focusing on solutions, is key. Jed Emerson, who pioneered early attempts to monetize the economic value of social impact, makes the same point in his recent book The Purpose of Capital.

For the last 1,300 years, the Ise Shrine in Japan has been ritually rebuilt by craftspeople every 20 years. The lumber mostly comes from the shrine's forest, managed on 200-year time scales as part of a national afforestation plan dating back centuries. The number of people working at Ise Shrine isn't growing, the shrine isn't trying to expand its business, and its workers are happy and healthy--the shrine is flourishing. Their primary concern is the resilience of the forest, rivers, and natural environment around the shrine. How would we measure their success and what can we learn from their flourishing as we try to manage our society and our planet?

It is heartening to see impact investors developing evidence-based methods to tackle the complex and critical challenges that face us. It's also heartening that capital markets and investors are supportive of investing, and in some cases even accepting reduced returns, in an effort to help tackle our big, complex challenges. We must, however, make changes in the way we fund potential solutions so that it supports a diversity of disciplines and approaches. That, in turn, will require new methods of measurement, and perhaps we can take advantage of some very old ones, such as the data from Shinto priests who have been measuring the ice on a lake for centuries. We must resist oversimplification. If we don't, we risk wasting these funds or, even worse, amplifying existing problems and creating new ones.

22-Aug-19
Two-Bit History [ 22-Aug-19 1:00am ]

I sometimes think of my computer as a very large house. I visit this house every day and know most of the rooms on the ground floor, but there are bedrooms I've never been in, closets I haven't opened, nooks and crannies that I've never explored. I feel compelled to learn more about my computer the same way anyone would feel compelled to see a room they had never visited in their own home.

GNU Readline is an unassuming little software library that I relied on for years without realizing that it was there. Tens of thousands of people probably use it every day without thinking about it. If you use the Bash shell, every time you auto-complete a filename, or move the cursor around within a single line of input text, or search through the history of your previous commands, you are using GNU Readline. When you do those same things while using the command-line interface to Postgres (psql), say, or the Ruby REPL (irb), you are again using GNU Readline. Lots of software depends on the GNU Readline library to implement functionality that users expect, but the functionality is so auxiliary and unobtrusive that I imagine few people stop to wonder where it comes from.

GNU Readline was originally created in the 1980s by the Free Software Foundation. Today, it is an important if invisible part of everyone's computing infrastructure, maintained by a single volunteer.

Feature Replete

The GNU Readline library exists primarily to augment any command-line interface with a common set of keystrokes that allow you to move around within and edit a single line of input. If you press Ctrl-A at a Bash prompt, for example, that will jump your cursor to the very beginning of the line, while pressing Ctrl-E will jump it to the end. Another useful command is Ctrl-U, which will delete everything in the line before the cursor.

For an embarrassingly long time, I moved around on the command line by repeatedly tapping arrow keys. For some reason, I never imagined that there was a faster way to do it. Of course, no programmer familiar with a text editor like Vim or Emacs would deign to punch arrow keys for long, so something like Readline was bound to be created. Using Readline, you can do much more than just jump around—you can edit your single line of text as if you were using a text editor. There are commands to delete words, transpose words, upcase words, copy and paste characters, etc. In fact, most of Readline's keystrokes/shortcuts are based on Emacs. Readline is essentially Emacs for a single line of text. You can even record and replay macros.

I have never used Emacs, so I find it hard to remember what all the different Readline commands are. But one thing about Readline that is really neat is that you can switch to using a Vim-based mode instead. To do this for Bash, you can use the set builtin. The following will tell Readline to use Vim-style commands for the current shell:

$ set -o vi

With this option enabled, you can delete words using dw and so on. The equivalent to Ctrl-U in the Emacs mode would be d0.

I was excited to try this when I first learned about it, but I've found that it doesn't work so well for me. I'm happy that this concession to Vim users exists, and you might have more luck with it than me, particularly if you haven't already used Readline's default command keystrokes. My problem is that, by the time I heard about the Vim-based interface, I had already learned several Readline keystrokes. Even with the Vim option enabled, I keep using the default keystrokes by mistake. Also, without some sort of indicator, Vim's modal design is awkward here—it's very easy to forget which mode you're in. So I'm stuck at a local maximum using Vim as my text editor but Emacs-style Readline commands. I suspect a lot of other people are in the same position.
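
If you do use the Vim mode, one mitigation for the missing indicator is worth knowing about: recent versions of Readline (6.3 and later, if I have the version right) can display the current editing mode at the start of the prompt. Enable it in ~/.inputrc:

```
# display the current editing mode at the start of the prompt
# (requires a recent Readline; has no effect on older versions)
set show-mode-in-prompt on
```

In vi mode this prefixes the prompt with an insert/command indicator, which makes it much harder to forget which mode you're in.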

If you feel, not unreasonably, that both Vim and Emacs' keyboard command systems are bizarre and arcane, you can customize Readline's key bindings and make them whatever you like. This is not hard to do. Readline reads a ~/.inputrc file on startup that can be used to configure various options and key bindings. One thing I've done is reconfigured Ctrl-K. Normally it deletes from the cursor to the end of the line, but I rarely do that. So I've instead bound it so that pressing Ctrl-K deletes the whole line, regardless of where the cursor is. I've done that by adding the following to ~/.inputrc:

Control-k: kill-whole-line

Each Readline command (the documentation refers to them as functions) has a name that you can associate with a key sequence this way. If you edit ~/.inputrc in Vim, it turns out that Vim knows the filetype and will help you by highlighting valid function names but not invalid ones!

Another thing you can do with ~/.inputrc is create canned macros by mapping key sequences to input strings. The Readline manual gives one example that I think is especially useful. I often find myself wanting to save the output of a program to a file, which means that I often append something like > output.txt to Bash commands. To save some time, you could make this a Readline macro:

Control-o: "> output.txt"

Now, whenever you press Ctrl-O, you'll see that > output.txt gets added after your cursor on the command line. Neat!

But with macros you can do more than just create shortcuts for strings of text. The following entry in ~/.inputrc means that, every time I press Ctrl-J, any text I already have on the line is surrounded by $( and ). The macro moves to the beginning of the line with Ctrl-A, adds $(, then moves to the end of the line with Ctrl-E and adds ):

Control-j: "\C-a$(\C-e)"

This might be useful if you often need the output of one command to use for another, such as in:

$ cd $(brew --prefix)

The ~/.inputrc file also allows you to set different values for what the Readline manual calls variables. These enable or disable certain Readline behaviors. You can use these variables to change, for example, how Readline auto-completion works or how the Readline history search works. One variable I'd recommend turning on is the revert-all-at-newline variable, which by default is off. When the variable is off, if you pull a line from your command history using the reverse search feature, edit it, but then decide to search instead for another line, the edit you made is preserved in the history. I find this confusing because it leads to lines showing up in your Bash command history that you never actually ran. So add this to your ~/.inputrc:

set revert-all-at-newline on

When you set options or key bindings using ~/.inputrc, they apply wherever the Readline library is used. This includes Bash most obviously, but you'll also get the benefit of your changes in other programs like irb and psql too! A Readline macro that inserts SELECT * FROM could be useful if you often use command-line interfaces to relational databases.
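
As a sketch of what that might look like (the Ctrl-T binding and the exact text are arbitrary choices here, not anything standard):

```
# hypothetical macro: Ctrl-T types a common SQL prefix for you
# (note this overrides the default transpose-chars binding)
Control-t: "SELECT * FROM "
```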

Chet Ramey

GNU Readline is today maintained by Chet Ramey, a Senior Technology Architect at Case Western Reserve University. Ramey also maintains the Bash shell. Both projects were first authored by a Free Software Foundation employee named Brian Fox beginning in 1988. But Ramey has been the sole maintainer since around 1994.

Ramey told me via email that Readline, far from being an original idea, was created to implement functionality prescribed by the POSIX specification, which in the late 1980s had just been created. Many earlier shells, including the Korn shell and at least one version of the Unix System V shell, included line editing functionality. The 1988 version of the Korn shell (ksh88) provided both Emacs-style and Vi/Vim-style editing modes. As far as I can tell from the manual page, the Korn shell would decide which mode you wanted to use by looking at the VISUAL and EDITOR environment variables, which is pretty neat. The parts of POSIX that specified shell functionality were closely modeled on ksh88, so GNU Bash was going to have to implement a similarly flexible line-editing system to stay compliant. Hence Readline.

When Ramey first got involved in Bash development, Readline was a single source file in the Bash project directory. It was really just a part of Bash. Over time, the Readline file slowly moved toward becoming an independent project, though it was not until 1994 (with the 2.0 release of Readline) that Readline became a separate library entirely.

Readline is closely associated with Bash, and Ramey usually pairs Readline releases with Bash releases. But as I mentioned above, Readline is a library that can be used by any software implementing a command-line interface. And it's really easy to use. This is a simple example, but here's how you would use Readline in your own C program. The string argument to the readline() function is the prompt that you want Readline to display to the user:

#include <stdio.h>
#include <stdlib.h>
#include "readline/readline.h"

int main(int argc, char** argv)
{
    char* line = readline("my-rl-example> ");

    /* readline() returns NULL on end-of-file (e.g. Ctrl-D on an empty line) */
    if (line == NULL)
        return 0;

    printf("You entered: \"%s\"\n", line);

    free(line);

    return 0;
}

Your program hands off control to Readline, which is responsible for getting a line of input from the user (in such a way that allows the user to do all the fancy line-editing things). Once the user has actually submitted the line, Readline returns it to you. I was able to compile the above by linking against the Readline library, which I apparently have somewhere in my library search path, by invoking the following:

$ gcc main.c -lreadline

The Readline API is much more extensive than that single function of course, and anyone using it can tweak all sorts of things about the library's behavior. Library users can even add new functions that end users can configure via ~/.inputrc, meaning that Readline is very easy to extend. But, as far as I can tell, even Bash ultimately calls the simple readline() function to get input just as in the example above, though there is a lot of configuration beforehand. (See this line in the source for GNU Bash, which seems to be where Bash hands off responsibility for getting input to Readline.)

Ramey has now worked on Bash and Readline for over two decades. He has never once been compensated for his work—he is and has always been a volunteer. Bash and Readline continue to be actively developed, though Ramey said that Readline changes much more slowly than Bash does. I asked Ramey what it was like being the sole maintainer of software that so many people use. He said that millions of people probably use Bash without realizing it (because every Apple device runs Bash), which makes him worry about how much disruption a breaking change might cause. But he's slowly gotten used to the idea of all those people out there. He said that he continues to work on Bash and Readline because at this point he is deeply invested and because he simply likes to make useful software available to the world.

You can find more information about Chet Ramey at his website.

If you enjoyed this post, more posts like it come out every four weeks! Follow @TwoBitHistory on Twitter or subscribe to the RSS feed to make sure you know when a new post is out.

Previously on TwoBitHistory…

Please enjoy my long overdue new post, in which I use the story of the BBC Micro and the Computer Literacy Project as a springboard to complain about Codecademy.https://t.co/PiWlKljDjK

— TwoBitHistory (@TwoBitHistory) March 31, 2019
20-Aug-19
ART WHORE [ 20-Aug-19 11:45am ]

A reply to the baseless accusations of Lee Holmes of Clones of Bruce Lee.

I've no wish to draw others into your attempt to create a spat, so I will not bother to cover all the issues raised by your brickbat on pages I do not run, regardless of how obsessively you repost your rant on social media. Here no one else need be involved, unless they choose to involve themselves. So let's go through your preposterous claims. You write:

"I must say I am pretty annoyed at the reference to me in the book. The author seems to be obsessed with trying to put down other writers who have delved into this genre in some sort of attempt to make himself out as the more superior researcher."

Here's most of what I have to say about you: "Within Brucesploitation and the related Chansploitation phenomena, actors who copy and clone Bruce Lee or Jackie Chan make up one strand of these subgenres, but their importance can and has been over-stated. This is evident not just from the title of the book Here Come The Kung Fu Clones by Carl Jones, but also the UK fan site Clones of Bruce Lee run by Lee Holmes. Both Jones and Holmes treat Bruce Liang as a clone. My own view is that when Liang appears as Bruce Lee in The Dragon Lives Again (1977) he is there as an actor playing the Little Dragon in the underworld after death rather than a clone; this is emphasised by dialogue in the English dub addressing head on the fact that Liang doesn't look like Bruce Lee…. Movies such as The Black Dragon's Revenge (1975), with a narrative that revolves around a fictional investigation into the death of Bruce Lee, belong to the Brucesploitation genre without even featuring a clone so copyists are not essential to this film category. Lee Holmes on his Clones website at one time listed Black Dragon's Revenge supporting actor Charles Bonet as a Bruce Lee clone, but given this martial artist's karate leanings and rejection of kung fu, this is not a claim I take at all seriously. I would further argue that those who see figures like Bonet as clones do so because they approach Brucesploitation in thrall to the misleading idea that copyists define it. Tadashi Yamashita, sometimes called Bronson Lee after a character he played, is another example of a karateka I do not accept as a Bruce Lee clone; despite Jones and Holmes - among others - mistakenly asserting he is one."

Seeing this, any intelligent reader will immediately realise that your claim that I want to pose as "the more superior researcher" is based on a basic category error. The passage above is focused more on interpretation than research and I certainly wouldn't damn myself with faint praise by claiming to be a superior theorist to you because you are not a theorist at all. Likewise, your clumsy attempt at commentary on something you failed to fully understand might be cited as evidence that I am a superior writer to you; sadly your prose as quoted in the present paragraph is so clunky that this hardly requires pointing out. While I may be putting you down now for a ridiculously over-sensitive and stupid response to Re-Enter The Dragon, this was not what I was doing in the book when I laid out the differences between my positions on Brucesploitation as a genre and the dominant discourse on it to date, of which your website simply provides an example. If you don't want your views of Brucesploitation to be met with anything other than agreement then you'd be best advised not to air them in public, or indeed private.

You write: "…who doesn't think that Fist of Unicorn should be categorised as Bruceploitation? This not some big revelation."

Newsflash for Lee Holmes, billions of people in the world have never heard of Fist of Unicorn or Brucesploitation, and it is therefore extremely unlikely they think a film of which they are unaware should be categorised as part of a genre they aren't familiar with. However if you look at what I say in regard to this in context then it is also obvious that I'm not claiming this as some 'big revelation' but rather deploying it as part of a broader argument: "I have seen it falsely asserted in a number of places - including Wikipedia - that Brucesploitation movies attempted to exploit interest in Bruce Lee after his death. Fist of Unicorn (1973) can and should be treated as part of the genre, and it was made and released before Lee died on 20 July 1973…" In case you want to check the Wikipedia entry, although it appears you don't bother to fact check anything very much (see below), there is an archived version of the page here: https://web.archive.org/web/20181102091239/https://en.wikipedia.org/wiki/Bruceploitation

Incidentally if you think Fist of Unicorn is Brucesploitation then you implicitly support my argument that the genre predates the Little Dragon's death, and Wikipedia - among others - was wrong to claim it is made up of movies shot after 20 July 1973. Note that this Wikipedia entry opens with various errors I am attempting to correct in Re-Enter The Dragon: "Bruceploitation (a portmanteau of Bruce Lee and exploitation) refers to the practice on the part of filmmakers in mainland China, Hong Kong, and Taiwan of hiring Bruce Lee look-alike actors ("Lee-alikes") to star in many imitation martial arts films in order to cash in on Lee's success after his death." Alongside the dating error in this opening sentence, there are the misleading assertions that Brucesploitation is characterised by look-alike actors (or clones to use the term found in the title of your website) and about the geographical areas that produced such films (which, of course, also include The Philippines, Korea, Indonesia, Japan and the USA). The claim that Brucesploitation movies are 'imitation martial arts films' is particularly silly; in my experience most of those interested in the genre currently consider them to be actual martial arts films rather than imitation fight flicks. That said, such a slippage does serve to illustrate the damage the clone fallacy does to a proper understanding of the genre.

Wikipedia entries are highly ranked by search engines and are influential, therefore misconceptions within them and the sources they draw upon and link to - including in the instance of the one on 'Bruceploitation' your website - need to be challenged, which is what I've been doing. I would also point out that this Wikipedia entry has for some time contained a link to a review of the Carl Jones book Here Come The Kung Fu Clones that I wrote and published in 2012, and that my understanding of Brucesploitation has changed since then; although I would stand by the review's premise that Jones in his book was confused about the Bruce Le filmography - this is reiterated in less detail in Re-Enter The Dragon.

You say: "I also don't think anyone has ever said that Bruce Lee A Dragon Story is the first Bruceploitation movie, it is the first Bruce Lee Bio-pic."

The top two entries of the web search I just did for Bruce Lee: A Dragon Story (1974) both addressed the matter of it being the 'first' Brucesploitation movie. I got live links for Wikipedia and Hong Kong Movie Database but I'm providing archived ones here:

"Bruce Lee: A Dragon Story… is a 1974 Bruceploitation film starring Bruce Li…. The film is notable for being the first biopic of Bruce Lee (it was released the year following his death), the debut film of notorious Lee imitator Bruce Li, and the first film in the Bruceploitation genre."https://web.archive.org/web/20190626211837/https://en.wikipedia.org/wiki/Bruce_Lee:_A_Dragon_Story

"Bruce Lee: A Dragon Story is thought to be the first entry in the extraordinary genre of what are known as "Brucesploitation" films." https://web.archive.org/web/20120710022900/http://hkmdb.com/db/movies/reviews.mhtml?id=9646&display_set=eng

You say: "…how do you know my opinions on Bruce Leung Siu-Lung or Tadashi Yamashita and how they fit into Bruceploitation? I've never published a profile on them on my site. If you wanted my opinion on them, here is a radical idea, you could have just asked me!"

I assume it is narcissism that makes you think I'd be interested in your opinions. To clarify, I couldn't give a flying fuck about your opinions on Bruce Liang (AKA Bruce Leung Siu-Lung), Tadashi Yamashita, or anything else for that matter. My book dealt with Brucesploitation as a genre and that meant I needed to address the discourse(s) that create and shape it, and unfortunately your website is a part of this and is publicly accessible. On your site you have a page dedicated to 'lesser known stars of Bruceploitation', where you mention three major clones and go on to provide a list of others who were 'impersonating The Little Dragon'. You include both Bruce Liang (AKA Bruce Leung Siu-Lung) and Tadashi Yamashita on this list and therefore effectively treat them as clones. It would have been completely redundant to ask you about this because you'd already implicitly stated your position online. In case you've forgotten what's on your own website here's an archived version of the page: https://web.archive.org/web/20190819111923/http://clonesofbrucelee.info/enter-another-dragon/

You say: "And why would anyone classify Mission Terminate as a Bruceploitation movie? It is only included on my site due to the fact that it features Bruce Le and I cover his entire filmography."

If you cover Bruce Le's entire filmography, why am I unable to find coverage of it all on your site? For example, I can find nothing about Treasure of Bruce Lee or My Name Called Bruce. When I use the search engine on your site for these films it produces no results, see screenshots below. It's claims like this, which I'm unable to substantiate, that lead me to suspect you may be a habitual liar. Since I've never been able to find coverage of ALL Bruce Le's films on your site, your sorry justification isn't exactly convincing. There's nothing on the page containing the Richard Norton interview to suggest you see Mission Terminate as anything other than Brucesploitation. That page is archived here: https://web.archive.org/web/20190819112551/http://clonesofbrucelee.info/richard-norton/

Your homepage explicitly states: "This website is dedicated to Bruce Lee exploitation cinema, or 'Bruceploitation' as it has become to be known." This is at the top of the page in capital letters and it is therefore reasonable for anyone visiting the site to conclude that you consider anything on it - such as the coverage of Mission Terminate - to be Brucesploitation, unless you explicitly state otherwise. BTW: your sentence construction is shockingly bad and you really ought to rewrite the dreadful 'as it has become to be known' since this sloppy phrasing is very visible on the page. In case you've forgotten what's on your homepage there's an archived version of it here: https://web.archive.org/web/20190209093714/http://clonesofbrucelee.info/

You write: "I applaud anyone who goes to the effort to bring out a book on this genre that I love I just don't see why you think you had to include my name, and other writers (e.g. Carl Jones) in such a negative way to try make yourself and your book look better. As a fan and researcher of this genre for more than 30 years I wouldn't see the need to try and put down you in anything I write. My research into the genre consists of more than merely watching what i can find online or purchased from the poundshop and writing a basic plot line and sticking it in a book."

This self-refuting passage really made me laugh. You are attempting to put me down in your brickbat, and it is something you've written, so why pointlessly contradict yourself within it by rhetorically stating: "I wouldn't see the need to try and put down you in anything I write…" You appear incapable of making or sustaining a coherent argument or writing a well-constructed sentence. Likewise, some of the absurd errors on your part addressed here rather belie your claims to have been researching 'this genre for more than 30 years'. It would appear that what you call 'research' consists mostly of spouting the first piece of bullshit that enters your head and deluding yourself into thinking no one will notice you're utterly clueless. Likewise your claim that me 'putting you down' will make me or my book 'look better' is ridiculous, since you're a complete twit who is utterly incapable of making me or anyone else 'look better' by comparison. I also hope it's clear by now I wasn't putting you down in my book even if I am now. I'm doing that here to demonstrate the difference between civil critical engagement with your website - which is my stance towards it in Re-Enter The Dragon - and personalised refutation with humorous insults, which as I trust this reply illustrates is a style of address that I am also familiar with and can deploy as and when necessary. It would be great if this eventually helped you to understand the difference between the two, although at present that seems rather unlikely.

You say: "And one final thought, I've never seen Bruceploitation spelt "Brucesploitation". I've no idea where you got that idea from."

No shit Sherlock! I discuss the variant spellings of Brucesploitation in Re-Enter The Dragon and if as you claim you've been researching the genre for 30 years then you really ought to have seen the spelling I use elsewhere. Either you're lying or you haven't done any serious research, or both. I'm going to give you one example of the Brucesploitation spelling being used here but you can find many more by doing a simple web search, assuming - of course - you're not too simple to use a search engine: https://www.grindhousedatabase.com/index.php/Brucesploitation


10-Aug-19
Lauren Weinstein's Blog [ 10-Aug-19 6:15pm ]

Rumors are circulating widely — and some news sources claim to have seen actual drafts — of a possible Trump administration executive order aimed at giving the government control over content at large social media and other major Internet platforms. 

This effort is based on one of the biggest lies of our age — the continuing claims mostly from the conservative right (but also from some elements of the liberal left) that these firms are using politically biased decisions to determine which content is inappropriate for their platforms. That lie is largely based on the false premise that it’s impossible for employees of these firms to separate their personal political beliefs from content management decisions.

In fact, there is no evidence of political bias in these decisions at these firms. It is completely appropriate for these firms to remove hate speech and related attacks from their platforms — most of which does come from the right (though not exclusively so). Nazis, KKK, and a whole array of racist, antisemitic, anti-Muslim, misogynistic, and other violent hate groups are disproportionately creatures of the political right wing. 

So it is understandable that hate speech and related content takedowns would largely affect the right — because they’re the primary source of these postings and associated materials. 

At the scales that these firms operate, no decision-making ecosystem can be 100% accurate, and so errors will occur. But that does not change the underlying reality that the “political bias” arguments are false. 

The rumored draft Trump executive order would apparently give the FCC and FTC powers to determine if these firms were engaging in “inappropriate censorship” — the primary implied threat appears to be future changes to Section 230 of the Communications Decency Act, which broadly protects these (and other) firms and individuals from liability for materials that other parties post to their sites. In fact, 230 is effectively what makes social media possible in the first place, since without it the liability risks of allowing users to post anything publicly would almost certainly be overwhelming. 

But wait, it gets worse!

At the same time that these political forces are making the false claims that content is taken down inappropriately from these sites for political purposes, governments and politicians are also demanding — especially in the wake of recent mass shootings — that these firms immediately take down an array of violent postings and similar content. The reality that (for example) such materials may be posted only minutes before shootings occur, and may be widely re-uploaded by other users in an array of formats after the fact, doesn’t faze the politicians and others making these demands, who apparently either don’t understand the enormous scale on which these firms operate, or simply don’t care about such truths when they get in the way of politicians’ political pandering.

The upshot of all this is an insane situation — demands that offending material be taken down almost instantly, but also demands that no material be taken down inappropriately. Even with the best of AI algorithms and a vast human monitoring workforce, these dual demands are in fundamental conflict. Individually, neither is practical. Taken together, they are utterly impossible.

Of course, we know what’s actually going on. Many politicians on both the right and left are desperate to micromanage the Net, to control it for their own political and personal purposes. For them, it’s not actually about protecting users, it’s mostly about protecting themselves. 

Here in the U.S., the First Amendment guarantees that any efforts like Trump’s will trigger an orgy of court battles. For Trump himself, this probably doesn’t matter too much — he likely doesn’t really care how these battles turn out, so long as he’s managed to score points with his base along the way. 

But the broader risks of such strategies attacking the Internet are enormously dangerous, and Republicans who might smile today about such efforts would do well to imagine similar powers in the hands of a future Democratic administration. 

Such governmental powers over Internet content are far too dangerous to be permitted to any administration of any party. They are anathema to the very principles that make the Internet great, and they must not be permitted to take root under any circumstances.

–Lauren–

07-Aug-19
Joi Ito's Web [ 7-Aug-19 2:27pm ]

Ethan Zuckerman thoughtfully and appropriately points out that one big missing question in my recent Wired piece on measuring philanthropic impact is whether some of this positive societal change should be in the hands of government instead of philanthropists. He correctly points out that since the Reagan/Thatcher era of the 80s, we've started shrinking the role of government and have started to see big philanthropists and the private sector being called on to do what government used to do. In a post from 2013, Ethan wonders why he doesn't have a rail solution to his commuting problem from Western Massachusetts. He suggests that without government, things like railway systems are difficult to fund - the market isn't the best solution for many social goods.

I think the question of whether we should be doubling down on philanthropy or fixing government and increasing government resources is a great one, and probably the right one. Fixing government and turning the corner on privatization is a daunting prospect, but it's something we need to discuss.

30-Jul-19
Lauren Weinstein's Blog [ 30-Jul-19 6:31pm ]

Another day, another massive data breach. This time some 100 million people in the U.S. were affected, and millions more in Canada. Reportedly the criminal hacker gained access to data stored on Amazon’s AWS systems. The fault was apparently not with AWS itself, but with a misconfigured firewall associated with an app at Capital One, the bank whose customers were the victims of this attack.

Firewalls can be notoriously and fiendishly difficult to configure correctly, and often present a target-rich environment for successful attacks. The thing is, firewall vulnerabilities are not headline news — they’re an old story, and better solutions to providing network security already exist.

In particular, Google’s “BeyondCorp” approach (https://cloud.google.com/beyondcorp) is something that every enterprise involved in computing should make itself familiar with. Right now!

BeyondCorp techniques are how Google protects its own internal networks and systems from attack, with enormous success. In a nutshell, BeyondCorp is a set of practices that effectively puts “zero trust” in the networks themselves, moving access control and other authentication elements to individual devices and users. This eliminates the need for traditional firewalls (and in most instances, VPNs) because there is no longer a conventional firewall which, once breached, gives an attacker access to all the goodies.
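The post describes BeyondCorp only at a high level. As a toy illustration of the zero-trust idea (all names and checks here are my own invention, not Google's actual implementation), the key point is that every access decision is made from user identity and device state, never from which network the request arrived on:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    authenticated: bool
    permissions: set = field(default_factory=set)

@dataclass
class Device:
    managed: bool   # enrolled in the device inventory
    patched: bool   # passes device posture checks

def allow_request(user: User, device: Device, resource: str) -> bool:
    """Zero-trust style check: decided per request from user identity and
    device state; network location never grants access (no perimeter)."""
    return (user.authenticated
            and device.managed and device.patched
            and resource in user.permissions)

# A legitimate user on an unmanaged laptop is refused,
# even if the laptop is "inside" the corporate network.
alice = User("alice", True, {"payroll-app"})
print(allow_request(alice, Device(managed=False, patched=True), "payroll-app"))  # -> False
```

A misconfigured firewall has no equivalent failure mode here: there is no single perimeter rule whose mistake exposes everything behind it.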

If Capital One had been following BeyondCorp principles, 100+ million of their customers wouldn’t be in a panic today.

–Lauren–

Joi Ito's Web [ 22-Jul-19 9:24pm ]

I decided to write my column this month in Wired about impact investing and the opportunity to bring new perspectives to the space. As I wrote the piece and started to negotiate with my truly great editor at Wired, I got feedback that it was a bit dense, jargony and wonky. My colleague Louis Kang was doing a lot of research for the article, so I decided to move the nitty-gritty details from the Wired piece to this co-authored "explainer" essay. This essay, now a companion piece to the Wired column, is an overview of what impact investing is, describing some different ways that we currently measure impact and some of the concerns we have with these measurement methods. The Wired article discusses my observations of this field and provides some suggestions on how we might better measure impact.

- Joi


Impact Investment Metrics and Their Limitations

By Joi Ito and Louis Kang

As the pile of philanthropic money aimed at solving the world's problems grows, the desire for assessment and rigor has pushed experts to develop metrics to measure impact and success.

But our world's biggest problems -- climate change, poverty, global health, social instability -- don't easily lend themselves to measurement. They are complex self-adaptive systems that are irreducible to simple metrics and mathematics. In fact, it's simple math and the hyper-efficient optimizations of the financial markets that have caused most of these problems in the first place. Consider, for example, capital markets that focus much more on shareholders than other stakeholders, which has caused the extraction and exploitation of natural resources; the efficient production of cheap calories that has contributed to obesity; mass consumption that has led to climate change; and Internet and social media platforms that have amplified hate speech and new forms of adversarial attacks. Are modern foundations and financial institutions armed with quants and global development principles, such as the UN's Sustainable Development Goals, enough to tackle such complex challenges? I don't think so.

Philanthropy as a concept has existed for centuries. The U.S. Internal Revenue Service began providing tax benefits for charitable gifts in the early 1900s, and since then, philanthropy has continued to grow and become more sophisticated.

At the MacArthur Foundation, where I serve on the Board of Directors, "impact investing" emerged in the early 1980s as a way to channel capital to communities plagued by underinvestment and spur the growth of revenue-generating nonprofits and social-purpose businesses. Around this time, Nobel Peace Prize winner Muhammad Yunus founded the Grameen Bank on the principle that loans are more effective than charity to disrupt poverty, and it started by offering tiny loans to impoverished entrepreneurs, which we now know as microfinance. Since then, new types of investment capital and assets, as well as financing and organizational structures and impact measurement practices, have emerged to better engage in the active creation of positive impact. Although the purpose and practice of impact investing are continuously revisited and refined, the core idea is to unlock more traditional investment capital to contribute to solving the world's problems. Today, more than 1,340 organizations manage roughly $500 billion in impact investing assets worldwide.

Many companies now proactively claim to be public benefit companies or are undergoing certification by B-Lab to qualify as B-Corps. These include Patagonia and a company that I invested in, Kickstarter. These companies claim to use, and sometimes disclose, auditable measures of their non-financial societal impact. In addition to companies like these, there is a push among more mainstream businesses to go beyond mere measures of financial success and assess their societal or environmental impacts with a "triple bottom line." Although impact investing has largely been seen as a philanthropic activity, which by definition is prone to accepting little or no return on investment, many traditional impact funds and investors now assert that they are designing investment practices to achieve market-level returns on investments and meet positive impact targets. According to one Global Impact Investing Network (GIIN) report, 49 such funds have, on average, achieved an 18.9 percent return on equity-based impact investments in emerging markets. Recently, we've seen more established institutional investors, such as Goldman Sachs, KKR and Bain Capital, to name just a few, become active in the impact investing scene.

Texas Pacific Group (TPG) has created an impact investment fund called the Rise Fund with the help of The Bridgespan Group. The Rise Fund has devised a method that attempts to calculate the economic value of impact, called the Impact Multiple of Money, or IMM. IMM is one of a growing number of models and protocols, each of which comes with pros and cons, used to assess non-financial impact. The Rise/Bridgespan method generates an economic estimate of the social impact of an investment by first estimating the number of people impacted by it using relevant scientific studies and multiplying that number by the U.S. "value of life" of $5.4 million, as calculated by the U.S. Department of Transportation to quantify "the additional cost that individuals would be willing to bear for improvements in safety (that is, reductions in risks)." This dollar value of the investment's impact is then adjusted by multiplying it by something called the "probability of impact realization," an estimated probability of achieving the expected impact calculated based on a review of relevant scientific studies. Using this number, Rise then projects the investment's Net Present Value, or NPV, using an estimated annual discount rate set by itself. Finally, the NPV is multiplied by the percent of the company's overall equity owned by Rise to figure out how much of the impact Rise is accountable for, which is then divided by the investment amount to determine the IMM (see this HBR case about an alcoholism program that is part of the Rise Fund as an example). For example, if Rise invested $10 million for 50 percent of the equity in a venture and the NPV of the impact is $100 million, Rise determines that $50 million ($100 million multiplied by 50 percent) is the value of the impact for which it can claim credit. Its IMM would then be five times its investment, or $50 million divided by $10 million, the amount it spent to make the investment. In this example, the IMM of five exceeds the three-times minimum IMM for the Rise Fund.
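The closing step of that arithmetic can be sketched in a few lines of Python (a minimal sketch of the worked example above; the function name and parameters are mine, not Rise's):

```python
def imm(impact_npv, equity_share, investment):
    """Final step of the IMM arithmetic described above: the fund claims
    credit for its equity share of the impact's net present value,
    then divides by the amount invested."""
    return impact_npv * equity_share / investment

# Worked example from the text: $10M invested for 50% equity and an
# impact NPV of $100M gives an IMM of 5x, above the fund's 3x minimum.
print(imm(100_000_000, 0.50, 10_000_000))  # -> 5.0
```

The upstream inputs (people impacted, the $5.4 million "value of life", the probability of impact realization, the discount rate) all feed into `impact_npv` before this step.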

Robin Hood, which claims to be New York's largest poverty-fighting organization, takes a similar approach to the IMM. It uses a Benefit-Cost Ratio (BCR) to "assign a dollar figure to the amount of philanthropic good that a grant does" and focuses solely on improving the Quality-Adjusted Life Year (QALY). Robin Hood's metrics are demonstrated over 163 different cases, which can be found here. For example, the BCR of Robin Hood's support for a substance abuse treatment program was calculated by first counting the number of individuals who received the treatment as reported by their grantee. Robin Hood staff then estimated three factors: what percent of these individuals received the treatment solely because of their support; how much the QALY was reduced due to substance abuse; and how much the QALY was improved due to intervention. Suppose the treatment program reached 1,000 people, and Robin Hood estimates that it is only accountable for 10 percent of them. Of those 100 people, QALY reduction due to substance abuse is 10 percent and QALY improvement due to intervention is 20 percent. Robin Hood multiplies 100 by 10 percent, then by 20 percent, and finally by $50,000 (the QALY value as determined by its staff) to argue that the BCR of the program is approximately $100,000.
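Robin Hood's chain of multiplications can be sketched the same way (the function and parameter names here are illustrative, not Robin Hood's own terminology):

```python
def program_benefit(people_reached, attribution, qaly_loss, qaly_gain, qaly_value):
    """Dollar benefit per the Robin Hood arithmetic described above."""
    attributed = people_reached * attribution  # people Robin Hood claims credit for
    return attributed * qaly_loss * qaly_gain * qaly_value

# Worked example from the text: 1,000 people reached, 10% attribution,
# 10% QALY loss from substance abuse, 20% improvement from intervention,
# and $50,000 per QALY, giving roughly $100,000.
print(program_benefit(1000, 0.10, 0.10, 0.20, 50_000))
```

Note how each estimated percentage multiplies straight through, so the final figure is only as reliable as the weakest of the staff's three estimates.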

Not all of the new approaches attempt to measure impact using a dollar value. My college classmate and collaborator on many projects, Pierre Omidyar, has been an influential leader in impact investing through the work of his organization, the Omidyar Network (ON). The ON funds companies and intermediaries that also provide some social benefit. Over the years, the ON has developed and articulated a variety of methodologies to describe how it measures and categorizes opportunities and risks in funding socially beneficial companies. It has also recently experimented with Acumen's "lean data" approach, which seeks to allow rapid iteration in social enterprises in the same way start-ups iterate. Acumen has developed software tools to survey beneficiaries of impact investments and calculate an average Net Promoter Score (NPS), which reflects a combination of many factors. NPS is a method originally developed to measure customer satisfaction in marketing. Via Acumen's platform, the ON surveyed 36 investees and more than 11,500 customers the investees reached across 18 countries, and received an NPS score of 42 (for comparison, Apple's NPS is 72). And with its surveying capability, the ON contends that its investments improved the quality of life of about 74 percent of its customers.
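NPS itself is a simple formula: the percentage of promoters (those answering 9 or 10 on a 0-10 "would you recommend" scale) minus the percentage of detractors (those answering 0 through 6). A minimal sketch, using made-up ratings rather than Acumen's actual survey data:

```python
def net_promoter_score(ratings):
    """Standard NPS: percent promoters (9-10) minus percent detractors (0-6)
    on a 0-10 recommendation scale. Scores of 7-8 are passives and
    contribute only to the denominator."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# 6 promoters, 2 passives, and 2 detractors out of 10 respondents:
print(net_promoter_score([10, 9, 9, 10, 9, 9, 8, 7, 5, 3]))  # -> 40.0
```

The score ranges from -100 (all detractors) to +100 (all promoters), which is why a 42 can be compared directly against Apple's 72.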

Having reviewed various impact measurement techniques that are practiced today, now ask yourself: IMMs, BCRs, and NPSs -- do these numbers truly reflect what impact means? Understanding impact through measurement has importance, but we must be careful not to oversimplify complex systems into reducible metrics and lose sight of the intricate dynamics of the world. "Of course, many of us who want to see impact investing attain real scale would welcome the simplicity of an 'Impact Earnings Per Share' calculation or other simplified way to compare relative impact of competing investment opportunities -- but being able to advance a simple metric and having a metric framework that actually helps us assess our true impact and value contribution are two different things," Jed Emerson, who invented the framework Social Return on Investment (SROI), recently told me. "As we know from the history of economics and finance, a single metric can't reflect more nuanced aspects of value or impact. Simplified metrics tell us how we are thinking today but not what we truly seek to know."

Impact measurement is still a nascent field. Understanding impact is fragmented, sometimes misguided, and often inadequate. This makes evaluating and generating impact highly inefficient. We need more clarity and transparency, as well as robust scholarship to study and maximize the impact of philanthropy and impact investing. To address this, I've started to discuss and develop new methods to measure impact. My early thoughts and suggestions are introduced in this upcoming Wired article.

07-Jul-19

I would like to suggest a new word.

Anthropocosmos, n. and adj. Chiefly with "the." The epoch during which human activity is considered to be a significant influence on the balance, beauty, and ecology of the entire universe.

Based on ...

Anthropocene, n. and adj. Chiefly with "the." The era of geological time during which human activity is considered to be the dominant influence on the environment, climate, and ecology of the earth. --The Oxford English Dictionary

As we become painfully aware of the extent to which human activity is influencing the planet and its environment, we are also accelerating into the epoch of space exploration. Not only will our influence substantially affect the future of this blue dot we call Earth, but also our never-ending desire to explore and expand our frontiers is extending humanity's influence on the cosmos. I think of it as the Anthropocosmos, a term that captures the idea of how we must responsibly consider our role in the universe in the same way that Anthropocene expresses our responsibility for this world.

The struggle to protect the commons--the public spaces and resources we all depend on, like the oceans or Central Park--is not a new problem. Shepherds grazing sheep on shared land without consideration for other flocks will soon find grass growing thin. We already know that farming and the timber industry deplete the forests, and the destruction of that commons in turn affects the commons that is the air we breathe. These are versions of the same problem--the tragedy of the commons. It suggests that, left unchecked, self-interest can deplete resources that support the common good.

Joi Ito is an Ideas contributor for WIRED, and his association with the magazine goes back to its inception. He is coauthor with Jeff Howe of Whiplash: How to Survive Our Faster Future and director of the MIT Media Lab.

The early days of the internet were an amazing example of people and organizations from a variety of sectors coming together to create a global commons that was self-governed and well-managed by those who built it. Similarly, we're now in an internet-like moment in which we can imagine an explosion of innovation in space, our ultimate commons, as nongovernment groups, companies, and individuals begin to drive progress there. We can learn from the internet--its successes and failures--to create a generative and well-managed ecosystem in space as we grow into our responsibility as stewards of the Anthropocosmos.

Like the internet, space exploration has been mostly a government-vs.-government race and a government-with-government collaboration. The internet started out as Arpanet, which was funded by the Department of Defense's Advanced Research Projects Agency and operated by the military until 1990. A great deal of anxiety and deliberation went into the decision to allow commercial and nonresearch uses of the network, much as NASA extensively deliberated over opening the doors to "public-private partnership" leading up to the Commercial Crew Program launch in 2010. This year is the 50th anniversary of the Apollo 11 mission that put men on the moon, a multibillion-dollar effort funded by US taxpayers. Today, the private space industry is robust, and private firms compete to deliver payloads, and soon, put people into orbit and on the moon.

The state of the development of the space industry reminds me of where the internet was in the early '90s. The cost of putting a satellite into orbit has fallen from supercomputer-level costs and design cycles to just a few thousand dollars, similar to the cost of a fully loaded personal computer. In many ways, SpaceX, Blue Origin, and Rocket Lab are like UUNET and PSINet[1]--the first commercial internet service providers--doing more efficiently what government-funded research networks did in the past.

[1] Disclosure: I was at one point an employee of PSINet and the CEO of PSINet Japan.

When these private, for-profit ISPs took over the process of building out the internet into a global network, we saw an explosion of innovation--and a dot-com bubble, followed by a crash, and then another surge after the crash. When we were connecting everyone to the internet, we couldn't imagine all the possible things--good and bad--that it would bring. In the same way, space development will most likely expand far beyond the obvious--mining, human settlements, basic research--to many other ideas. The question now is, how can we direct the self-interested businesses that will undoubtedly power entrepreneurial expansion, growth, and innovation in space toward the shared, long-term health of the space commons?

In the early days of the internet, everyone pitched in like people tending a community garden. We were a band of jolly pirates on a newly discovered island paradise far away from the messiness of the real world. In "A Declaration of the Independence of Cyberspace," John Perry Barlow even declared cyberspace a new place, saying "We are forming our own social contract. This governance will arise according to the conditions of our world, not yours." His utopian idea, which I shared at the time, is now echoed by some of today's spacebound entrepreneurs who dream of settling Mars or deploying terraforming pods on planets across the galaxy.

While it wasn't obvious how life on the internet would play out when we were building the early infrastructure, back then academics, businesses, and virtually anyone else who was interested worked on its standards and resource allocation. We created governance mechanisms in communities like ICANN for coordination and dispute resolution, run by people dedicated to the protection and flourishing of the internet commons. In short, we built the foundations on which everyone could develop businesses and communities. At least in the beginning, the internet effectively harnessed the self-interest of commercial players and money from the markets to develop open protocols, free for everyone to use, that the communities designed. In the early 1990s, the internet was one of the best examples of a well-managed commons, with no one controlling it and everyone benefiting from it.

A quarter-century on, cyberspace hasn't evolved into the independent, self-organized utopia that Barlow envisioned. As the internet "democratized," new users and entrepreneurs who weren't involved in its genesis joined. It was overrun by people who didn't think of themselves as pirate gardeners tending the sacred network that supported this idealistic cyberspace--our newly created commons. They were more interested in the products and services created by companies, and those companies often cared less about ideals than about making returns for their investors. On the early internet, for example, people ran their own web servers, fees for connectivity were flat--sometimes simply free--and almost all content was shared. Today we have near-monopolies and walled-garden services; the mobile internet is metered and expensive; and copyright is vigorously enforced. From the perspective of this internet pioneer and others, cyberspace has become a much less hospitable place for users and developers alike--a tragedy of the commons.

Such disregard for the commons, if allowed to continue into planetary orbit and beyond, could have tangibly negative consequences. The decisions we make in the sociopolitical, economic, and architectural foundations of Earth's near-space cocoon will directly affect daily life on the surface--from debris falling in populated areas to advertisements that could block our view of the skies. A piece of space junk has already hit a woman in Oklahoma, and an out-of-control Chinese space station caused widespread anxiety before, luckily, falling harmlessly into the Pacific Ocean.

So I think it is extremely important to understand the rules and governance models for space, to mitigate known problems such as space debris, set precedents for the unknown, and manage the race to lunar settlements. We already have the Outer Space Treaty, which governs our efforts and protects our resources in space as a shared commons. The International Space Station is a great example of a coordinated effort by many competing interests to develop standards and work together on a common project that benefits all participants.

However, recent announcements by Vice President Mike Pence of an "America First" agenda for the moon and space fail to acknowledge that the US pursues space exploration and science in deep coordination and interdependence with other countries. As new opportunities emerge for humans to develop economic activities and communities in orbit around the Earth, on asteroids, and beyond, nationalistic actions by the Trump administration could undermine the opportunity to pursue a multi-stakeholder, internationally coordinated approach to designing future human space activities and to ensure that space benefits all humankind.

As space becomes as commercial and pedestrian as the internet, we must not allow the cosmos to become a commercial and governmental free-for-all with disregard for the commons and shared values. In a recent Wall Street Journal article, Media Lab PhD student and director of the Media Lab Space Exploration Initiative[2] Ariel Ekblaw suggested we need a new generation of "space planners" and "space architects" to coordinate such expansive growth while enabling open innovation. Through such communities, we can build the space equivalents of ICANN and the Internet Engineering Task Force, in coordination with international policy and governance guidance from the UN Office for Outer Space Affairs.

[2] Disclosure: I am one of the two principal investigators on this initiative.

I am hopeful that Ariel and a new generation of space architects can learn from our successes and failures in protecting the internet commons and build a better paradigm for space, one that will robustly self-regulate and allow growth and generative creativity while developing strong norms that help us with our environmental and societal issues here on Earth. Already there are positive signs: SpaceX recently decided to fly its satellites at a lower altitude to limit space debris.

Fifty years ago, America "won" the moonshot. Today, we must "win" the Earthshot. The internet connected our world like never before, and as the iconic 1968 Earthrise photo shows, space helps us see our world like never before. Serving as responsible stewards of these crucial commons profoundly expands our circles of awareness. My dear friend Margarita Mora often asks, "What kind of ancestors do we want to be?" I want to be an ancestor who helped make the Anthropocene and the Anthropocosmos periods of history when humans helped the universe flourish with life and prosperity.

Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of Government except all those other forms that have been tried from time to time.
--Winston Churchill

I was on the board of the Internet Corporation for Assigned Names and Numbers (ICANN) from 2004 to 2007. This was a thankless task that I viewed as something like being on jury duty in exchange for being permitted to use the internet, upon which much of my life was built. Maybe people hate ICANN because it seems so bureaucratic, slow, and political, but I will always defend it as the best possible solution to something that is really hard--resolving the problem of allocating names and numbers for the internet when every country and every sector in the world believes it deserves a particular range of IP addresses or the rights to a domain name.

I view the early architecture of the internet as the most successful experiment in decentralized governance. The internet service providers and the people who ran the servers didn't need to know how the whole thing worked; they just needed to make sure that their corner of the internet was working properly and that people's email and packets magically found their way to the right places. Almost everything was decentralized except one piece--the determination of the unique names and numbers that identified every single thing connected to the internet. So it makes sense that this was the hardest piece for the open and decentralized idealists to manage.
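To see why that one piece resists decentralization, here is a toy sketch of DNS-style delegation in Python. The zone names and the address below are hypothetical, purely for illustration--this is not real DNS data or a real resolver. Each operator maintains only its own zone, but the root mapping must be agreed on globally:

```python
# Toy illustration of hierarchical name delegation (hypothetical data).
# Almost everything is decentralized -- each zone operator knows only
# its own table -- but ROOT must be globally unique and agreed upon.
ROOT = {"edu": "edu-servers", "com": "com-servers"}  # the one shared piece

ZONES = {
    "edu-servers": {"mit": "mit-servers"},
    "mit-servers": {"www": "18.9.22.69"},  # hypothetical address in net 18
}

def resolve(name: str) -> str:
    """Walk the labels right to left, delegating one zone at a time."""
    labels = name.split(".")
    server = ROOT[labels.pop()]  # only this lookup needs global agreement
    while labels:
        server = ZONES[server][labels.pop()]
    return server

print(resolve("www.mit.edu"))  # -> 18.9.22.69
```

Everything below the root can be run by independent, mutually ignorant operators; only the top-level table--the thing ICANN stewards--has to be coordinated.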

After Reuters picked up the news on May 20 that ICANN handed over the top-level domain (TLD) .amazon to Jeff Bezos' Amazon.com, pending a 30-day comment period, Twitter and the broader internet erupted in a flurry of conversations criticizing the ICANN process. It brought out all of the usual conspiracy theorists and internet governance pundits, which brought back old memories and reminded me how some things are still the same, even though much on the internet is barely recognizable from the early days. And while it made me cringe and wish that the people of the Amazon basin had gotten control of that TLD, I agree with ICANN's decision. I remembered my time at ICANN and how hard it was to make the right decisions in the face of what, to the public, appeared to be obviously wrong.

Originally, early internet pioneer Jon Postel ran the root servers that managed the names and numbers, and he decided who got what. Generally speaking, the rule was first come, first served--but be reasonable about the names you ask for. As the internet became more important, a move began to design a more formal governance process for managing these resources, involving institutions such as the Berkman Center, where I am a faculty associate. The death of Jon Postel accelerated the process and triggered a somewhat contentious move by the US Commerce Department and others to step in and create ICANN.

ICANN is a multi-stakeholder nonprofit organization originally created under the US Department of Commerce that has since transitioned to become a global multi-stakeholder community. Its complicated organizational structure includes various supporting organizations representing country-level TLD organizations, the public, businesses, governments, the domain name registrars and registries, network security, and so on. These constituencies are represented on the board of directors, which deliberates on and makes many of the key decisions about names and numbers on the internet. One of the keys to ICANN's success was that it wasn't controlled by governments, as the United Nations or the International Telecommunication Union (ITU) are; governments were just part of an advisory function--the Governmental Advisory Committee (GAC). This brought many more voices to the table as peers than traditional intergovernmental organizations allow.

The difficulty of the process is that business and intellectual-property interests believe international trademark law should govern who gets to control domain names, the "At Large" community, which represents users, has other views, and the GAC represents governments with completely different views again on how things should be decided. It's like playing with a Rubik's cube that doesn't actually have a solution.

The important thing was that everyone was in the room when we made decisions and had their say, and the board, which represented all of the various constituencies, would vote and ultimately make decisions after each week-long deliberation session. Everyone walked away feeling that they had been heard and that, in the end, they were somehow committed to the consensus-like process.

When I joined the board, my view was that we should be extremely transparent about the process, stick to our commitments, and focus on good governance, even if some of the decisions made us uncomfortable.

During my tenure, we had two very controversial votes. One was the approval of the .xxx TLD. Some governments, such as Brazil, thought that it would be a kind of "sex pavilion" that would increase pornography on the internet. The US conservative Christian community engaged in a letter-writing campaign to ICANN and to politicians to block the approval. The ICM Registry, the company proposing the domain, suggested that .xxx would allow them to create best practices including preventing copyright infringement and other illegal activity and create a way to enforce responsible adult entertainment.

The domain was first proposed by the ICM Registry in 2000 and resubmitted in 2004. It received a great deal of pushback, and ICM continued to fight for approval. The proposal came up for a vote in 2007, while I was on the board, and was struck down 9 to 5--I voted in the minority, in favor of the proposal, because I didn't feel that we should deviate from our process and allow political pressure to sway us. In 2008, ICM filed an application with the International Centre for Dispute Resolution, and eventually, in 2011, ICANN approved the .xxx generic top-level domain.

In 2005, we approved .cat for Catalan, which also received a great deal of criticism and pushback because the community worried it would be the beginning of a politicization of TLDs by various separatist movements and that ICANN would become the battleground for such disputes. But this concern never really materialized.

Then, on March 10, 2019, the board of ICANN approved the TLD .amazon, against the protests of the Amazon Cooperation Treaty Organization and the governments of South America representing the Amazon Basin. The vote was the result of seven years of deliberations and process, with governments arguing that a company shouldn't get the name of a geographic region and Jeff Bezos' Amazon arguing that it had complied with all of the required processes.

When I first joined MIT, we owned what was called net 18--in other words, any IP address that started with 18. The IP addresses 18.0.0.1 through 18.255.255.254 were all owned by MIT, and you could recognize any MIT computer because its IP address started with 18. MIT, one of the early users of the internet, was allocated a whole "class A" segment of the internet, which adds up to 16,777,214 usable IP addresses--more than most countries. Clearly this wasn't "fair," but it was consistent with the "first come, first served" style of early internet resource allocation. In April 2017, MIT sold 8 million of these addresses to Amazon and broke up our net 18, to the sorrow of many of us who so cherished this privilege and status. This also required us to renumber many things at MIT and turn our network into a much more "normal" one.
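The arithmetic can be checked with Python's standard ipaddress module--a quick sketch of the address math, not anything MIT or Amazon actually ran:

```python
import ipaddress

# MIT's historic "net 18": every IPv4 address beginning with 18 is one /8.
net18 = ipaddress.ip_network("18.0.0.0/8")

# A /8 spans 2**24 = 16,777,216 addresses; the network and broadcast
# addresses are conventionally reserved, leaving 16,777,214 usable hosts.
print(net18.num_addresses)      # 16777216
print(net18.num_addresses - 2)  # 16777214

# Selling half the block amounts to splitting the /8 into two /9s of
# roughly 8.4 million addresses each.
lower, upper = net18.subnets(prefixlen_diff=1)
print(lower, upper)             # 18.0.0.0/9 18.128.0.0/9
print(lower.num_addresses)      # 8388608
```

The "8 million" addresses in the sale correspond to one of those /9 halves.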

Although I shook my fist at Amazon and capitalism when I heard this, in hindsight the elitist notion that MIT should have 16 million IP addresses was also wrong, and Amazon probably needed the addresses more.

So it was with similar ire that I read the tweet that said that Amazon got .amazon. I've been particularly involved in the protection of the rights of indigenous people through my conservation and cultural activities and my first reaction was that, yet again, Western capitalism and colonialism were treading on the rights of the vulnerable.

But then I remembered those hours and hours of deliberation and fighting over .xxx and the crazy arguments about why we couldn't let this happen. I also remember fighting until I was red in the face about how we needed to stick to our principles and our self-declared guidelines and not allow pressure from US politicians and their constituents to sway us.

While I am not close to the ICANN process these days, I can imagine the pressure they must have come under. You can see the foot-dragging and years of struggle just by reading the board resolution approving .amazon.

So while it annoys me, and I wish that .amazon had gone to the people of the Amazon basin, I also feel that ICANN is probably working and doing its job. The job of ICANN is to govern the name space in an open and inclusive process and to steward this process in the best, but never perfect, way possible. And if you really care, we are in that 30-day public comment period, so speak up!

 
News Feeds

Environment
Blog | Carbon Commentary
Carbon Brief
Cassandra's legacy
CleanTechnica
Climate | East Anglia Bylines
Climate and Economy
Climate Change - Medium
Climate Denial Crock of the Week
Collapse 2050
Collapse of Civilization
Collapse of Industrial Civilization
connEVted
DeSmogBlog
Do the Math
Environment + Energy – The Conversation
Environment news, comment and analysis from the Guardian | theguardian.com
George Monbiot | The Guardian
HotWhopper
how to save the world
kevinanderson.info
Latest Items from TreeHugger
Nature Bats Last
Our Finite World
Peak Energy & Resources, Climate Change, and the Preservation of Knowledge
Ration The Future
resilience
The Archdruid Report
The Breakthrough Institute Full Site RSS
THE CLUB OF ROME (www.clubofrome.org)
Watching the World Go Bye

Health
Coronavirus (COVID-19) – UK Health Security Agency
Health & wellbeing | The Guardian
Seeing The Forest for the Trees: Covid Weekly Update

Motorcycles & Bicycles
Bicycle Design
Bike EXIF
Crash.Net British Superbikes Newsfeed
Crash.Net MotoGP Newsfeed
Crash.Net World Superbikes Newsfeed
Cycle EXIF Update
Electric Race News
electricmotorcycles.news
MotoMatters
Planet Japan Blog
Race19
Roadracingworld.com
rohorn
The Bus Stops Here: A Safer Oxford Street for Everyone
WORLDSBK.COM | NEWS

Music
A Strangely Isolated Place
An Idiot's Guide to Dreaming
Blackdown
blissblog
Caught by the River
Drowned In Sound // Feed
Dummy Magazine
Energy Flash
Features and Columns - Pitchfork
GORILLA VS. BEAR
hawgblawg
Headphone Commute
History is made at night
Include Me Out
INVERTED AUDIO
leaving earth
Music For Beings
Musings of a socialist Japanologist
OOUKFunkyOO
PANTHEON
RETROMANIA
ReynoldsRetro
Rouge's Foam
self-titled
Soundspace
THE FANTASTIC HOPE
The Quietus | All Articles
The Wire: News
Uploads by OOUKFunkyOO

News
Engadget RSS Feed
Slashdot
Techdirt.
The Canary
The Intercept
The Next Web
The Register

Weblogs
...and what will be left of them?
32767
A List Apart: The Full Feed
ART WHORE
As Easy As Riding A Bike
Bike Shed Motorcycle Club - Features
Bikini State
BlackPlayer
Boing Boing
booktwo.org
BruceS
Bylines Network Gazette
Charlie's Diary
Chocablog
Cocktails | The Guardian
Cool Tools
Craig Murray
CTC - the national cycling charity
diamond geezer
Doc Searls Weblog
East Anglia Bylines
faces on posters too many choices
Freedom to Tinker
How to Survive the Broligarchy
i b i k e l o n d o n
inessential.com
Innovation Cloud
Interconnected
Island of Terror
IT
Joi Ito's Web
Lauren Weinstein's Blog
Lighthouse
London Cycling Campaign
MAKE
Mondo 2000
mystic bourgeoisie
New Humanist Articles and Posts
No Moods, Ads or Cutesy Fucking Icons (Re-reloaded)
Overweening Generalist
Paleofuture
PUNCH
Putting the life back in science fiction
Radar
RAWIllumination.net
renstravelmusings
Rudy's Blog
Scarfolk Council
Scripting News
Smart Mobs
Spelling Mistakes Cost Lives
Spitalfields Life
Stories by Bruce Sterling on Medium
TechCrunch
Terence Eden's Blog
The Early Days of a Better Nation
the hauntological society
The Long Now Blog
The New Aesthetic
The Public Domain Review
The Spirits
Two-Bit History
up close and personal
wilsonbrothers.co.uk
Wolf in Living Room
xkcd.com